url (string, 62–66 chars) | repository_url (string, 1 value) | labels_url (string, 76–80 chars) | comments_url (string, 71–75 chars) | events_url (string, 69–73 chars) | html_url (string, 50–56 chars) | id (int64, 377M–2.15B) | node_id (string, 18–32 chars) | number (int64, 1–29.2k) | title (string, 1–487 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k–1.71k) | updated_at (int64, 1.54k–1.71k) | closed_at (int64, 1.54k–1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0–234k chars, nullable) | reactions (dict) | timeline_url (string, 71–75 chars) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9026/comments | https://api.github.com/repos/huggingface/transformers/issues/9026/events | https://github.com/huggingface/transformers/issues/9026 | 761,052,718 | MDU6SXNzdWU3NjEwNTI3MTg= | 9,026 | Compatibility scripts | {
"login": "bendboaz",
"id": 28871755,
"node_id": "MDQ6VXNlcjI4ODcxNzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/28871755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bendboaz",
"html_url": "https://github.com/bendboaz",
"followers_url": "https://api.github.com/users/bendboaz/followers",
"following_url": "https://api.github.com/users/bendboaz/following{/other_user}",
"gists_url": "https://api.github.com/users/bendboaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bendboaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bendboaz/subscriptions",
"organizations_url": "https://api.github.com/users/bendboaz/orgs",
"repos_url": "https://api.github.com/users/bendboaz/repos",
"events_url": "https://api.github.com/users/bendboaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/bendboaz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Thank you for your proposal. This, however, sounds like a colossal amount of work for limited gain - we try to keep the breaking changes across versions to a minimum. I understand that these do happen from time to time, but only for very good reasons that are thoroughly discussed beforehand.\r\n\r\nWould you be able to share what breaking changes have impacted you, and have been a bit too hard to overcome/not documented enough, preventing you from upgrading? Understanding this will help us do better in the future.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | # 🚀 Feature request
Scripts that translate code in older transformers versions into equivalent code that is compatible with newer versions.
## Motivation
After talking with several other people in my research groups, compatibility issues and getting stuck with old versions have turned out to be pretty common problems. Seeing as many of the things preventing backward compatibility are syntactic (slightly different interfaces for tokenizers, different file paths) I thought it might be possible to add scripts to the package, which translate code to fit, say, one major version higher up (then if a user wanted to step up multiple versions, they could just run several scripts in sequence).
Some rudimentary usage example:
`python transformers-2.11-3.0.py my_project/*`
## Your contribution
I could try to implement such an example script, but it would probably take me months and result in sub-optimal output, given my limited knowledge of Python parsing and of the interface changes between major and minor versions of transformers.
I did find a [refactoring](https://github.com/python-rope/rope) library for Python, as well as a snippet showing how to [unparse a Python AST](https://svn.python.org/projects/python/trunk/Demo/parser/unparse.py).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9026/timeline | completed | null | null |
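As a rough sketch of the purely syntactic rewrite such a migration script would perform, here is a minimal example built only on Python's standard-library `ast` module (assuming Python 3.9+ for `ast.unparse`); the single rename rule is an illustrative placeholder, not a real 2.x-to-3.x mapping:

```python
import ast

# Hypothetical old-name -> new-name table; a real migration script would ship
# one such table per transformers version bump.
RENAMES = {"max_len": "model_max_length"}

class RenameAttributes(ast.NodeTransformer):
    """Rewrite attribute accesses such as `tokenizer.max_len` to their new names."""

    def visit_Attribute(self, node: ast.Attribute) -> ast.Attribute:
        self.generic_visit(node)
        if node.attr in RENAMES:
            node.attr = RENAMES[node.attr]
        return node

source = "length = tokenizer.max_len"
tree = RenameAttributes().visit(ast.parse(source))
print(ast.unparse(tree))  # length = tokenizer.model_max_length
```

The [rope](https://github.com/python-rope/rope) library linked in the issue takes the same idea much further, with scope-aware renames across whole projects.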
https://api.github.com/repos/huggingface/transformers/issues/9025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9025/comments | https://api.github.com/repos/huggingface/transformers/issues/9025/events | https://github.com/huggingface/transformers/issues/9025 | 761,008,150 | MDU6SXNzdWU3NjEwMDgxNTA= | 9,025 | Untranslation of some words from an external dictionary | {
"login": "Dmitry-Sn",
"id": 43182156,
"node_id": "MDQ6VXNlcjQzMTgyMTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/43182156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dmitry-Sn",
"html_url": "https://github.com/Dmitry-Sn",
"followers_url": "https://api.github.com/users/Dmitry-Sn/followers",
"following_url": "https://api.github.com/users/Dmitry-Sn/following{/other_user}",
"gists_url": "https://api.github.com/users/Dmitry-Sn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dmitry-Sn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dmitry-Sn/subscriptions",
"organizations_url": "https://api.github.com/users/Dmitry-Sn/orgs",
"repos_url": "https://api.github.com/users/Dmitry-Sn/repos",
"events_url": "https://api.github.com/users/Dmitry-Sn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dmitry-Sn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @Dmitry-Sn,\r\n\r\nIt would be great if you could ask these kind of questions on the forum: https://discuss.huggingface.co/ . We try to keep github for issues and less for user-specific use cases. Thanks!",
"> \r\n\r\nHi, have you found some way to solve this problem? I meet the same situation, some proper nouns can' t translated properly, and I want to keep it as its native format.",
"Hi @vpegasus! There were not great ideas, generally.\r\nAs far as I remember, I decided to try to finetune the model for a special token. In my task, it was simple - since I wanted to keep geographical names in the original language, I replaced only them in the training data with a special token (I didn't have time to check the effectiveness, since I left that company).\r\nColleagues also suggested a solution with special characters (it seems to separate the word with <> characters), but it worked poorly.\r\nAll this applies to the models on MarianMT."
] | 1,607 | 1,657 | 1,607 | NONE | null | I use some pre-trained translator models from your library (for example, Helsinki-NLP). During the translation process, I would like to leave some words untranslatable (for example, acronyms, toponyms or names) due to the presence of errors in their translation. I tried adding extra tokens and replacing these words with conditional tokens (<extra_id_0>), but this translation requires raising the num_beams parameter, which significantly slows down the translation. Unfortunately, I can't find any additional mechanisms to perform this task. Is it provided or creating it is a custom task? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9025/timeline | completed | null | null |
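A minimal sketch of the placeholder workaround discussed in this thread, assuming the `Helsinki-NLP/opus-mt-en-de` checkpoint. The marker string is an arbitrary choice and, as the commenters report, the model can still mangle such markers, so this is an illustration rather than a reliable fix:

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Protect terms that must stay untranslated by swapping them for markers.
protected = {"Donetsk": "XXPLACEHOLDERXX"}
text = "The new office opened in Donetsk last year."
for word, marker in protected.items():
    text = text.replace(word, marker)

batch = tokenizer([text], return_tensors="pt")
generated = model.generate(**batch, num_beams=4)  # higher beam count, as the issue notes
translated = tokenizer.decode(generated[0], skip_special_tokens=True)

# Restore the original terms after translation.
for word, marker in protected.items():
    translated = translated.replace(marker, word)
print(translated)
```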
https://api.github.com/repos/huggingface/transformers/issues/9024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9024/comments | https://api.github.com/repos/huggingface/transformers/issues/9024/events | https://github.com/huggingface/transformers/issues/9024 | 760,975,817 | MDU6SXNzdWU3NjA5NzU4MTc= | 9,024 | Use Softmax classifier for run_glue.py example | {
"login": "rhit2020",
"id": 7211805,
"node_id": "MDQ6VXNlcjcyMTE4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7211805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rhit2020",
"html_url": "https://github.com/rhit2020",
"followers_url": "https://api.github.com/users/rhit2020/followers",
"following_url": "https://api.github.com/users/rhit2020/following{/other_user}",
"gists_url": "https://api.github.com/users/rhit2020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rhit2020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rhit2020/subscriptions",
"organizations_url": "https://api.github.com/users/rhit2020/orgs",
"repos_url": "https://api.github.com/users/rhit2020/repos",
"events_url": "https://api.github.com/users/rhit2020/events{/privacy}",
"received_events_url": "https://api.github.com/users/rhit2020/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | Hi,
I want to do binary text classification and I'm adapting the [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py) script to my task. The current model uses a linear classifier and the predictions are not in the range of [0,1]. Could you please guide me on how I could use a softmax classifier instead of the linear classifier?
I added the following code after the model is loaded, but I get an error related to the loss function, pasted below. Any suggestion on how to fix this?
`model.classifier = torch.nn.Softmax(dim=1)`
```
File "run_glue.py", line 300, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/transformers/trainer.py", line 775, in train
tr_loss += self.training_step(model, inputs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1112, in training_step
loss = self.compute_loss(model, inputs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1136, in compute_loss
outputs = model(**inputs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/transformers/modeling_bert.py", line 1377, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 962, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 2468, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 2262, in nll_loss
.format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (768) to match target batch_size (3).
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9024/timeline | completed | null | null |
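The thread was closed as stale, but a minimal sketch of one common resolution is shown below: keep the linear head, so the cross-entropy loss still receives raw logits, and apply softmax only when reading off probabilities:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # shape [1, 2], unbounded real values

probs = torch.softmax(logits, dim=-1)    # shape [1, 2], each row sums to 1
print(probs)
```

The traceback above arises because `nn.Softmax` does not project the 768-dimensional pooled features down to `num_labels` logits, so the loss function receives tensors of the wrong shape.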
https://api.github.com/repos/huggingface/transformers/issues/9023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9023/comments | https://api.github.com/repos/huggingface/transformers/issues/9023/events | https://github.com/huggingface/transformers/issues/9023 | 760,974,526 | MDU6SXNzdWU3NjA5NzQ1MjY= | 9,023 | run_clm.py Issue | MODEL_FOR_CAUSAL_LM_MAPPING is None | {
"login": "marscod",
"id": 16926558,
"node_id": "MDQ6VXNlcjE2OTI2NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/16926558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marscod",
"html_url": "https://github.com/marscod",
"followers_url": "https://api.github.com/users/marscod/followers",
"following_url": "https://api.github.com/users/marscod/following{/other_user}",
"gists_url": "https://api.github.com/users/marscod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marscod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marscod/subscriptions",
"organizations_url": "https://api.github.com/users/marscod/orgs",
"repos_url": "https://api.github.com/users/marscod/repos",
"events_url": "https://api.github.com/users/marscod/events{/privacy}",
"received_events_url": "https://api.github.com/users/marscod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you please provide your environment info as asked in the template? `transformers-cli env`",
"hey, any solution yet?",
"@LysandreJik I am having the same issue and this is my env:\r\n- `transformers` version: 4.1.0.dev0\r\n- Platform: Linux-5.4.0-1029-gcp-x86_64-with-debian-buster-sid\r\n- Python version: 3.7.6\r\n- PyTorch version (GPU?): not installed (NA)\r\n- Tensorflow version (GPU?): 2.3.1 (False)\r\n- Using GPU in script?: NO (using TPU)\r\n- Using distributed or parallel set-up in script?: ",
"I got a similar error trying to run `run_clm.py` on a TPU.\r\n` File \"/kaggle/working/transformers/examples/language-modeling/run_clm.py\", line 33, in <module>\r\n from transformers import (\r\nImportError: cannot import name 'MODEL_FOR_CAUSAL_LM_MAPPING' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)`",
"@Clickative any solution?\r\n",
"It seems you do not have PyTorch installed? `run_clm.py` is a PyTorch script.",
"I encountered this same error and followed the advice from @LysandreJik. I installed PyTorch using `pip3 install torch torchvision` and this resolved the issue.",
"On Kaggle TPU, the current docker seems to have old version of transformers and it reads from the conda environment, so new installs are not taken into account (try `pip show`). Even if docker is set to latest. The GPU docker seem to run a version with 0.9.3 of tokenizers and latest transformers need 0.9.4 which is another issue.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | When I use latest code of [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py), it raises the following issue:
```
python run_clm.py \
--model_name_or_path gpt2 \
--train_file path_to_train_file \
--validation_file path_to_validation_file \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
```
Traceback (most recent call last):
File "run_clm.py", line 51, in <module>
MODEL_CONFIG_CLASSES = list(MODEL_FOR_CAUSAL_LM_MAPPING.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
I checked and noticed that MODEL_FOR_CAUSAL_LM_MAPPING is None. Any suggestion? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9023/timeline | completed | null | null |
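As the later comments point out, the usual cause is that PyTorch is not installed: `run_clm.py` is a PyTorch script, and without torch the mapping imports as `None` (exactly what the issue observes). A minimal sketch of a guard that surfaces this early:

```python
import importlib.util

# run_clm.py is a PyTorch script; fail fast with a clear message if torch is
# absent instead of crashing later on a None mapping.
if importlib.util.find_spec("torch") is None:
    raise SystemExit("PyTorch is not installed; run `pip install torch` first.")

from transformers import MODEL_FOR_CAUSAL_LM_MAPPING

# With torch installed this is a real mapping, not the None dummy object.
assert MODEL_FOR_CAUSAL_LM_MAPPING is not None
print(len(MODEL_FOR_CAUSAL_LM_MAPPING), "causal LM architectures registered")
```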
https://api.github.com/repos/huggingface/transformers/issues/9022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9022/comments | https://api.github.com/repos/huggingface/transformers/issues/9022/events | https://github.com/huggingface/transformers/issues/9022 | 760,925,201 | MDU6SXNzdWU3NjA5MjUyMDE= | 9,022 | About the input of BERT | {
"login": "BeerTai",
"id": 29746659,
"node_id": "MDQ6VXNlcjI5NzQ2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/29746659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BeerTai",
"html_url": "https://github.com/BeerTai",
"followers_url": "https://api.github.com/users/BeerTai/followers",
"following_url": "https://api.github.com/users/BeerTai/following{/other_user}",
"gists_url": "https://api.github.com/users/BeerTai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BeerTai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BeerTai/subscriptions",
"organizations_url": "https://api.github.com/users/BeerTai/orgs",
"repos_url": "https://api.github.com/users/BeerTai/repos",
"events_url": "https://api.github.com/users/BeerTai/events{/privacy}",
"received_events_url": "https://api.github.com/users/BeerTai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @BeerTai, \r\n\r\nIt would be great if you could ask these kind of questions on the forum: https://discuss.huggingface.co/ . We try to keep github for issues and less for user-specific use cases. Thanks!"
] | 1,607 | 1,607 | 1,607 | NONE | null | Hello! I want to maintain two different dictionaries: one is BERT's original dictionary and the other is a custom dictionary, with inputs of the form `[CLS] BERT-dictionary corpus [SEP] custom-dictionary corpus [SEP]`. How do I handle the model's input, and which parts of the source code do I need to change? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9022/timeline | completed | null | null |
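The question went unanswered here, but the `[CLS] A [SEP] B [SEP]` layout itself needs no source changes: passing a text pair to the stock tokenizer produces it, along with `token_type_ids` separating the two segments. A minimal sketch (the two-dictionary part of the question would still require custom vocabulary and embedding changes):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Passing two texts yields [CLS] first [SEP] second [SEP] automatically.
enc = tokenizer("bert dictionary corpus", "custom dictionary corpus")

print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
# ['[CLS]', 'bert', 'dictionary', 'corpus', '[SEP]', 'custom', 'dictionary', 'corpus', '[SEP]']
print(enc["token_type_ids"])  # 0 for the first segment, 1 for the second
```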
https://api.github.com/repos/huggingface/transformers/issues/9021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9021/comments | https://api.github.com/repos/huggingface/transformers/issues/9021/events | https://github.com/huggingface/transformers/issues/9021 | 760,859,624 | MDU6SXNzdWU3NjA4NTk2MjQ= | 9,021 | Error tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base") | {
"login": "trungtruc123",
"id": 42693060,
"node_id": "MDQ6VXNlcjQyNjkzMDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/42693060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trungtruc123",
"html_url": "https://github.com/trungtruc123",
"followers_url": "https://api.github.com/users/trungtruc123/followers",
"following_url": "https://api.github.com/users/trungtruc123/following{/other_user}",
"gists_url": "https://api.github.com/users/trungtruc123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trungtruc123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trungtruc123/subscriptions",
"organizations_url": "https://api.github.com/users/trungtruc123/orgs",
"repos_url": "https://api.github.com/users/trungtruc123/repos",
"events_url": "https://api.github.com/users/trungtruc123/events{/privacy}",
"received_events_url": "https://api.github.com/users/trungtruc123/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @trungtruc123, \r\n\r\ncan you try upgrading your transformer version?\r\nThe following code snippet (as stated on the model card: https://github.com/VinAIResearch/PhoBERT) works perfectly fine for me.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModel, AutoTokenizer\r\n\r\nphobert = AutoModel.from_pretrained(\"vinai/phobert-base\")\r\n\r\n# For transformers v4.x+: \r\ntokenizer = AutoTokenizer.from_pretrained(\"vinai/phobert-base\", use_fast=False)\r\n```\r\n\r\nVersion:\r\n- `transformers` version: 4.1.0.dev0\r\n- Platform: Linux-5.4.0-1030-gcp-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyTorch version (GPU?): 1.8.0.dev20201117 (True)\r\n- Tensorflow version (GPU?): 2.3.1 (False)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n\r\nIt should also work with transformers 4.0.0.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | I get the following error when loading the pretrained PhoBERT tokenizer:
ValueError Traceback (most recent call last)
<ipython-input-75-d17717702336> in <module>()
3
4 phobert = AutoModel.from_pretrained("vinai/phobert-base")
----> 5 tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
6
7 # INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
323 if tokenizer_class is None:
324 raise ValueError(
--> 325 "Tokenizer class {} does not exist or is not currently imported.".format(tokenizer_class_candidate)
326 )
327 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
ValueError: **Tokenizer class PhobertTokenizerFast does not exist or is not currently imported.** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9021/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9020/comments | https://api.github.com/repos/huggingface/transformers/issues/9020/events | https://github.com/huggingface/transformers/pull/9020 | 760,855,008 | MDExOlB1bGxSZXF1ZXN0NTM1NjEyMDU2 | 9,020 | Fix typo in modeling_tf_bart | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Fix typo in `modeling_tf_bart`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@patrickvonplaten @sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9020/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9020",
"html_url": "https://github.com/huggingface/transformers/pull/9020",
"diff_url": "https://github.com/huggingface/transformers/pull/9020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9020.patch",
"merged_at": 1607584973000
} |
https://api.github.com/repos/huggingface/transformers/issues/9019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9019/comments | https://api.github.com/repos/huggingface/transformers/issues/9019/events | https://github.com/huggingface/transformers/issues/9019 | 760,745,213 | MDU6SXNzdWU3NjA3NDUyMTM= | 9,019 | getattr introduces bug when setting booleans with config file | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @rabeehk,\r\n\r\ncould you attach a code snippet that we can copy-paste to reproduce the error? Thanks!",
"Hi Patrick, \r\nI checked and in finetune_trainer.py you consider these parameters only which are all of type float:\r\n` extra_model_params = (\"encoder_layerdrop\", \"decoder_layerdrop\", \"dropout\", \"attention_dropout\") \r\n`\r\nIn this case the issue would not happen, but if one of these parameters were boolean, lets say \"A\", if the user pass a config file like below to `finetune_trainer.py`\r\n\r\n```\r\n//config.json\r\n{\r\nA: false \r\n}\r\n```\r\n\r\nand if the code tries to updating the value of \"A\" in the config file as line 162, then this introduces the bug of not setting A.\r\nFor now since the variables you consider are float, this wont happen, so please feel free to close the bug. Still safer to change 162 with` if hasattr(training_args, p):`\r\nthanks. \r\n\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | Hi,
In finetune_trainer.py, line 162: https://github.com/huggingface/transformers/blob/5e637e6c690e45d13ebf7296e1ea9dcc188d0f07/examples/seq2seq/finetune_trainer.py#L162
If the user calls this script with a JSON config file and sets one of the attributes to false, then on line 162 `getattr(training_args, p, None)` evaluates to `False` and the `if` branch is never taken. This results in a bug when setting booleans. Could you change this line to the following to resolve it:
`if hasattr(training_args, p): `
thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9019/timeline | completed | null | null |
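A minimal sketch of the truthiness pitfall the issue describes; the attribute names are illustrative, not the actual `finetune_trainer.py` parameters:

```python
class TrainingArgs:
    dropout = 0.1
    use_cache = False  # a boolean the user explicitly set to false in the config

args = TrainingArgs()

for p in ("dropout", "use_cache"):
    # Buggy check: False is falsy, so `use_cache` is silently skipped.
    if getattr(args, p, None):
        print("getattr copies:", p)
    # Proposed fix: tests only for existence, so both attributes are handled.
    if hasattr(args, p):
        print("hasattr copies:", p)
```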
https://api.github.com/repos/huggingface/transformers/issues/9018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9018/comments | https://api.github.com/repos/huggingface/transformers/issues/9018/events | https://github.com/huggingface/transformers/pull/9018 | 760,716,083 | MDExOlB1bGxSZXF1ZXN0NTM1NTAxMDMw | 9,018 | Fix PreTrainedTokenizer.pad when first inputs are empty | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
Currently, `PreTrainedTokenizer.pad` errors when the first `input_ids` are empty (because it tries to guess the type of the tokens by looking at the first element). This PR slightly changes the behavior to loop until we find a non-empty list.
Fixes #8674 (not the initial issue but the one mentioned at the end) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9018/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9018",
"html_url": "https://github.com/huggingface/transformers/pull/9018",
"diff_url": "https://github.com/huggingface/transformers/pull/9018.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9018.patch",
"merged_at": 1607700301000
} |
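A standalone sketch of the behavior change this PR describes; the snippet mimics the type-inference step rather than calling the library's actual internals:

```python
batch = [[], [], [101, 2023, 102]]  # the first input_ids are empty

# Before the fix: type inference inspected only batch[0], which is empty.
broken_probe = batch[0]

# After the fix: loop until a non-empty example is found.
probe = next((ids for ids in batch if len(ids) > 0), None)
print(probe)  # [101, 2023, 102] -> a plain Python list, so pad as lists
```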
https://api.github.com/repos/huggingface/transformers/issues/9017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9017/comments | https://api.github.com/repos/huggingface/transformers/issues/9017/events | https://github.com/huggingface/transformers/pull/9017 | 760,705,412 | MDExOlB1bGxSZXF1ZXN0NTM1NDkyMTU4 | 9,017 | Fix documention of book in LayoutLM | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
The documentation of the `bbox` argument in the LayoutLM models has some bad copy-paste errors, this PR fixes that.
Fixes #9016
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9017/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9017",
"html_url": "https://github.com/huggingface/transformers/pull/9017",
"diff_url": "https://github.com/huggingface/transformers/pull/9017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9017.patch",
"merged_at": 1607610530000
} |
https://api.github.com/repos/huggingface/transformers/issues/9016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9016/comments | https://api.github.com/repos/huggingface/transformers/issues/9016/events | https://github.com/huggingface/transformers/issues/9016 | 760,696,249 | MDU6SXNzdWU3NjA2OTYyNDk= | 9,016 | LayoutLM wrong shape for bbox in docs | {
"login": "dscarmo",
"id": 10614968,
"node_id": "MDQ6VXNlcjEwNjE0OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/10614968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dscarmo",
"html_url": "https://github.com/dscarmo",
"followers_url": "https://api.github.com/users/dscarmo/followers",
"following_url": "https://api.github.com/users/dscarmo/following{/other_user}",
"gists_url": "https://api.github.com/users/dscarmo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dscarmo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dscarmo/subscriptions",
"organizations_url": "https://api.github.com/users/dscarmo/orgs",
"repos_url": "https://api.github.com/users/dscarmo/repos",
"events_url": "https://api.github.com/users/dscarmo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dscarmo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sounds right, will fix."
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
(Colab, 09 december 2020, CPU runtime)
- `transformers` version: 4.0.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
documentation: @sgugger
## Information
LayoutLM documentation indicates that the shape for the bbox input is [B, seq_len]:
*(screenshot of the LayoutLM docstring describing the `bbox` argument)*
However, from the code (https://github.com/huggingface/transformers/blob/master/src/transformers/models/layoutlm/modeling_layoutlm.py#L103) bounding boxes are encoded in the form [tl_col, tl_row, br_col, br_row]. Therefore the accepted shape is [B, seq_len, 4].
## To reproduce
Reproduced on this colab: https://colab.research.google.com/drive/1ZRPKlX8-C41nYq3o6QVS68h1JAj1nQMI?usp=sharing
## Expected behavior
Better explanation in docs
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9016/timeline | completed | null | null |
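A minimal sketch of the shape the issue argues for: `(batch_size, sequence_length, 4)`, one `[x0, y0, x1, y1]` box per token with coordinates in the 0 to 1000 range LayoutLM expects. The all-page box used here is just a placeholder value:

```python
import torch
from transformers import LayoutLMModel, LayoutLMTokenizer

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

encoding = tokenizer("Hello world", return_tensors="pt")
seq_len = encoding["input_ids"].shape[1]

# One bounding box per token: shape (1, seq_len, 4), not (1, seq_len).
bbox = torch.tensor([[[0, 0, 1000, 1000]] * seq_len])
outputs = model(**encoding, bbox=bbox)
print(outputs.last_hidden_state.shape)
```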
https://api.github.com/repos/huggingface/transformers/issues/9015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9015/comments | https://api.github.com/repos/huggingface/transformers/issues/9015/events | https://github.com/huggingface/transformers/pull/9015 | 760,686,801 | MDExOlB1bGxSZXF1ZXN0NTM1NDc2Nzc1 | 9,015 | MPNet copyright files | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
MPnet and the copyright PRs were merged around the same time, so MPNet does not have copyright in every files it introduced. This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9015/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9015",
"html_url": "https://github.com/huggingface/transformers/pull/9015",
"diff_url": "https://github.com/huggingface/transformers/pull/9015.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9015.patch",
"merged_at": 1607610578000
} |
https://api.github.com/repos/huggingface/transformers/issues/9014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9014/comments | https://api.github.com/repos/huggingface/transformers/issues/9014/events | https://github.com/huggingface/transformers/pull/9014 | 760,652,467 | MDExOlB1bGxSZXF1ZXN0NTM1NDQ5MDA2 | 9,014 | Enforce all objects in the main init are documented | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
Some objects added by contributors or the team are regularly forgotten. This PR changes the script that inspects whether or not models are documented to encompass all objects in the main init (and adds documentation for multiple forgotten objects). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9014/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9014",
"html_url": "https://github.com/huggingface/transformers/pull/9014",
"diff_url": "https://github.com/huggingface/transformers/pull/9014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9014.patch",
"merged_at": 1607619434000
} |
https://api.github.com/repos/huggingface/transformers/issues/9013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9013/comments | https://api.github.com/repos/huggingface/transformers/issues/9013/events | https://github.com/huggingface/transformers/pull/9013 | 760,639,736 | MDExOlB1bGxSZXF1ZXN0NTM1NDM4NjI3 | 9,013 | [model_cards] Migrate cards from this repo to model repos on huggingface.co | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
},
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@patrickvonplaten, unless we erase them from the history, it won't make git clone any faster.",
"@sgugger it might make a `git checkout` slightly faster. I don't think the model cards were ever an issue in terms of git performance though (8% of number of files in the repo, 7% of total size of the repo)\r\n\r\n",
"https://github.com/github/git-sizer is an awesome tool by the way\r\n\r\n```\r\n$ git-sizer\r\nProcessing blobs: 22486 \r\nProcessing trees: 25585 \r\nProcessing commits: 7749 \r\nMatching commits to trees: 7749 \r\nProcessing annotated tags: 33 \r\nProcessing references: 216 \r\n| Name | Value | Level of concern |\r\n| ---------------------------- | --------- | ------------------------------ |\r\n| Biggest checkouts | | |\r\n| * Maximum path length [1] | 135 B | * |\r\n\r\n[1] 030c0d2cdc80cf8dcf23a6ee55c20e979548a181 (refs/heads/master^{tree})\r\n```\r\n\r\nvs on datasets (cc @lhoestq):\r\n```\r\n$ git-sizer\r\nProcessing blobs: 9449 \r\nProcessing trees: 12905 \r\nProcessing commits: 1245 \r\nMatching commits to trees: 1245 \r\nProcessing annotated tags: 16 \r\nProcessing references: 110 \r\n| Name | Value | Level of concern |\r\n| ---------------------------- | --------- | ------------------------------ |\r\n| Biggest objects | | |\r\n| * Blobs | | |\r\n| * Maximum size [1] | 17.8 MiB | * |\r\n| | | |\r\n| Biggest checkouts | | |\r\n| * Number of directories [2] | 4.89 k | ** |\r\n| * Maximum path depth [3] | 16 | * |\r\n| * Maximum path length [3] | 231 B | ** |\r\n\r\n[1] 3fe8eaab7a337ea2a8d06daa5721fc5935ba3098 (75cafce7677d6f66c49c34e43cfbc425e1f50d30:datasets/anli/dummy/plain_text/0.1.0/dummy_data.zip)\r\n[2] 681c565a3d8f63535823a1d33438c2b76ba3c706 (refs/heads/master^{tree})\r\n[3] c94ea70f34be4ed2723fc1c647340792ba03879c (7cd045237bb77f3b32877d31aae87789ec57ffab^{tree})\r\n```",
"**Update**: I deployed the new buttons/call to actions on the model pages. \r\nI also created a new Forum topic (@sgugger @Pierrci @patrickvonplaten) titled \"Model cards\" where users can suggest edits or creations of existing model cards, in case they don't have write access to the corresponding model repo:\r\n\r\nhttps://discuss.huggingface.co/t/about-the-model-cards-category/2777",
"All out-standing model card PRs were merged. No more model cards PRs expected!\r\n\r\nWill migrate existing ones now."
] | 1,607 | 1,607 | 1,607 | MEMBER | null | Fellow reviewers/contributors, please take a look at the documentation part and let me know your thoughts.
---
#### ⚠️ Still to-do before merging ⚠️
- [x] Post a message on the Forum: https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
- [x] Update the buttons on the model pages
- [x] merge all out-standing model card PRs on the transformers repo
- [x] the actual migration into the hf.co model repos
ETA: I plan on doing this Thursday (Dec 10) or Friday (Dec 11)! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9013/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9013/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9013",
"html_url": "https://github.com/huggingface/transformers/pull/9013",
"diff_url": "https://github.com/huggingface/transformers/pull/9013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9013.patch",
"merged_at": 1607729082000
} |
https://api.github.com/repos/huggingface/transformers/issues/9012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9012/comments | https://api.github.com/repos/huggingface/transformers/issues/9012/events | https://github.com/huggingface/transformers/issues/9012 | 760,593,918 | MDU6SXNzdWU3NjA1OTM5MTg= | 9,012 | "run_mlm_wwm.py", line 284 AttributeError: 'DataTrainingArguments' object has no attribute 'valid_ref_file' | {
"login": "NatLun137",
"id": 66668418,
"node_id": "MDQ6VXNlcjY2NjY4NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/66668418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NatLun137",
"html_url": "https://github.com/NatLun137",
"followers_url": "https://api.github.com/users/NatLun137/followers",
"following_url": "https://api.github.com/users/NatLun137/following{/other_user}",
"gists_url": "https://api.github.com/users/NatLun137/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NatLun137/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NatLun137/subscriptions",
"organizations_url": "https://api.github.com/users/NatLun137/orgs",
"repos_url": "https://api.github.com/users/NatLun137/repos",
"events_url": "https://api.github.com/users/NatLun137/events{/privacy}",
"received_events_url": "https://api.github.com/users/NatLun137/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Do you want to submit a PR?",
"Sure, I'll be happy to do it. I need permission... `remote: Permission to huggingface/transformers.git denied to NatLun137.`",
"Hmmm I think you just tried to push on `huggingface/transformers`? You should fork the repo, apply your changes there and then open a PR here. I see you created your fork already, how did you open a PR then? Did you use the GitHub UI?"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | Hi! There is a tiny typo in the code "transformers/examples/language-modeling/run_mlm_wwm.py" at line 284. It should be:
`if data_args.validation_ref_file is not None:` since at line 103 in `DataTrainingArguments` it is defined as `validation_ref_file:` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9012/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9011/comments | https://api.github.com/repos/huggingface/transformers/issues/9011/events | https://github.com/huggingface/transformers/pull/9011 | 760,536,100 | MDExOlB1bGxSZXF1ZXN0NTM1MzUxNDMy | 9,011 | Create README.md | {
"login": "hailabpucpr",
"id": 55989936,
"node_id": "MDQ6VXNlcjU1OTg5OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/55989936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hailabpucpr",
"html_url": "https://github.com/hailabpucpr",
"followers_url": "https://api.github.com/users/hailabpucpr/followers",
"following_url": "https://api.github.com/users/hailabpucpr/following{/other_user}",
"gists_url": "https://api.github.com/users/hailabpucpr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hailabpucpr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hailabpucpr/subscriptions",
"organizations_url": "https://api.github.com/users/hailabpucpr/orgs",
"repos_url": "https://api.github.com/users/hailabpucpr/repos",
"events_url": "https://api.github.com/users/hailabpucpr/events{/privacy}",
"received_events_url": "https://api.github.com/users/hailabpucpr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9011/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9011",
"html_url": "https://github.com/huggingface/transformers/pull/9011",
"diff_url": "https://github.com/huggingface/transformers/pull/9011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9011.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9010/comments | https://api.github.com/repos/huggingface/transformers/issues/9010/events | https://github.com/huggingface/transformers/pull/9010 | 760,462,634 | MDExOlB1bGxSZXF1ZXN0NTM1MjkwMDQ4 | 9,010 | Reorganize examples | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"another bit - `examples/seq2seq/test_data` is also used by `research_projects/seq2seq` - perhaps symlink? ",
"Maybe a hard copy in that case, just in case the data changes/moves on the examples side.",
"> Maybe a hard copy in that case, just in case the data changes/moves on the examples side.\r\n\r\nThen need to check which specific sub-dirs are needed - if I'm not mistaken it's only `test_data/wmt_en_ro/`\r\n\r\nI'd still use a symlink to avoid git repo bloat and this can always be easily fixed if there is a divergence down the road.",
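The symlink option discussed above can be scripted; a tiny sketch of the idea, with illustrative paths (not taken from the PR itself):

```python
import os

# Link the shared test fixtures instead of duplicating them, so both
# seq2seq folders keep seeing the same wmt_en_ro data without repo bloat.
src = "examples/seq2seq/test_data/wmt_en_ro"
dst = "examples/research_projects/seq2seq/test_data/wmt_en_ro"

os.makedirs(os.path.dirname(dst), exist_ok=True)
if not os.path.exists(dst):
    # Use a relative target so the link survives repo relocation.
    os.symlink(os.path.relpath(src, os.path.dirname(dst)), dst)
```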
"Does this mean we no longer explicitly support pytorch_lightning? ",
"> Does this mean we no longer explicitly support pytorch_lightning?\r\n\r\nI would rather like it if we ditch the custom transformer trainer and just use lightning."
] | 1,607 | 1,609 | 1,607 | COLLABORATOR | null | # What does this PR do?
This PR reorganizes the examples folder by splitting it in two:
- `examples` that stay in this folder are the base example scripts maintained with the state of the library, expected to work on master. We accept PRs on them and will try our best to fix issues.
- `research-projects` are (often) more complex examples that we don't really maintain. They work on a specific version of the library (sometimes even a specific commit). We don't accept PRs on them except minor typo fixes *or* PRs from the original authors who want to bring an update to those scripts. Opening issues for those is probably less efficient than directly contacting the authors.
Each example/research project lives in a folder of its own, with its particular requirements in a `requirements.txt` file (instead of a global requirements file as before).
The seq2seq subfolder is less organized than the others, so I did my best to split its research-project part from its example part. I made sure all tests are passing and duplicated the needed files, but @stas00 and @patil-suraj please tell me if you see something obvious that I missed. We will leave the research-project part as is and clean up the examples part a bit more in later PRs.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9010/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9010/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9010",
"html_url": "https://github.com/huggingface/transformers/pull/9010",
"diff_url": "https://github.com/huggingface/transformers/pull/9010.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9010.patch",
"merged_at": 1607699223000
} |
https://api.github.com/repos/huggingface/transformers/issues/9009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9009/comments | https://api.github.com/repos/huggingface/transformers/issues/9009/events | https://github.com/huggingface/transformers/pull/9009 | 760,415,716 | MDExOlB1bGxSZXF1ZXN0NTM1MjUxMDY5 | 9,009 | fixes #8968 | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | **This is the same PR as: [link](https://github.com/huggingface/transformers/pull/8991#issue-534604029). I was asked to create a new one due to a merge mistake.**
# What does this PR do? (Text of the previous PR)
One of the 3.X releases introduced output objects that replaced the previously returned tuples. This PR updates the transformers notebook to reflect that update.
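For context, a minimal sketch of the kind of change the notebook needs (the checkpoint and inputs here are illustrative, not taken from the notebook itself):

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, world!", return_tensors="pt")
labels = torch.tensor([1])

# Old style: the forward pass returned a plain tuple -> loss, logits = outputs[0], outputs[1]
# New style: with return_dict=True (the default in v4), it returns a ModelOutput object:
outputs = model(**inputs, labels=labels, return_dict=True)
loss, logits = outputs.loss, outputs.logits  # attribute access replaces tuple unpacking
```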
Fixes #8968
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9009/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9009",
"html_url": "https://github.com/huggingface/transformers/pull/9009",
"diff_url": "https://github.com/huggingface/transformers/pull/9009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9009.patch",
"merged_at": 1607527301000
} |
https://api.github.com/repos/huggingface/transformers/issues/9008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9008/comments | https://api.github.com/repos/huggingface/transformers/issues/9008/events | https://github.com/huggingface/transformers/pull/9008 | 760,364,108 | MDExOlB1bGxSZXF1ZXN0NTM1MjA4MzIw | 9,008 | [Docs] Fix some typos for group beam search | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9008/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9008",
"html_url": "https://github.com/huggingface/transformers/pull/9008",
"diff_url": "https://github.com/huggingface/transformers/pull/9008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9008.patch",
"merged_at": 1607523274000
} |
https://api.github.com/repos/huggingface/transformers/issues/9007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9007/comments | https://api.github.com/repos/huggingface/transformers/issues/9007/events | https://github.com/huggingface/transformers/pull/9007 | 760,362,976 | MDExOlB1bGxSZXF1ZXN0NTM1MjA3NDE2 | 9,007 | Fix link to stable version in the doc navbar | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
Currently the link to the stable version in the navigation bar of the docs does not work properly, this PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9007/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9007",
"html_url": "https://github.com/huggingface/transformers/pull/9007",
"diff_url": "https://github.com/huggingface/transformers/pull/9007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9007.patch",
"merged_at": 1607523100000
} |
https://api.github.com/repos/huggingface/transformers/issues/9006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9006/comments | https://api.github.com/repos/huggingface/transformers/issues/9006/events | https://github.com/huggingface/transformers/pull/9006 | 760,344,524 | MDExOlB1bGxSZXF1ZXN0NTM1MTkyMDc0 | 9,006 | Diverse beam search 2 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ayushtiku5 I tweeted about it and tagged you on the tweet: https://twitter.com/PatrickPlaten/status/1336681238485229568 - hope that's fine for you :-) ",
"@ayushtiku5 can you please check if `HammingDiversityLogitsProcessor` and `PrefixConstrainedLogitsProcessor` can be speeded up with functions like `torch.scatter`, `torch.gather`, `torch.masked_fill`, `torch.index_fill`, `torch.index_add`, `torch.index_copy`? I believe there is room for improvement in \r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/generation_logits_process.py#L406-L409 and https://github.com/huggingface/transformers/blob/master/src/transformers/generation_logits_process.py#L471 (in the way like https://github.com/huggingface/transformers/pull/9557 and https://github.com/huggingface/transformers/pull/9600) but I have no experience using these processors to create good enough examples for speed testing and corner cases research."
] | 1,607 | 1,610 | 1,607 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Copy of #8627 because branch got messed up.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9006/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9006",
"html_url": "https://github.com/huggingface/transformers/pull/9006",
"diff_url": "https://github.com/huggingface/transformers/pull/9006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9006.patch",
"merged_at": 1607522437000
} |
https://api.github.com/repos/huggingface/transformers/issues/9005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9005/comments | https://api.github.com/repos/huggingface/transformers/issues/9005/events | https://github.com/huggingface/transformers/pull/9005 | 760,276,539 | MDExOlB1bGxSZXF1ZXN0NTM1MTM1NDE5 | 9,005 | Add the code_search_net datasets tag to CodeBERTa model cards | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
TL;DR
Related to this PR on `huggingface/datasets`: https://github.com/huggingface/datasets/pull/1288
## Who can review?
@julien-c
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9005/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9005",
"html_url": "https://github.com/huggingface/transformers/pull/9005",
"diff_url": "https://github.com/huggingface/transformers/pull/9005.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9005.patch",
"merged_at": 1607524999000
} |
https://api.github.com/repos/huggingface/transformers/issues/9004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9004/comments | https://api.github.com/repos/huggingface/transformers/issues/9004/events | https://github.com/huggingface/transformers/pull/9004 | 760,203,534 | MDExOlB1bGxSZXF1ZXN0NTM1MDc0MjEw | 9,004 | Add MP Net 2 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2669577093,
"node_id": "MDU6TGFiZWwyNjY5NTc3MDkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition",
"name": "PR for Model Addition",
"color": "5319e7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@patrickvonplaten Never mind, just use this PR is ok. I am ok if our work can be merged into the master quickly. ",
"@jplu @LysandreJik @sgugger , we all gave our thumbs-up in the old PR. It's a bit unfortunate that the authorship is slightly changed here, but the PR should be read to merge. ",
"Squashed commit & cherry picked on the `master` branch so that the authorship is kept in df2af6d. Closing.",
"@StillKeepTry thanks a lot for all of your hard work on this PR! Glad to welcome MPNet in the library!",
"Thanks every reviewer for helping review our work. :)"
] | 1,607 | 1,651 | 1,607 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Copy of #8971, which had to be closed because of problems with git history.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9004/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9004",
"html_url": "https://github.com/huggingface/transformers/pull/9004",
"diff_url": "https://github.com/huggingface/transformers/pull/9004.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9004.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9003/comments | https://api.github.com/repos/huggingface/transformers/issues/9003/events | https://github.com/huggingface/transformers/pull/9003 | 760,173,224 | MDExOlB1bGxSZXF1ZXN0NTM1MDQ4ODk5 | 9,003 | Turn attentions and hidden-states into a tensor | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After an offline discussion we decided to proceed differently. Then closing this PR."
] | 1,607 | 1,610 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
This PR turns the `all_attentions` and `all_hidden_states` values into tensors instead of tuples. This change properly enables dict outputs in TF serving, because each value in the output dict must be a TF tensor.
Here is a simple piece of code to reproduce the issue:
```
from transformers import TFBertModel, BertConfig
import tensorflow as tf
config = BertConfig.from_pretrained("bert-base-cased", output_attentions=True)
model = TFBertModel.from_pretrained("bert-base-cased", config=config)
tf.saved_model.save(model, "my_model")
```
Gets the error:
```
ValueError: Got a dictionary containing non-Tensor value (<tf.Tensor 'StatefulPartitionedCall:0' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:1' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:2' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:3' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:4' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:5' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:6' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:7' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:8' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:9' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:10' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:11' shape=(None, 12, None, None) dtype=float32>) for key attentions in the output of the function __inference_serving_15889 used to generate a SavedModel signature. Dictionaries outputs for functions used as signatures should have one Tensor output per string key.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9003/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9003",
"html_url": "https://github.com/huggingface/transformers/pull/9003",
"diff_url": "https://github.com/huggingface/transformers/pull/9003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9003.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9002/comments | https://api.github.com/repos/huggingface/transformers/issues/9002/events | https://github.com/huggingface/transformers/pull/9002 | 760,170,678 | MDExOlB1bGxSZXF1ZXN0NTM1MDQ2Nzgz | 9,002 | Add TFRag | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jplu Thanks so much for your kind reviews! I will improve the code as you suggested.\r\n@patrickvonplaten I have confirmed that my TF and Pytorch have equivalent `generate` output on `num_beam=1` (greedy) on all (15) test cases . \r\n\r\nNevertheless, I just confirmed that the labels in the official test file is based on beam search where `num_beam=4` and this TFRag does not have `beam_search` yet since I am not sure whether I should wait for `tf_generation` refactor . \r\n\r\nIn the current Pytorch test, if explicitly set `num_beam=1` we will get exact same result as my TF implementation.\r\n\r\nSo to pass the current official test completely I will have to adapt `beam_search` into TFRag which I will try :)\r\nFor now, I slightly modify test cases to match Pytorch greedy output (we can revert back later when I finish `beam_search` )",
"This is very much WIP -> need some more time for the `save/from_pretrained()`",
"**UPDATED** Dec, 23 2020 : Finish `TFRagSequenceForGeneration` \r\n\r\n---\r\nSorry that I forgot to mention that the latest updated is still in very WIP (phase-2, after phase-1 which is core-part of `TFRagModel` and `TFRagToken` ), so not ready for review .\r\n\r\nAs discussed with Patrick, there is still an \"output_truncation\" issue (perhaps due to Caching & TFBart refactor) that Patrick will help take a look. \r\n\r\n**We will have to finish in Phase-2** (reason of TFRagToken tests fail at the moment ) :\r\n[ x] - bugs in the new test `test_rag_token_inference_nq_checkpoint() `\r\n[ x] - `beam_search` of `TFRagTokenForGeneration` and \r\n[ x ] - `TFRagSequenceForGeneration` <-- finished!\r\n [ ] - Apply jplu comments to clean up the code\r\n\r\nwhich will take some more time :) :) ",
"Hi @jplu , @patrickvonplaten , regarding the graph issue, I just successfully made a[ colab notebook](https://colab.research.google.com/drive/1s-j9PB9yzrFsL6q5rZUQyf8_Lt6jDAkL?usp=sharing) capable of training `TFRagSequenceForGeneration` in graph mode using `context_input_ids` as inputs (instead of `input_ids` ) ... \r\nHopefully this is a reasonable work around on TFRag training.\r\n\r\nNow `TFRagToken` has the same output_truncation [issue as Pytorch 's 9098](https://github.com/huggingface/transformers/pull/9098) . If Patrick help me solve this, I will be able to finish the beam search ... which should finish all the main parts of TFRag ... \r\n\r\n(After this, the remaining is to clean codes as suggested by you guys, complete all fast tests, and fix the ugly hacks [ ie. pass `test_rag_token_inference_nq_checkpoint` without hacking ] )",
"Hey @ratthachat,\r\n\r\ngreat work! I'll take care of the new (and hopefully last :D) `save_/from_pretrained()` issue today. And I'll also make sure that `TFRagTokenGeneration` works properly for `greedy_search`! I'll leave `beam_search` then for you :-) ",
"Okey, I fixed the `from_pretrained` bug when `from_pt=True` and also fixed `greedy search` for `TFRagToken`. Thanks for the very in-detail error descriptions @ratthachat. \r\n\r\nI think the only thing left to do for now is to correctly implement `beam_search` for `TFRagToken`. As I understood it, you'd like to give it a try. Let me know if you want me to tackle this or if you need help :-)\r\n\r\nI extended your \"load from pt\" test slightly to make sure that weights are now always (I really hope so :D) correctly loaded. Also, I added one test for `greedy_search` for `RagToken`",
"Hi @patrickvonplaten , I agree and all T5 related tests are deleted.\r\n\r\nTests related to TFBart are still not passed which likely due to TFBart bug.\r\nMore precisely, TFBart vs. Torch's Bart forward pass return `generator_enc_last_hidden_state.shape`\r\ndifferently if we provide `decoder_input_ids`\r\n\r\n```\r\ntexts = \"My friends are cool but they eat too many carbs. I really want them to be healthy, so I buy them vegetable.\"\r\ntexts2 = \"My friends are cool.\"\r\ninputs = tokenizer([texts], max_length=1024, return_tensors='tf')\r\ninputs2 = tokenizer([texts2], max_length=1024, return_tensors='tf')\r\n\r\ninput_ids=inputs['input_ids']\r\ninput_ids2=inputs2['input_ids']\r\nout = model(input_ids,decoder_input_ids=None)\r\nprint(out.encoder_last_hidden_state.shape) # RETURN (1, 27, 1024)\r\n\r\nout = model(input_ids,decoder_input_ids=input_ids2)\r\nprint(out.encoder_last_hidden_state.shape) # RETURN (1, 7, 1024)\r\n\r\n```\r\n\r\nIf we run the same snippet in Pytorch, they will both return `(1, 27, 1024)` . (tested on both official released and master)\r\nSo likely this is TFBart bug, **and it makes 4 TFRag fast-tests fail**. \r\n(shape of `out.encoder_last_hidden_state.shape` is not as expected)",
"> Hi @patrickvonplaten , I agree and all T5 related tests are deleted.\r\n> \r\n> Tests related to TFBart are still not passed which likely due to TFBart bug.\r\n> More precisely, TFBart vs. Torch's Bart forward pass return `generator_enc_last_hidden_state.shape`\r\n> differently if we provide `decoder_input_ids`\r\n> \r\n> ```\r\n> texts = \"My friends are cool but they eat too many carbs. I really want them to be healthy, so I buy them vegetable.\"\r\n> texts2 = \"My friends are cool.\"\r\n> inputs = tokenizer([texts], max_length=1024, return_tensors='tf')\r\n> inputs2 = tokenizer([texts2], max_length=1024, return_tensors='tf')\r\n> \r\n> input_ids=inputs['input_ids']\r\n> input_ids2=inputs2['input_ids']\r\n> out = model(input_ids,decoder_input_ids=None)\r\n> print(out.encoder_last_hidden_state.shape) # RETURN (1, 27, 1024)\r\n> \r\n> out = model(input_ids,decoder_input_ids=input_ids2)\r\n> print(out.encoder_last_hidden_state.shape) # RETURN (1, 7, 1024)\r\n> ```\r\n> \r\n> If we run the same snippet in Pytorch, they will both return `(1, 27, 1024)` . (tested on both official released and master)\r\n> So likely this is TFBart bug, **and it makes 4 TFRag fast-tests fail**.\r\n> (shape of `out.encoder_last_hidden_state.shape` is not as expected)\r\n\r\nHey @ratthachat, \r\n\r\n> Hi @patrickvonplaten , I agree and all T5 related tests are deleted.\r\n> \r\n> Tests related to TFBart are still not passed which likely due to TFBart bug.\r\n> More precisely, TFBart vs. Torch's Bart forward pass return `generator_enc_last_hidden_state.shape`\r\n> differently if we provide `decoder_input_ids`\r\n> \r\n> ```\r\n> texts = \"My friends are cool but they eat too many carbs. I really want them to be healthy, so I buy them vegetable.\"\r\n> texts2 = \"My friends are cool.\"\r\n> inputs = tokenizer([texts], max_length=1024, return_tensors='tf')\r\n> inputs2 = tokenizer([texts2], max_length=1024, return_tensors='tf')\r\n> \r\n> input_ids=inputs['input_ids']\r\n> input_ids2=inputs2['input_ids']\r\n> out = model(input_ids,decoder_input_ids=None)\r\n> print(out.encoder_last_hidden_state.shape) # RETURN (1, 27, 1024)\r\n> \r\n> out = model(input_ids,decoder_input_ids=input_ids2)\r\n> print(out.encoder_last_hidden_state.shape) # RETURN (1, 7, 1024)\r\n> ```\r\n> \r\n> If we run the same snippet in Pytorch, they will both return `(1, 27, 1024)` . (tested on both official released and master)\r\n> So likely this is TFBart bug, **and it makes 4 TFRag fast-tests fail**.\r\n> (shape of `out.encoder_last_hidden_state.shape` is not as expected)\r\n\r\nHey @ratthachat,\r\n\r\nsorry to answer so late! I don't think that there is a TFBart bug to be honest. When inputting `decoder_input_ids` the behavior you described above is expected and is the same for PyTorch. 
\r\n\r\nIf you copy & paste the following code on master:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nfrom transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\ntf_model = TFAutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\n\r\ninput_str_short = \"this is a string\"\r\ninput_str_long = \"this is a veeeeeeery veeeeeeeeery long string!!!\"\r\n\r\noutput_shape = model(input_ids=tokenizer(input_str_short, return_tensors=\"pt\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"pt\").input_ids)[0].shape\r\n\r\ntf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids)[0].shape\r\n\r\nassert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n```\r\n\r\nyou'll see that no assertion error is thrown. \r\n\r\nI think you might have to adapt a couple of TFRag tests to make it work. There also might be a small chance that you have to rebase your current PR to master because there is a weird version of Bart in this PR (but I doubt that a bit to be honest). \r\n\r\nPlease let me know if you need help for the tests! Think we are almost finished !!! :-) ",
"Hi @patrickvonplaten , sorry that I did not explain clear enough.\r\nThe test I exactly adapted from Pytorch test the shape of `last_hidden_states` not the shape of `logits` . \r\n\r\nIe. please see\r\nhttps://github.com/ratthachat/transformers/blob/tfrag-draft-new/tests/test_modeling_tf_rag.py#L375\r\n\r\n```\r\nself.assertEqual(\r\n outputs.generator_enc_last_hidden_state.shape,\r\n (n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size),\r\n )\r\n```\r\n\r\nFrom your example change output from `[0] (logits)` to `[2] (last_hidden_states)` , **we indeed got assertion error**. \r\n\r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\ntf_model = TFAutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\n\r\ninput_str_short = \"this is a string\"\r\ninput_str_long = \"this is a veeeeeeery veeeeeeeeery long string!!!\"\r\n\r\noutput_shape = model(input_ids=tokenizer(input_str_short, return_tensors=\"pt\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"pt\").input_ids)[2].shape\r\n\r\ntf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids)[2].shape\r\n\r\nassert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n```\r\n\r\n```\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-17-04a80fc987f7> in <module>()\r\n 14 tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids)[2].shape\r\n 15 \r\n---> 16 assert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n\r\nAssertionError: Output shapes have to be the same\r\n```\r\n\r\nBTW, about the rebase, I really want to do it, but I could not solve the merging conflicts .",
"> Hi @patrickvonplaten , sorry that I did not explain clear enough.\r\n> The test I exactly adapted from Pytorch test the shape of `last_hidden_states` not the shape of `logits` .\r\n> \r\n> Ie. please see\r\n> https://github.com/ratthachat/transformers/blob/tfrag-draft-new/tests/test_modeling_tf_rag.py#L375\r\n> \r\n> ```\r\n> self.assertEqual(\r\n> outputs.generator_enc_last_hidden_state.shape,\r\n> (n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size),\r\n> )\r\n> ```\r\n> \r\n> From your example change output from `[0] (logits)` to `[2] (last_hidden_states)` , **we indeed got assertion error**.\r\n> \r\n> ```\r\n> from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM\r\n> from transformers import AutoTokenizer\r\n> \r\n> model = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n> tf_model = TFAutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\n> \r\n> input_str_short = \"this is a string\"\r\n> input_str_long = \"this is a veeeeeeery veeeeeeeeery long string!!!\"\r\n> \r\n> output_shape = model(input_ids=tokenizer(input_str_short, return_tensors=\"pt\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"pt\").input_ids)[2].shape\r\n> \r\n> tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids)[2].shape\r\n> \r\n> assert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n> ```\r\n> \r\n> ```\r\n> AssertionError Traceback (most recent call last)\r\n> <ipython-input-17-04a80fc987f7> in <module>()\r\n> 14 tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids)[2].shape\r\n> 15 \r\n> ---> 16 assert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n> \r\n> AssertionError: Output shapes have to be the same\r\n> ```\r\n> \r\n> BTW, about the rebase, I really want to do it, but I could not solve the merging conflicts .\r\n\r\nHey @ratthachat, please note that [2] are the `hidden_states` and not the `last_hidden_state`. The last hidden_state is as expected the same.\r\n\r\n```python\r\nfrom transformers import AutoModel, TFAutoModel\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel = AutoModel.from_pretrained(\"facebook/bart-base\")\r\ntf_model = TFAutoModel.from_pretrained(\"facebook/bart-base\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\n\r\ninput_str_short = \"this is a string\"\r\ninput_str_long = \"this is a veeeeeeery veeeeeeeeery long string!!!\"\r\n\r\noutput_shape = model(input_ids=tokenizer(input_str_short, return_tensors=\"pt\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"pt\").input_ids).last_hidden_state.shape\r\n\r\ntf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids).last_hidden_state.shape\r\n\r\nassert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n```",
"I'm not 100% sure if the merge was completely correct, but let's first focus on making all TFRag tests pass. The other test we can fix later :-) ",
"I apologize @patrickvonplaten . I think I am now a bit confused.\r\nIn TFRag, `model.generator` is created using `TFAutoModelForSeq2SeqLM` (the same as Pytorch).\r\nhttps://github.com/ratthachat/transformers/blob/tfrag-draft-new/src/transformers/models/rag/modeling_tf_rag.py#L519\r\nhttps://github.com/ratthachat/transformers/blob/tfrag-draft-new/src/transformers/models/rag/modeling_rag.py#L356\r\n\r\nIn the above example, you used `TFAutoModel`, so it's not the same.\r\nSo if I changed to `TFAutoModelForSeq2SeqLM` and check `encoder_last_hidden_state.shape` , \r\n**[all fast tests, test this `encoder_last_hidden_state.shape` attribute]**\r\nI still got assertion error. \r\n\r\nI am not sure what's going on and I may miss something simple here. I apologize again.\r\n\r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\ntf_model = TFAutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\n\r\ninput_str_short = \"this is a string\"\r\ninput_str_long = \"this is a veeeeeeery veeeeeeeeery long string!!!\"\r\n\r\noutput_shape = model(input_ids=tokenizer(input_str_short, return_tensors=\"pt\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"pt\").input_ids).encoder_last_hidden_state.shape\r\ntf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids).encoder_last_hidden_state.shape\r\n\r\nprint(output_shape, tf_output_shape)\r\n\r\nassert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n```\r\n\r\n```\r\ntorch.Size([1, 6, 768]) (1, 17, 768)\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-4-9790bfa59614> in <module>()\r\n 16 # print(output.keys(), tf_output.keys())\r\n 17 \r\n---> 18 assert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n\r\nAssertionError: Output shapes have to be the same\r\n```",
"Patrick, another issue is that after rebase, there is an error on `load_weight_prefix` which we invented for TFRag's name, so now the basic building block does not work.\r\n\r\n`TypeError: ('Keyword argument not understood:', 'load_weight_prefix') `\r\n",
"> Patrick, another issue is that after rebase, there is an error on `load_weight_prefix` which we invented for TFRag's name, so now the basic building block does not work.\r\n> \r\n> `TypeError: ('Keyword argument not understood:', 'load_weight_prefix') `\r\n\r\nYeah sorry, I made a quick & dirty rebase so there might be errors! It would be awesome if you could fix them (if there are easy to fix)",
"> from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM\r\n> from transformers import AutoTokenizer\r\n> \r\n> model = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n> tf_model = TFAutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\n> \r\n> input_str_short = \"this is a string\"\r\n> input_str_long = \"this is a veeeeeeery veeeeeeeeery long string!!!\"\r\n> \r\n> output_shape = model(input_ids=tokenizer(input_str_short, return_tensors=\"pt\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"pt\").input_ids).encoder_last_hidden_state.shape\r\n> tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids).encoder_last_hidden_state.shape\r\n> \r\n> print(output_shape, tf_output_shape)\r\n> \r\n> assert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n\r\nI see thanks for the very descriptive error description! You're completely right -> that's a bug, great catch! I'll fix",
"> > from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM\r\n> > from transformers import AutoTokenizer\r\n> > model = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n> > tf_model = TFAutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-base\")\r\n> > tokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\n> > input_str_short = \"this is a string\"\r\n> > input_str_long = \"this is a veeeeeeery veeeeeeeeery long string!!!\"\r\n> > output_shape = model(input_ids=tokenizer(input_str_short, return_tensors=\"pt\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"pt\").input_ids).encoder_last_hidden_state.shape\r\n> > tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors=\"tf\").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors=\"tf\").input_ids).encoder_last_hidden_state.shape\r\n> > print(output_shape, tf_output_shape)\r\n> > assert output_shape == tf_output_shape, \"Output shapes have to be the same\"\r\n> \r\n> I see thanks for the very descriptive error description! You're completely right -> that's a bug, great catch! I'll fix\r\n\r\nOk merged it: https://github.com/huggingface/transformers/pull/9944. Could you merge master once again into your PR and see whether the tests work now? :-) \r\n\r\n```bash\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```\r\n\r\nI don' think there will be any merge conflicts. Lemme know if you need any help :-) ",
"Thanks so much Patrick. Tomorrow, I will try my best to fix the \"load_weights_prefix\" issue and will come back ❤️ ",
"@patrickvonplaten I am sorry - bad news. \r\nEven though now I think all tests should be passed, I could not find a way to fix the `load_weights_prefix` issue above arised after conflict fixing 2 days ago.\r\n\r\nIt seems that this `load_weights_prefix` is sent as `kwarg` to all required functions correctly but failed with Keras not allowing this argument .\r\n(I attach the full `TraceBack` below).\r\n\r\nAt first, I thought that this might be due to the recent TF2.4 upgrade, but I tried downgrade back to TF2.3 and still got the same error. Could you please help take a look?\r\n\r\nTo reproduce, simply initiate the model:\r\n```\r\nfrom transformers import RagTokenizer, RagRetriever \r\nfrom transformers.models.rag.modeling_tf_rag import TFRagModel, TFRagSequenceForGeneration, TFRagTokenForGeneration\r\n\r\nPATH = \"facebook/rag-token-nq\"\r\ntokenizer = RagTokenizer.from_pretrained(PATH)\r\nretriever = RagRetriever.from_pretrained(PATH, index_name=\"exact\", use_dummy_dataset=True) \r\n\r\nmodel = TFRagModel.from_pretrained_question_encoder_generator('facebook/dpr-question_encoder-single-nq-base', \"facebook/bart-base\", generator_from_pt=True, question_encoder_from_pt=True, retriever=retriever) \r\n```\r\n\r\nProduced the following TraceBack:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-10-a201b416721e> in <module>()\r\n 1 \r\n----> 2 model = TFRagModel.from_pretrained_question_encoder_generator('facebook/dpr-question_encoder-single-nq-base', \"facebook/bart-base\", generator_from_pt=True, question_encoder_from_pt=True, retriever=retriever)\r\n\r\n8 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/models/rag/modeling_tf_rag.py in from_pretrained_question_encoder_generator(cls, question_encoder_pretrained_model_name_or_path, generator_pretrained_model_name_or_path, retriever, *model_args, **kwargs)\r\n 366 name=\"generator\",\r\n 367 load_weight_prefix=cls.load_weight_prefix,\r\n--> 368 **kwargs_generator,\r\n 369 )\r\n 370 \r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1105 if type(config) in TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING.keys():\r\n 1106 return TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING[type(config)].from_pretrained(\r\n-> 1107 pretrained_model_name_or_path, *model_args, config=config, **kwargs\r\n 1108 )\r\n 1109 raise ValueError(\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1245 \r\n 1246 # Instantiate model.\r\n-> 1247 model = cls(config, *model_args, **model_kwargs)\r\n 1248 \r\n 1249 if from_pt:\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_tf_bart.py in __init__(self, config, load_weight_prefix, *inputs, **kwargs)\r\n 1246 def __init__(self, config, load_weight_prefix=None, *inputs, **kwargs):\r\n 1247 super().__init__(config, *inputs, **kwargs)\r\n-> 1248 self.model = TFBartModel(config, load_weight_prefix=load_weight_prefix, name=\"model\")\r\n 1249 self.use_cache = config.use_cache\r\n 1250 # final_bias_logits is registered as a buffer in pytorch, so not trainable for the the sake of consistency.\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_tf_bart.py in __init__(self, config, *inputs, **kwargs)\r\n 1139 class 
TFBartModel(TFBartPretrainedModel):\r\n 1140 def __init__(self, config: BartConfig, *inputs, **kwargs):\r\n-> 1141 super().__init__(config, *inputs, **kwargs)\r\n 1142 \r\n 1143 self.model = TFBartMainLayer(config, name=\"model\")\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in __init__(self, config, *inputs, **kwargs)\r\n 629 \r\n 630 def __init__(self, config, *inputs, **kwargs):\r\n--> 631 super().__init__(*inputs, **kwargs)\r\n 632 if not isinstance(config, PretrainedConfig):\r\n 633 raise ValueError(\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)\r\n 455 self._self_setattr_tracking = False # pylint: disable=protected-access\r\n 456 try:\r\n--> 457 result = method(self, *args, **kwargs)\r\n 458 finally:\r\n 459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)\r\n 260 # self.non_trainable_weights\r\n 261 generic_utils.validate_kwargs(kwargs, {'trainable', 'dtype', 'dynamic',\r\n--> 262 'name', 'autocast'})\r\n 263 super(Model, self).__init__(**kwargs)\r\n 264 # By default, Model is a subclass model, which is not in graph network.\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)\r\n 776 for kwarg in kwargs:\r\n 777 if kwarg not in allowed_kwargs:\r\n--> 778 raise TypeError(error_message, kwarg)\r\n 779 \r\n 780 \r\n\r\nTypeError: ('Keyword argument not understood:', 'load_weight_prefix')\r\n```",
"> @patrickvonplaten I am sorry - bad news.\r\n> Even though now I think all tests should be passed, I could not find a way to fix the `load_weights_prefix` issue above arised after conflict fixing 2 days ago.\r\n> \r\n> It seems that this `load_weights_prefix` is sent as `kwarg` to all required functions correctly but failed with Keras not allowing this argument .\r\n> (I attach the full `TraceBack` below).\r\n> \r\n> At first, I thought that this might be due to the recent TF2.4 upgrade, but I tried downgrade back to TF2.3 and still got the same error. Could you please help take a look?\r\n> \r\n> To reproduce, simply initiate the model:\r\n> \r\n> ```\r\n> from transformers import RagTokenizer, RagRetriever \r\n> from transformers.models.rag.modeling_tf_rag import TFRagModel, TFRagSequenceForGeneration, TFRagTokenForGeneration\r\n> \r\n> PATH = \"facebook/rag-token-nq\"\r\n> tokenizer = RagTokenizer.from_pretrained(PATH)\r\n> retriever = RagRetriever.from_pretrained(PATH, index_name=\"exact\", use_dummy_dataset=True) \r\n> \r\n> model = TFRagModel.from_pretrained_question_encoder_generator('facebook/dpr-question_encoder-single-nq-base', \"facebook/bart-base\", generator_from_pt=True, question_encoder_from_pt=True, retriever=retriever) \r\n> ```\r\n> \r\n> Produced the following TraceBack:\r\n> \r\n> ```\r\n> ---------------------------------------------------------------------------\r\n> TypeError Traceback (most recent call last)\r\n> <ipython-input-10-a201b416721e> in <module>()\r\n> 1 \r\n> ----> 2 model = TFRagModel.from_pretrained_question_encoder_generator('facebook/dpr-question_encoder-single-nq-base', \"facebook/bart-base\", generator_from_pt=True, question_encoder_from_pt=True, retriever=retriever)\r\n> \r\n> 8 frames\r\n> /usr/local/lib/python3.6/dist-packages/transformers/models/rag/modeling_tf_rag.py in from_pretrained_question_encoder_generator(cls, question_encoder_pretrained_model_name_or_path, generator_pretrained_model_name_or_path, retriever, *model_args, **kwargs)\r\n> 366 name=\"generator\",\r\n> 367 load_weight_prefix=cls.load_weight_prefix,\r\n> --> 368 **kwargs_generator,\r\n> 369 )\r\n> 370 \r\n> \r\n> /usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n> 1105 if type(config) in TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING.keys():\r\n> 1106 return TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING[type(config)].from_pretrained(\r\n> -> 1107 pretrained_model_name_or_path, *model_args, config=config, **kwargs\r\n> 1108 )\r\n> 1109 raise ValueError(\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n> 1245 \r\n> 1246 # Instantiate model.\r\n> -> 1247 model = cls(config, *model_args, **model_kwargs)\r\n> 1248 \r\n> 1249 if from_pt:\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_tf_bart.py in __init__(self, config, load_weight_prefix, *inputs, **kwargs)\r\n> 1246 def __init__(self, config, load_weight_prefix=None, *inputs, **kwargs):\r\n> 1247 super().__init__(config, *inputs, **kwargs)\r\n> -> 1248 self.model = TFBartModel(config, load_weight_prefix=load_weight_prefix, name=\"model\")\r\n> 1249 self.use_cache = config.use_cache\r\n> 1250 # final_bias_logits is registered as a buffer in pytorch, so not trainable for the the sake of consistency.\r\n> \r\n> 
/usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_tf_bart.py in __init__(self, config, *inputs, **kwargs)\r\n> 1139 class TFBartModel(TFBartPretrainedModel):\r\n> 1140 def __init__(self, config: BartConfig, *inputs, **kwargs):\r\n> -> 1141 super().__init__(config, *inputs, **kwargs)\r\n> 1142 \r\n> 1143 self.model = TFBartMainLayer(config, name=\"model\")\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in __init__(self, config, *inputs, **kwargs)\r\n> 629 \r\n> 630 def __init__(self, config, *inputs, **kwargs):\r\n> --> 631 super().__init__(*inputs, **kwargs)\r\n> 632 if not isinstance(config, PretrainedConfig):\r\n> 633 raise ValueError(\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)\r\n> 455 self._self_setattr_tracking = False # pylint: disable=protected-access\r\n> 456 try:\r\n> --> 457 result = method(self, *args, **kwargs)\r\n> 458 finally:\r\n> 459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)\r\n> 260 # self.non_trainable_weights\r\n> 261 generic_utils.validate_kwargs(kwargs, {'trainable', 'dtype', 'dynamic',\r\n> --> 262 'name', 'autocast'})\r\n> 263 super(Model, self).__init__(**kwargs)\r\n> 264 # By default, Model is a subclass model, which is not in graph network.\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)\r\n> 776 for kwarg in kwargs:\r\n> 777 if kwarg not in allowed_kwargs:\r\n> --> 778 raise TypeError(error_message, kwarg)\r\n> 779 \r\n> 780 \r\n> \r\n> TypeError: ('Keyword argument not understood:', 'load_weight_prefix')\r\n> ```\r\n\r\nShould be fine now - can you check? ",
"@patrickvonplaten all fast and slow tests are now pass 😄 \r\nJust that in one slow test : `test_rag_token_generate_batch` , my colab P100 ran out of memory if we used all 15 inputs.\r\nIf I reduce the size of inputs to 5, the test passed. \r\n(any 5 of 15 will passed indicated that all outputs are correct)",
"@ratthachat - you've really done an amazing job here! The PR looks very nice to me overall. \r\nOne thing, I'd like to change before trying to merge is to delete the `_generate_beam_search` and `_generate_no_beam_search` methods and use the default ones instead. I can definitely help you get this done here. Do you know what the differences are in `_generate_beam_search` that you added to `modeling_tf_rag.py` compared to the one in `modeling_tf_utils.py`? Happy to help you here\r\n\r\nApart from that, I only left a couple of nits.",
"Hi Patrick, thanks for all your super kind helps all these time! 😄 ❤️ \r\nI improved docstrings as suggested.\r\n\r\nAbout` _generate_beam_search` and `_generate_no_beam_search`, actually there are exactly **20 lines differences**.\r\nI made a notebook to show **20 lines differences** -- Version1 (`TFRag`) and Version2 (`generation_tf_utils.py`) \r\nhttps://www.kaggle.com/ratthachat/generate-beam-and-no-searches/\r\n (Please clicking version on the top-right and see `diff` )\r\n\r\nMainly, I simply fix both functions to accept `**kwarg` arguments from `TFRag` (in particular `kwargs[\"encoder_outputs\"]` ).\r\nHowever, I did not directly change/PR `generation_tf_utils.py` for two reasons : \r\n\r\n(1) I am not sure if this changes will affect other TF models or not and I don't have enough resources to check them all\r\n(2) As we once discussed, there will be big 'generation refactor' in `generation_tf_utils.py` like Pytorch soon, and it should be a great chance there to fix & test TFRag at that time together with other TF models.\r\n\r\nWhat do you think ?\r\n\r\n",
"All the slow tests passed for TFBart & TFRag -> PR is ready for review. Started running the whole suite of SLOW tests, just to be sure - will report any unexpected behavior here.\r\n\r\n@LysandreJik @sgugger @jplu - it would be great if you can review the PR.",
"> I am very much not in favor of changing the core modeling utils to fit the needs of one model, so really dislike the change related to `from_pretrained`. I understand the problems of scope, but it seems to me that it's only there to be able to write the convenience init `from_encoder_decoder_pretrained` and the like which is not strictly necessary (one can load the encoder and decoder out of the model then instantiate it by passing the encoder and the decoder, it's just three lines of code instead of one).\r\n> \r\n> I would favor this option for now while we look into different solutions for the weight loading.\r\n\r\nI also think that it's not clean at all to introduce this hack to be able to load the pretrained weights correctly, but there is no way around it really. \r\n\r\nIt is not possible to load the encoder and decoder separately and to pass them into init if one wants to save the model afterward. Even with the current hack this is not possible because there is no way of knowing what the correct scope of the overall model is when the submodel is loaded separately. Try this in the branch of this PR:\r\n\r\n```python\r\nfrom transformers import TFBartForConditionalGeneration, TFDPRQuestionEncoder, TFRagModel, RagRetriever\r\n\r\nretriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n\r\nencoder = TFDPRQuestionEncoder.from_pretrained(\"facebook/dpr-question_encoder-single-nq-base\")\r\ngenerator = TFBartForConditionalGeneration.from_pretrained(\"facebook/bart-large-cnn\")\r\n\r\nrag = TFRagModel(question_encoder=encoder, generator=generator, retriever=retriever)\r\n\r\nrag.save_pretrained(\"rag_temp\")\r\n\r\nnew_rag = TFRagModel.from_pretrained(\"rag_temp\", retriever=retriever) # ERROR => the weights are randomly initialized here\r\n```\r\n\r\nThis means that ```from_encoder_decoder_pretrained``` is not just a convenience function, but actually the only way to correctly load two \"sub\" models into a composite model class. Otherwise the weights are not saved correctly and can then not be loaded again.\r\n\r\nAlso one other thing I want to point out here is that the change/hack added to `TFPretrainedModel`'s `from_pretrained` method in `modeling_tf_utils.py` is not just done for RAG but will make composite TF model classes such as `TFEncoderDecoderModel` possible.\r\nAt the same time, I also think it's a big hack that is not beautiful in any way. \r\n\r\nAt the moment, I don't really see another solution here because we cannot just overwrite the `from_pretrained(...)` method for TFRag since the `load_preifx_weight` is passed to BART's and DPR's `from_pretrained`'s method. What we could do instead however is to add a whole new `from_pretrained_with_prefix(....)` function to `modeling_tf_utils.py` instead of changing the existing method. \r\n\r\nWhat do you think? @sgugger @LysandreJik & @jplu ",
"> At the moment, I don't really see another solution here because we cannot just overwrite the from_pretrained(...) method for TFRag since the load_preifx_weight is passed to BART's and DPR's from_pretrained's method. What we could do instead however is to add a whole new from_pretrained_with_prefix(....) function to modeling_tf_utils.py instead of changing the existing method.\r\n\r\nFrom a short term view, I don't see another solution either. We clearly have an issue with the weights naming for Seq2Seq models and the found workaround we had until now (the `tf.compat.v1...`) reaches its limits with RAG as it requires changes where it should not.\r\n\r\nFor me this has to be rethought because I think that all the convert issues have to be handled in the `modeling_tf_pytorch_utils.py` script and not elsewhere, and we should stop to force TF to have the same names than PT but more handle how to convert \"proper\" TF weight names to \"proper\" PT weight names (and the other way around). I think that if we continue without having a better design for this we go to a more complex implementation and then understanding. We should also keep in mind that the day TF removes the V1 compatibility, none of these models will work as expected. Hence, I would accept this as a temporary solution, but we clearly need to review this TF naming part thoroughly.",
"As discussed a bit offline, @LysandreJik will take a final review and then we'll merge the approach of this PR.\r\n\r\nWe should integrate the suggestion from @sgugger which is to change `prefix` to a private function arg `_prefix` to keep the design flexible for future changes.\r\n\r\nOnce @LysandreJik has done the review, I'll do a last refactor & then we can merge I think @ratthachat :-)",
"Thanks so much everyone for the merge! Especially @jplu who gave insightful comments on several earlier versions and @patrickvonplaten who has collaborated and has greatly helped in all aspects of this works!! \r\n(Honestly it's not possible without Patrick's help)\r\n\r\nAbout the single mismatched generated answer (\"step by step\" vs. \"evolution\"), I will investigate this point further. Strangely, in earlier versions all tests are passed meaning all outputs are equivalent."
] | 1,607 | 1,615 | 1,615 | CONTRIBUTOR | null | # What does this PR do?
This is a reopened PR of the TFRag draft version (https://github.com/huggingface/transformers/pull/8892),
which somehow seems broken and not accessible at the moment.
## Things done
- `TFRagModel`
- `TFRagSequenceForGeneration`
- `TFRagTokenForGeneration`
- beam_search generation
- "Work-around" example in graph mode training (The full graph-mode training need of `.numpy()` for retriever calling, and this doesn't work on graph mode) --> using `context_input_ids` in place of `input_ids`
- Complete test on `TFRag`
## Things not yet done ...
- Integrate with T5 as generator <-- In the next PR
## Who can review?
@jplu @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9002/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9002/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9002",
"html_url": "https://github.com/huggingface/transformers/pull/9002",
"diff_url": "https://github.com/huggingface/transformers/pull/9002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9002.patch",
"merged_at": 1615240191000
} |
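A minimal, runnable toy sketch of the `**kwargs` threading discussed in the comments above. This is not the library's `generation_tf_utils.py` code, and `greedy_generate` / `toy_step` are made-up names; it only illustrates the ~20-line idea of letting model-specific kwargs (for TFRag, `kwargs["encoder_outputs"]`) flow through the generation loop instead of assuming a fixed call signature.

```python
import tensorflow as tf

# The generation loop forwards arbitrary model kwargs to the model at every step.
def greedy_generate(step_fn, input_ids, max_length, **model_kwargs):
    cur_ids = input_ids
    while cur_ids.shape[1] < max_length:
        logits = step_fn(cur_ids, **model_kwargs)  # extra kwargs flow through untouched
        next_token = tf.argmax(logits[:, -1, :], axis=-1, output_type=tf.int32)
        cur_ids = tf.concat([cur_ids, next_token[:, None]], axis=-1)
    return cur_ids

# Toy stand-in for a real forward pass that accepts the extra kwarg.
def toy_step(ids, encoder_outputs=None):
    return tf.random.uniform((tf.shape(ids)[0], tf.shape(ids)[1], 10))

print(greedy_generate(toy_step, tf.constant([[0]]), max_length=5, encoder_outputs=None))
```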
https://api.github.com/repos/huggingface/transformers/issues/9001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9001/comments | https://api.github.com/repos/huggingface/transformers/issues/9001/events | https://github.com/huggingface/transformers/issues/9001 | 760,109,317 | MDU6SXNzdWU3NjAxMDkzMTc= | 9,001 | 🌟 CTRLsum | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"I ported this model for easy use in Hugging Face Transformers. Try using the code below!\r\n\r\n### 1. Create models and tokenizers\r\n```python\r\n>> from transformers import AutoModelForSeq2SeqLM, PreTrainedTokenizerFast\r\n\r\n>>> model = AutoModelForSeq2SeqLM.from_pretrained(\"hyunwoongko/ctrlsum-cnndm\")\r\n>>> # model = AutoModelForSeq2SeqLM.from_pretrained(\"hyunwoongko/ctrlsum-arxiv\")\r\n>>> # model = AutoModelForSeq2SeqLM.from_pretrained(\"hyunwoongko/ctrlsum-bigpatent\")\r\n\r\n>>> tokenizer = PreTrainedTokenizerFast.from_pretrained(\"hyunwoongko/ctrlsum-cnndm\")\r\n>>> # tokenizer = PreTrainedTokenizerFast.from_pretrained(\"hyunwoongko/ctrlsum-arxiv\")\r\n>>> # tokenizer = PreTrainedTokenizerFast.from_pretrained(\"hyunwoongko/ctrlsum-bigpatent\")\r\n```\r\n\r\n### 2. Unconditioned summarization\r\n```python\r\n>>> data = tokenizer(\"My name is Kevin. I love dogs. I loved dogs from 1996. Today, I'm going to walk on street with my dogs\", return_tensors=\"pt\")\r\n>>> input_ids, attention_mask = data[\"input_ids\"], data[\"attention_mask\"]\r\n>>> tokenizer.batch_decode(model.generate(input_ids, attention_mask=attention_mask, num_beams=5))[0]\r\n'</s>My name is Kevin. I loved dogs from 1996.</s>'\r\n```\r\n### 3. Conditioned summarization\r\n- You can input condition token using `TOKEN => CONTENTS` structure\r\n```python\r\n>>> data = tokenizer(\"today plan => My name is Kevin. I love dogs. I loved dogs from 1996. Today, I'm going to walk on street with my dogs\", return_tensors=\"pt\")\r\n>>> input_ids, attention_mask = data[\"input_ids\"], data[\"attention_mask\"]\r\n>>> tokenizer.batch_decode(model.generate(input_ids, attention_mask=attention_mask, num_beams=5))[0]\r\n\"</s> Today, I'm going to walk on street with my dogs. I loved dogs from 1996</s>\"\r\n```\r\n\r\n### 4. Prompt summarization\r\n- You can also input `decoder_input_ids` for input prompt.\r\n```python\r\n>>> data = tokenizer(\"Q:What is my name? A: => My name is Kevin. I love dogs. I loved dogs from 1996. Today, I'm going to walk on street with my dogs\", return_tensors=\"pt\")\r\n>>> input_ids, attention_mask = data[\"input_ids\"], data[\"attention_mask\"]\r\n>>> tokenizer.batch_decode(model.generate(input_ids, attention_mask=attention_mask, num_beams=5, decoder_input_ids=tokenizer(\"Q:What is My name? A:\", return_tensors=\"pt\")[\"input_ids\"][:, :-1]))[0]\r\n'<s>Q:What is My name? A: Kevin.</s>'\r\n```"
] | 1,607 | 1,616 | null | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
> Current summarization systems yield generic summaries that are disconnected from users’ preferences and expectations. To address this limitation, we present **CTRLsum**, a novel framework for controllable summarization.
>
> Our approach enables users to control multiple aspects of generated summaries by interacting with the summarization system through textual input in the form of a set of keywords or descriptive prompts.
> Using a single unified model, CTRLsum is able to achieve a broad scope of summary manipulation at inference time without requiring additional human annotations or pre-defining a set of control aspects during training.
> We quantitatively demonstrate the effectiveness of our approach on three domains of summarization datasets and five control aspects:
> 1) entity-centric
> 2) length-controllable summarization
> 3) contribution summarization on scientific papers
> 4) invention purpose summarization on patent filings
> 5) question-guided summarization on news articles in a reading comprehension setting
>
> Moreover, when used in a standard, uncontrolled summarization setting, CTRLsum achieves state-of-the-art results on the CNN/DailyMail dataset.
## Open source status
* [x] the model implementation is available: https://github.com/salesforce/ctrl-sum
* [x] the model weights are available: _Download link available in the README of the repo_
* [x] who are the authors: @jxhe @muggin
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9001/reactions",
"total_count": 8,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/9001/timeline | null | null | null |
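Following up on the ported checkpoints above, here is a batched variant of the conditioned example. It is a sketch that assumes the same `hyunwoongko/ctrlsum-cnndm` checkpoint and that its tokenizer defines a padding token (the model is BART-based); nothing here is the official CTRLsum release code.

```python
from transformers import AutoModelForSeq2SeqLM, PreTrainedTokenizerFast

model = AutoModelForSeq2SeqLM.from_pretrained("hyunwoongko/ctrlsum-cnndm")
tokenizer = PreTrainedTokenizerFast.from_pretrained("hyunwoongko/ctrlsum-cnndm")

# Each input carries its own "KEYWORD => DOCUMENT" control string.
docs = [
    "dogs => My name is Kevin. I love dogs. I loved dogs from 1996.",
    "name => My name is Kevin. I love dogs. I loved dogs from 1996.",
]
batch = tokenizer(docs, return_tensors="pt", padding=True)
summaries = model.generate(
    batch["input_ids"], attention_mask=batch["attention_mask"], num_beams=5
)
print(tokenizer.batch_decode(summaries, skip_special_tokens=True))
```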
https://api.github.com/repos/huggingface/transformers/issues/9000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9000/comments | https://api.github.com/repos/huggingface/transformers/issues/9000/events | https://github.com/huggingface/transformers/issues/9000 | 759,993,382 | MDU6SXNzdWU3NTk5OTMzODI= | 9,000 | ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds | {
"login": "rk0033",
"id": 41638372,
"node_id": "MDQ6VXNlcjQxNjM4Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/41638372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rk0033",
"html_url": "https://github.com/rk0033",
"followers_url": "https://api.github.com/users/rk0033/followers",
"following_url": "https://api.github.com/users/rk0033/following{/other_user}",
"gists_url": "https://api.github.com/users/rk0033/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rk0033/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rk0033/subscriptions",
"organizations_url": "https://api.github.com/users/rk0033/orgs",
"repos_url": "https://api.github.com/users/rk0033/repos",
"events_url": "https://api.github.com/users/rk0033/events{/privacy}",
"received_events_url": "https://api.github.com/users/rk0033/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"analogs to #8923",
"this one is probably safe to close - probably your link was satisfactory, @patrickvonplaten "
] | 1,607 | 1,612 | 1,612 | NONE | null | ## Environment info
- `transformers` version:4.0.0
- Platform:Google Colab
- Python version:3
- Tensorflow version (GPU?):2.3.0
- Using GPU in script?:No
### Who can help
@patrickvonplaten
@patil-suraj
@jplu
## Information
I referred to the URLs below and want to run fine-tuning on mT5. (A minimal fix sketch follows this issue.)
https://huggingface.co/transformers/training.html
https://huggingface.co/transformers/model_doc/mt5.html
Model I am using (mT5):
```
from transformers import MT5Model, T5Tokenizer, TFMT5Model
model = TFMT5Model.from_pretrained("google/mt5-base")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
```
```
from transformers import BertTokenizer, glue_convert_examples_to_features
import tensorflow as tf
import tensorflow_datasets as tfds
data = tfds.load('glue/mrpc')
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
```
```
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
model.fit(train_dataset, epochs=2, steps_per_epoch=115)
```
The output produced:
```
ValueError Traceback (most recent call last)
<ipython-input-9-650f77977ac3> in <module>()
3 loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
4 model.compile(optimizer=optimizer, loss=loss)
----> 5 model.fit(train_dataset, epochs=2, steps_per_epoch=115)
6 # model.fit({"inputs": train_dataset},epochs=2, steps_per_epoch=115)
7 # model.fit(train_dataset)
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/transformers/models/t5/modeling_tf_t5.py:1094 call *
decoder_outputs = self.decoder(
/usr/local/lib/python3.6/dist-packages/transformers/models/t5/modeling_tf_t5.py:642 call *
raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds")
ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9000/timeline | completed | null | null |
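One way around the `ValueError` reported above: a minimal sketch that assumes a text-to-text setup rather than the GLUE classification head used in the report. mT5 is an encoder-decoder, so each training example needs decoder-side tokens, and passing `labels` lets the model derive `decoder_input_ids` internally.

```python
import tensorflow as tf
from transformers import T5Tokenizer, TFMT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
model = TFMT5ForConditionalGeneration.from_pretrained("google/mt5-base")

# Encode MRPC as text-to-text: the target string becomes the decoder labels.
enc = tokenizer(["mrpc sentence1: He ate. sentence2: He had a meal."], return_tensors="tf")
labels = tokenizer(["equivalent"], return_tensors="tf").input_ids

# With `labels` present, the model shifts them right to build decoder_input_ids,
# so the "You have to specify either decoder_inputs..." error no longer triggers.
outputs = model(enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
print(outputs.loss)
```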
https://api.github.com/repos/huggingface/transformers/issues/8999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8999/comments | https://api.github.com/repos/huggingface/transformers/issues/8999/events | https://github.com/huggingface/transformers/issues/8999 | 759,986,489 | MDU6SXNzdWU3NTk5ODY0ODk= | 8,999 | AlbertTokenizer handles special tokens incorrectly | {
"login": "szhengac",
"id": 3960020,
"node_id": "MDQ6VXNlcjM5NjAwMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3960020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szhengac",
"html_url": "https://github.com/szhengac",
"followers_url": "https://api.github.com/users/szhengac/followers",
"following_url": "https://api.github.com/users/szhengac/following{/other_user}",
"gists_url": "https://api.github.com/users/szhengac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szhengac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szhengac/subscriptions",
"organizations_url": "https://api.github.com/users/szhengac/orgs",
"repos_url": "https://api.github.com/users/szhengac/repos",
"events_url": "https://api.github.com/users/szhengac/events{/privacy}",
"received_events_url": "https://api.github.com/users/szhengac/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
},
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
] | [
"I'm taking a look at why this is, it seems that we have a differing behavior between `from_pretrained` and the initialization method. I tried loading that `spiece.model` directly with `from_pretrained` and it behaves normally, and so does pointing to a directory containing solely that file.\r\n\r\nI'm taking a look and will come back to you.",
"@LysandreJik the problem is that `unique_no_split_tokens` is not initialised when you create a tokenizer from `__init__`.\r\n\r\nSee: https://stackoverflow.com/questions/64631665/what-is-the-difference-in-robertatokenizer-and-from-pretrained-way-of-initia/64640570#64640570",
"Indeed, thanks for investigating @cronoik. Do you want to open a PR with a fix?",
"Yes, I can but I would like to discuss this before because it affects the core of the library and all tokenizers (I have only checked the slow tokenizers yet, but it probably applies to the fast tokenizers as well.). \r\n\r\nWhen a user calls `.from_pretrained`, the tokenizer is created with `__init__` in the `._from_pretrained` method of the `PreTrainedTokenizerBase` class ([line 1868](https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/tokenization_utils_base.py#L1868)). The problem is now, that `._from_pretrained` does some magic from line [1881](https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/tokenization_utils_base.py#L1881) to [1909](https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/tokenization_utils_base.py#L1909), that is not executed when you create the tokenizer from `__init__` directly. \r\n\r\nSo, simply said all I need to do is to move this magic to the `__init__` method and remove it from the `._from_pretrained`?\r\n\r\n@LysandreJik ",
"Pinging @thomwolf for advice on tokenizers loading methods.",
"This issue has been stale for 1 month.",
"@LysandreJik Maybe the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html) should be updated to at least tell the people that the recommended way to initialize a tokenizer is `from_pretrained` and that is not guaranteed that `__init__` will work properly?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"It is closed without solving the issue?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Pinging @SaulLu here, as I also encountered this.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,645 | null | NONE | null | If I download the pretrained vocab `https://huggingface.co/albert-base-v1/resolve/main/spiece.model` to the local file system and use the following snippet, the tokenizer does not handle the special tokens properly:
```
tokenizer = AlbertTokenizer('spiece.model')
tokenizer.tokenize('[CLS] Hello World ! [SEP]')
['▁[', 'cl', 's', ']', '▁hello', '▁world', '▁', '!', '▁[', 's', 'ep', ']']
```
If I use `from_pretrained` to load the vocab, it works well:
```
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
tokenizer.tokenize('[CLS] Hello World ! [SEP]')
['[CLS]', '▁hello', '▁world', '▁', '!', '[SEP]']
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8999/timeline | null | null | null |
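A workaround sketch for the behavior above, based on the `unique_no_split_tokens` analysis in the comments. It assumes a local `spiece.model` and a slow tokenizer; `from_pretrained` performs this step for you, which is why only the `__init__` path misbehaves.

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer("spiece.model")
# Mark the special tokens as "no split" manually, which the __init__ path skips.
tokenizer.unique_no_split_tokens = tokenizer.all_special_tokens

print(tokenizer.tokenize("[CLS] Hello World ! [SEP]"))
# expected: ['[CLS]', '▁hello', '▁world', '▁', '!', '[SEP]']
```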
https://api.github.com/repos/huggingface/transformers/issues/8998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8998/comments | https://api.github.com/repos/huggingface/transformers/issues/8998/events | https://github.com/huggingface/transformers/issues/8998 | 759,964,112 | MDU6SXNzdWU3NTk5NjQxMTI= | 8,998 | Marge - Pre-training via Paraphrasing | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"model weights available?",
"No. :( "
] | 1,607 | 1,608 | null | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8998/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8997/comments | https://api.github.com/repos/huggingface/transformers/issues/8997/events | https://github.com/huggingface/transformers/pull/8997 | 759,896,786 | MDExOlB1bGxSZXF1ZXN0NTM0ODE5NzM4 | 8,997 | [wip] [ci] doc-job-skip take #4.5 dry-run via github direct edit | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,651 | 1,607 | CONTRIBUTOR | null | This is take 4.5 on attempting to find a reliable way to get a list of modified files of this PR. It's identical to https://github.com/huggingface/transformers/pull/8980 but this PR was created from github UI direct file edit, so as we can see, it doesn't provide `CIRCLE_PR_NUMBER` - Nothing bad happens, but the check can't be done since we have no information to work with :(
It also happens with PR's made from a non-personal branch, https://github.com/huggingface/transformers/pull/9015
And the result is that the check is completely skipped as it has no data to work with:
https://app.circleci.com/pipelines/github/huggingface/transformers/17118/workflows/48285d78-cb04-4feb-87f8-77cb02ac2593/jobs/134493
Hoping that circlePR will fix that bug.
This PR:
* [x] tests a PR submission from non-personal forked repo
* [x] switches to `head.user.login` for the username to checkout the branch with - using PR username as it's in the master will not work if the branch is coming from a non-forked repo (original that is). (could also use `.head.repo.full_name` for the whole thing at once.)
For now I will let this PR sit for a while and add other fixes if we find more edge cases.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8997/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8997",
"html_url": "https://github.com/huggingface/transformers/pull/8997",
"diff_url": "https://github.com/huggingface/transformers/pull/8997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8997.patch",
"merged_at": null
} |
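One possible mitigation for the missing `CIRCLE_PR_NUMBER` described above. This is a sketch under stated assumptions about CircleCI's documented environment variables (`CIRCLE_PR_NUMBER` is set only for forked-repo PRs, while `CIRCLE_PULL_REQUEST`, the PR's URL, is meant to cover same-repo branches as well); the exact availability of these variables is an assumption worth verifying.

```python
import os

pr_url = os.environ.get("CIRCLE_PULL_REQUEST", "")
# Prefer the explicit number, else take the tail of the PR URL.
pr_number = os.environ.get("CIRCLE_PR_NUMBER") or (pr_url.rsplit("/", 1)[-1] if pr_url else "")
print(pr_number or "no PR metadata available - skipping the doc-only check")
```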
https://api.github.com/repos/huggingface/transformers/issues/8996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8996/comments | https://api.github.com/repos/huggingface/transformers/issues/8996/events | https://github.com/huggingface/transformers/pull/8996 | 759,887,430 | MDExOlB1bGxSZXF1ZXN0NTM0ODEyMjMy | 8,996 | Remove use of deprecated method in Trainer HP search | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
Somehow this one slipped through the cracks and was forgotten when we removed the old deprecated methods. This might warrant a patch release if we don't do a new release soon.
Fixes #8995 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8996/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8996",
"html_url": "https://github.com/huggingface/transformers/pull/8996",
"diff_url": "https://github.com/huggingface/transformers/pull/8996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8996.patch",
"merged_at": 1607523222000
} |
https://api.github.com/repos/huggingface/transformers/issues/8995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8995/comments | https://api.github.com/repos/huggingface/transformers/issues/8995/events | https://github.com/huggingface/transformers/issues/8995 | 759,873,478 | MDU6SXNzdWU3NTk4NzM0Nzg= | 8,995 | AttributeError: 'Trainer' object has no attribute 'is_world_master' | {
"login": "blagav",
"id": 52176045,
"node_id": "MDQ6VXNlcjUyMTc2MDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/52176045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blagav",
"html_url": "https://github.com/blagav",
"followers_url": "https://api.github.com/users/blagav/followers",
"following_url": "https://api.github.com/users/blagav/following{/other_user}",
"gists_url": "https://api.github.com/users/blagav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blagav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blagav/subscriptions",
"organizations_url": "https://api.github.com/users/blagav/orgs",
"repos_url": "https://api.github.com/users/blagav/repos",
"events_url": "https://api.github.com/users/blagav/events{/privacy}",
"received_events_url": "https://api.github.com/users/blagav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"you may replace is_world_master by is_world_process_zero.",
"Any solutions to this? I am not able to use ray tune backend in any way",
"If you get this error still it's probably because you're using mismatched library versions/example versions. Using the latest examples or a more recent version (v4.1.x) should patch this.\r\n\r\nIf it doesn't, then please open a new issue and fill in the issue template so that we may help you. Thank you."
] | 1,607 | 1,611 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-4.15.0-96-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.8
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
- Ray version: 1.0.1.post1
### Who can help
@sgugger — Would you be able to offer any insight?
## Information
Model I am using (Bert, XLNet ...): BertForSequenceClassification
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset:
## To reproduce
The error occurs when I try to run a hyperparameter search for my finetuning step using Ray Tune. I am able to successfully finetune the BertForSequenceClassification model normally — the error only arises when running hyperparameter search.
```
from transformers import BertConfig, BertForSequenceClassification, Trainer

config = BertConfig.from_pretrained(pretrained_model_path, num_labels=num_labels, finetuning_task='text-classification')

# When `model_init` is provided, the Trainer builds the model itself, so no
# `model=` argument is needed (each hyperparameter trial gets a fresh model).
def model_init():
    return BertForSequenceClassification.from_pretrained(pretrained_model_path, config=config)

trainer = Trainer(
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    model_init=model_init
)
trainer.hyperparameter_search(
direction="minimize",
backend="ray",
n_trials=20,
keep_checkpoints_num=1,
resources_per_trial = {'gpu':1, 'cpu':1}
)
```
## Expected behavior
I am trying to run Ray Tune from Huggingface as per these instructions: https://huggingface.co/blog/ray-tune
If anyone has any insight as to what could be causing this error, it would be greatly appreciated, thank you!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8995/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8995/timeline | completed | null | null |
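A small compatibility shim sketch based on the comments above, assuming the goal is to keep older example scripts (or the v4.0.0 Ray integration) running until an upgrade: newer releases renamed `is_world_master` to `is_world_process_zero`, so the old name can be restored at runtime.

```python
from transformers import Trainer

# Restore the removed alias so legacy callers of trainer.is_world_master() keep working.
if not hasattr(Trainer, "is_world_master"):
    Trainer.is_world_master = lambda self: self.is_world_process_zero()
```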
https://api.github.com/repos/huggingface/transformers/issues/8994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8994/comments | https://api.github.com/repos/huggingface/transformers/issues/8994/events | https://github.com/huggingface/transformers/issues/8994 | 759,867,986 | MDU6SXNzdWU3NTk4Njc5ODY= | 8,994 | DistilBert PyTorch to TensorFlow conversion - input sequence length is max 5 tokens for tensorflow | {
"login": "vikul-gupta",
"id": 20052378,
"node_id": "MDQ6VXNlcjIwMDUyMzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/20052378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikul-gupta",
"html_url": "https://github.com/vikul-gupta",
"followers_url": "https://api.github.com/users/vikul-gupta/followers",
"following_url": "https://api.github.com/users/vikul-gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/vikul-gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikul-gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikul-gupta/subscriptions",
"organizations_url": "https://api.github.com/users/vikul-gupta/orgs",
"repos_url": "https://api.github.com/users/vikul-gupta/repos",
"events_url": "https://api.github.com/users/vikul-gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikul-gupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nFor now, as the TF models are implemented right now, this is the normal behavior, if you want to have a different size you have to set it manually yourself before to create the saved model. It is planned to make the size of the sequence length variable when creating a saved model but we don't know when. If you don't know how to do it, I can show you how :)",
"Hi Julien! Thanks for your response. Could you please show me how to manually set that?",
"To do that you can run the following lines:\r\n```\r\nfrom transformers import TFDistilBertModel, DistilBertTokenizer\r\ntf_model = TFDistilBertModel.from_pretrained(model_name, from_pt=True)\r\ntokenizer = DistilBertTokenizer.from_pretrained(model_name)\r\ninputs = tokenizer(\"My test sentence\", padding=\"max_length\", max_length=128, return_tensors=\"tf\")\r\ntf_model._saved_model_inputs_spec = None\r\nmodel._set_save_spec(inputs)\r\ntf.saved_model.save(tf_model, path)\r\n```",
"Thanks!"
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-1028-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
@jplu
## Information
Model I am using (Bert, XLNet ...): DistilBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I load the PyTorch model as a TensorFlow model and then save it to the TensorFlow SavedModel format:
`tf_model = TFDistilBertModel.from_pretrained(model_name, from_pt=True)`
`tf.saved_model.save(tf_model, path)`
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am attempting to take a Sentence Transformer (ST) model I trained in PyTorch and use it in TensorFlow.js. The code above is specifically converting the DistilBert model ST uses to a TensorFlow SavedModel format. Then, I load the SavedModel format into a TFJS Graph Model format and write the pooling layer. When implementing a forward pass in JS, I noticed that the input sequence length must be 5 tokens (instead of 128). I checked the SavedModel format (.pb file) to rule out an issue from TF to TFJS and noticed that the shapes all have 5 where 128 should be.
## To reproduce
Steps to reproduce the behavior:
Run the above code on the DistilBert model we trained (not very reproducible). These are the contents of the folder:
* config.json
* pytorch_model.bin
* sentence_bert_config.json (this file has the max_seq_length=128 parameter - I tried adding it to config.json, but it doesn't work)
* special_tokens_map.json
* tokenizer_config.json
* vocab.txt
This is the output of the script.
2020-12-08 23:32:19.317202: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-12-08 23:32:20.677265: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-12-08 23:32:20.677408: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.678158: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2020-12-08 23:32:20.678182: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-12-08 23:32:20.680072: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-12-08 23:32:20.681791: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-12-08 23:32:20.682130: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-12-08 23:32:20.684002: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-12-08 23:32:20.685065: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-12-08 23:32:20.688448: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-12-08 23:32:20.688575: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.689406: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.690132: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-12-08 23:32:20.690351: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-12-08 23:32:20.713215: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2499995000 Hz
2020-12-08 23:32:20.713451: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5572187ea750 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-12-08 23:32:20.713477: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-12-08 23:32:20.878776: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.879646: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557218b0f200 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-12-08 23:32:20.879675: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5
2020-12-08 23:32:20.879895: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.880628: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2020-12-08 23:32:20.880665: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-12-08 23:32:20.880697: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-12-08 23:32:20.880712: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-12-08 23:32:20.880726: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-12-08 23:32:20.880744: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-12-08 23:32:20.880761: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-12-08 23:32:20.880780: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-12-08 23:32:20.880853: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.881644: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.882347: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-12-08 23:32:20.882390: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-12-08 23:32:21.437267: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-12-08 23:32:21.437314: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-12-08 23:32:21.437328: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-12-08 23:32:21.437554: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:21.438350: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:21.439124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 12367 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5)
2020-12-08 23:32:21.678399: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-12-08 23:32:21.874045: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
All PyTorch model weights were used when initializing TFDistilBertModel.
All the weights of TFDistilBertModel were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertModel for predictions without further training.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafd1a97fd0>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafd0056e10>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafc051ea10>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafc0535690>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafc04cb3d0>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafc04d9f50>, because it is not built.
WARNING:tensorflow:From /home/ubuntu/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /home/ubuntu/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
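Not part of the original report - a minimal hedged sketch of how the serving signature could be pinned to a sequence length of 128 when exporting; the checkpoint name and tensor shapes below are assumptions:
```python
import tensorflow as tf
from transformers import TFDistilBertModel

model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")

# trace an explicit serving function so TF cannot infer a length-5 signature
@tf.function(input_signature=[{
    "input_ids": tf.TensorSpec((None, 128), tf.int32, name="input_ids"),
    "attention_mask": tf.TensorSpec((None, 128), tf.int32, name="attention_mask"),
}])
def serving_fn(inputs):
    return {"last_hidden_state": model(inputs)[0]}

tf.saved_model.save(model, "./saved_model", signatures={"serving_default": serving_fn})
```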
## Expected behavior
The TensorFlow model should have a maximum input sequence length of 128, not 5. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8994/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8993/comments | https://api.github.com/repos/huggingface/transformers/issues/8993/events | https://github.com/huggingface/transformers/pull/8993 | 759,844,228 | MDExOlB1bGxSZXF1ZXN0NTM0Nzc2ODgw | 8,993 | Templates overhaul 1 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | Re-opening of https://github.com/huggingface/transformers/pull/8981 after history was messed up. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8993/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8993",
"html_url": "https://github.com/huggingface/transformers/pull/8993",
"diff_url": "https://github.com/huggingface/transformers/pull/8993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8993.patch",
"merged_at": 1607468407000
} |
https://api.github.com/repos/huggingface/transformers/issues/8992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8992/comments | https://api.github.com/repos/huggingface/transformers/issues/8992/events | https://github.com/huggingface/transformers/pull/8992 | 759,706,223 | MDExOlB1bGxSZXF1ZXN0NTM0NjY0MDU1 | 8,992 | New squad example | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | Reopening from #8924 since the rebase gave too big a diff. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8992/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8992",
"html_url": "https://github.com/huggingface/transformers/pull/8992",
"diff_url": "https://github.com/huggingface/transformers/pull/8992.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8992.patch",
"merged_at": 1607456370000
} |
https://api.github.com/repos/huggingface/transformers/issues/8991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8991/comments | https://api.github.com/repos/huggingface/transformers/issues/8991/events | https://github.com/huggingface/transformers/pull/8991 | 759,632,443 | MDExOlB1bGxSZXF1ZXN0NTM0NjA0MDI5 | 8,991 | fixes #8968 | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The failed circleci test is not related to my PR. :)",
"Hi @cronoik! There's been a mistake done yesterday, and the history of your branch was messed up by mistake (see the file changes, +/-). Do you mind closing this PR and opening another one? No need to do anything on the branch, just closing this one and opening a new one should be enough. Thank you!"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
One of the 3.X releases introduced output objects that replaced the previously returned tuples. This PR updates the transformers notebook to reflect that update; a before/after sketch follows.
Fixes #8968
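For illustration, a hedged sketch of the API change (the checkpoint name is an arbitrary example; on 3.x the objects are opt-in via `return_dict=True`):
```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello world!", return_tensors="pt")

outputs = model(**inputs, return_dict=True)
# before: last_hidden_state = outputs[0]       (plain tuple indexing)
last_hidden_state = outputs.last_hidden_state  # after: named attribute access
```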
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8991/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8991",
"html_url": "https://github.com/huggingface/transformers/pull/8991",
"diff_url": "https://github.com/huggingface/transformers/pull/8991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8991.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8990/comments | https://api.github.com/repos/huggingface/transformers/issues/8990/events | https://github.com/huggingface/transformers/pull/8990 | 759,594,253 | MDExOlB1bGxSZXF1ZXN0NTM0NTcyNjE4 | 8,990 | [Flax] Serialization, Design changes | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think I'll take a Flax 101 class before reviewing this PR.",
"Requires more discussion"
] | 1,607 | 1,607 | 1,607 | MEMBER | null | This PR proposes some changes to the Flax design. @mfuntowicz - I'm trying to get a better understanding of how to best use the Flax library with Transformers' philosophy. Would be super happy about some comments from your side :-)
1) I think Flax's `from_pretrained()` should default to the Flax serialization and not the PyTorch one. Flax's serialization didn't work previously, so the model was loaded from PyTorch by default. This PR changes the default to Flax and makes `from_pretrained()` and `save_pretrained()` work; a usage sketch follows after the list. I uploaded BERT's and RoBERTa's Flax model weights to the model hub (I noticed that I accidentally overwrote an existing Flax `bert-base-cased` - hope that was fine @mfuntowicz - it doesn't break anything on master since PT was loaded by default)
2) Not sure why we have the `model_class` class attribute in Flax - I don't think we need it, do we? @mfuntowicz - It would be nice to avoid it for simplicity, IMO.
3) I added a `FlaxBertPretrainedModel` class, just as it's done for PyTorch. IMO, we should ideally stay as close as possible to the PyTorch design. I'm not sure at all whether something like this could work:
```python
class FlaxBertForMaskedLM(FlaxBertPretrainedModel):
    def __init__(self, config, state, seed, **kwargs):
        self.bert = FlaxBertModel(config, state[self.base_model_prefix], seed, **kwargs)  # pass bert relevant

    @nn.compact
    def __call__(....):
        last_hidden_states = self.bert(hidden_states)[0]
        logits = FlaxBertLMPredictionHead(vocab_size=self.vocab_size, name="mlm", dtype=self.dtype)(last_hidden_states)
```
=> What do you think @mfuntowicz ?
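A hedged usage sketch for point 1) (the checkpoint name is just an example, assuming the serialization changes in this PR):
```python
from transformers import FlaxBertModel

# with this PR, loading and saving should round-trip through the Flax
# serialization by default, with no detour through the PyTorch weights
model = FlaxBertModel.from_pretrained("bert-base-cased")
model.save_pretrained("./flax-bert-base-cased")
reloaded = FlaxBertModel.from_pretrained("./flax-bert-base-cased")
```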
It would be awesome if we could have some Flax library design discussions here @mfuntowicz @LysandreJik @sgugger @thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8990/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8990",
"html_url": "https://github.com/huggingface/transformers/pull/8990",
"diff_url": "https://github.com/huggingface/transformers/pull/8990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8990.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8989/comments | https://api.github.com/repos/huggingface/transformers/issues/8989/events | https://github.com/huggingface/transformers/pull/8989 | 759,586,776 | MDExOlB1bGxSZXF1ZXN0NTM0NTY2NDE2 | 8,989 | Make `ModelOutput` pickle-able | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
To be pickle-able or deep-copyable, `ModelOutput`s need all of their fields to have a default. This was already the case on the TF side; this PR does the same on the PT side. A toy illustration follows.
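A hedged, self-contained illustration of why the defaults matter (this toy class mimics, but is not, the library's actual `ModelOutput`):
```python
import copy
from collections import OrderedDict
from dataclasses import dataclass
from typing import Optional

import torch

# OrderedDict.__reduce__ re-creates the instance by calling the class with
# no arguments before restoring its items, so every dataclass field needs
# a default for deepcopy/pickle to succeed
@dataclass
class ToyOutput(OrderedDict):
    logits: Optional[torch.Tensor] = None

out = ToyOutput(logits=torch.ones(2, 2))
copy.deepcopy(out)  # works; it fails with a TypeError if `logits` has no default
```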
Fixes #8978 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8989",
"html_url": "https://github.com/huggingface/transformers/pull/8989",
"diff_url": "https://github.com/huggingface/transformers/pull/8989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8989.patch",
"merged_at": 1607446780000
} |
https://api.github.com/repos/huggingface/transformers/issues/8988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8988/comments | https://api.github.com/repos/huggingface/transformers/issues/8988/events | https://github.com/huggingface/transformers/pull/8988 | 759,581,512 | MDExOlB1bGxSZXF1ZXN0NTM0NTYyMTA3 | 8,988 | [WIP] Add Tapas (bis) | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Closing this PR and opening a new one on the same branch due to Github issues."
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | This is a clean branch based on #8113 which is up-to-date with master. To do:
- [x] Make sure all tests pass (currently 44 passed, 4 skipped for `test_modeling_tapas.py`) cc @LysandreJik
- [x] `make style` & `make quality`
- [x] Investigating the forward/backward pass => there is a weird issue when fine-tuning an already fine-tuned WTQ checkpoint; I guess people should just not do that
- [x] Add notebooks to show how to use:
- `tapas-base-finetuned-sqa`: https://colab.research.google.com/drive/1zMW-D2kYrpDA-cvpNJ-ctGD-tDXWebZa?usp=sharing
- `tapas-base-finetuned-tabfact`: https://colab.research.google.com/drive/1Ug6gzPFgf3J0dR-0f4spt0eyPS10dD1l?usp=sharing
Once they all pass, I'll start uploading more checkpoints to the model hub.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8988/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8988/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8988",
"html_url": "https://github.com/huggingface/transformers/pull/8988",
"diff_url": "https://github.com/huggingface/transformers/pull/8988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8988.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8987/comments | https://api.github.com/repos/huggingface/transformers/issues/8987/events | https://github.com/huggingface/transformers/pull/8987 | 759,567,692 | MDExOlB1bGxSZXF1ZXN0NTM0NTUxMzMw | 8,987 | Tensor arrays | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,686 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
This PR turns the `all_attentions` and `all_hidden_states` values into tensors instead of tuples. This change is needed to properly support dict outputs in TF Serving, because the value of each key cannot be anything other than a TF tensor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8987/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8987",
"html_url": "https://github.com/huggingface/transformers/pull/8987",
"diff_url": "https://github.com/huggingface/transformers/pull/8987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8987.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8986/comments | https://api.github.com/repos/huggingface/transformers/issues/8986/events | https://github.com/huggingface/transformers/pull/8986 | 759,557,062 | MDExOlB1bGxSZXF1ZXN0NTM0NTQyNTEx | 8,986 | Checking output format + check raises ValueError | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | Just making sure we're not changing the format when we apply `function_to_apply` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8986/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8986",
"html_url": "https://github.com/huggingface/transformers/pull/8986",
"diff_url": "https://github.com/huggingface/transformers/pull/8986.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8986.patch",
"merged_at": 1607448358000
} |
https://api.github.com/repos/huggingface/transformers/issues/8985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8985/comments | https://api.github.com/repos/huggingface/transformers/issues/8985/events | https://github.com/huggingface/transformers/pull/8985 | 759,427,435 | MDExOlB1bGxSZXF1ZXN0NTM0NDM1NDQy | 8,985 | Remove value error | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you elaborate on the use case? It seems dangerous and magical to me. When passing a parameter to a function that is not in the signature, the user gets a `ValueError`.",
"Sure! an EagerTensor doesn't have a `.name` attribute so we assume for that case that the values are given in the parameters order. That's ok because we don't have the choice, but why not having the same behavior in case someone decides to name the tensors as he wishs.\r\n\r\nThis is very picky, and I won't fight at all if not accepted ahah",
"Mmm, but in this test we're not eager tensors since there is a `.name` attribute, or am I missing something?",
"While I was trying to explain this, a use case came to my mind, and indeed this behavior is not correct for an edge use case:\r\n```\r\nfrom transformers import AutoTokenizer, TFBertForSequenceClassification, BertConfig\r\nimport tensorflow as tf\r\nimport datasets\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-cased\", num_labels=6)\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\nds = datasets.load_dataset('emotion')\r\nencoded_train = ds['train'].map(lambda examples: tokenizer(examples['text'], truncation=True, padding='max_length', max_length=128), batched = True)\r\nencoded_train.set_format(type='tensorflow', columns=['input_ids', 'attention_mask', 'label'])\r\nfeatures_train = {x: encoded_train[x].to_tensor(default_value=0, shape=[None, 128]) for x in ['input_ids', 'attention_mask']}\r\ntrain_ds = tf.data.Dataset.from_tensor_slices((features_train, encoded_train[\"label\"])).batch(16)\r\ninput_ids = tf.keras.Input(shape=(128,), dtype='int32', name=\"input_ids\")\r\nattention_mask = tf.keras.Input(shape=(128, ), dtype='int32', name=\"attention_mask\")\r\ntransformer = TFBertForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=6)\r\nencoded = transformer([input_ids, attention_mask])\r\nlogits = encoded[0]\r\nmodel = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = logits)\r\n\r\nmodel.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0), \r\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), \r\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy('accuracy')])\r\nmodel.fit(train_ds, epochs=1, steps_per_epoch=1)\r\n```\r\nWe get:\r\n```\r\nValueError: The tensor named IteratorGetNext:1 does not belong to the authorized list of names ['input_ids', 'attention_mask', 'token_type_ids', 'position_ids', 'head_mask', 'inputs_embeds', 'output_attentions', 'output_hidden_states', 'return_dict', 'labels', 'training'].\r\n```\r\n\r\nWhich is normal because `.fit()` wraps the dataset into an iterator, and then the tensors are renamed accordingly. Thanks @sgugger for asking the question :)",
"Thanks for explaining, I understand better now :-)",
"Ok,just realized it is even worse, the inputs gets an ID, here `IteratorGetNext:1` and `IteratorGetNext:0` but the order of the list is never guaranteed. I'm trying to think to a fix for this.",
"Ok, as long as we are naming the inputs accordingly to the parameters, the order is safe. For example:\r\n```\r\ninput_ids = tf.keras.Input(shape=(128,), dtype='int32', name=\"input_ids\")\r\nattention_mask = tf.keras.Input(shape=(128, ), dtype='int32', name=\"attention_mask\")\r\n\r\nmodel = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = ...)\r\n```\r\n\r\nIs perfectly fine and works as expected, but:\r\n```\r\ninput_ids = tf.keras.Input(shape=(128,), dtype='int32')\r\nattention_mask = tf.keras.Input(shape=(128, ), dtype='int32')\r\n\r\nmodel = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = ...)\r\n```\r\n\r\nBrings an undefined behavior into the order.\r\n\r\nNevertheless, there is still an issue. Let's imagine this case:\r\n```\r\ninput_embeds = tf.keras.Input(shape=(768,), dtype='float32')\r\nattention_mask = tf.keras.Input(shape=(128, ), dtype='int32')\r\n\r\nmodel = tf.keras.models.Model(inputs = [input_embeds, attention_mask], outputs = ...)\r\n```\r\n\r\nWon't work because internally, the `input_ids` parameter will take the value of the `input_embeds` input. This can be solved by integrating the names of each parameter directly inside the model, but we cannot do this because of a bug in TF <= 2.4, and will be solved in the TF 2.5 release. So as long as this release is not out, we cannot fix this, so we have to live with this bug, even though this is an edge use case.\r\n\r\nWhat do you think?",
"I think we should document that this does not work and encourage users to use named inputs then.",
"I have completed the documentation of the `input_processing` function. Does-it sounds enough as explanation for you?",
"LGTM!",
"LGTM! @LysandreJik feel free to merge if the PR gets your approval!"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
This PR updates the behavior of the inputs. We should not raise an error if a name is not among the parameters, but instead act as if there were no name; this is more elegant and less annoying. A sketch of the intended behavior follows.
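A hedged sketch of the behavior this PR targets (the checkpoint name is an arbitrary example):
```python
import tensorflow as tf
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-cased")

# tensors named after the call parameters are matched by name...
input_ids = tf.keras.Input(shape=(128,), dtype="int32", name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype="int32", name="attention_mask")
outputs = model([input_ids, attention_mask])

# ...while tensors whose names are not parameters (e.g. Keras-generated
# names like `IteratorGetNext:0`) now fall back to positional order
# instead of raising a ValueError
```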
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8985/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8985",
"html_url": "https://github.com/huggingface/transformers/pull/8985",
"diff_url": "https://github.com/huggingface/transformers/pull/8985.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8985.patch",
"merged_at": 1607638639000
} |
https://api.github.com/repos/huggingface/transformers/issues/8984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8984/comments | https://api.github.com/repos/huggingface/transformers/issues/8984/events | https://github.com/huggingface/transformers/issues/8984 | 759,335,291 | MDU6SXNzdWU3NTkzMzUyOTE= | 8,984 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! We really cannot help you with this little information. Please respect the issue template with your environment information, the exact command you used to launch the script, and the full stack trace.\r\n\r\nThank you for your understanding.",
"@rabeehk Did you manage to solve this? Experiencing the same issue fine-tuning mT5 on toy data.",
"yes I did managed, the issue was I needed to set a longer max_length for the decoder.",
"\r\nif you want to debug your codes, go to the place where huggingface computes the final metrics, like bleu, ... and there you can check that prediction max length and targets max-length need to match",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | Hi
When I evaluate finetune_trainer.py on translation datasets like wmt16-en-cs, I always get this error after calling the evaluate function. I am using version 3.5.1 of transformers on 1 GPU. This issue is really blocking me, and it happens for all the translation datasets I tried. Could you give me some ideas on this? Thanks.
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0):
Aborted
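Based on the resolution reported in the comments, a hedged sketch of the fix (the model checkpoint and lengths here are placeholders):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

batch = tokenizer(["translate English to Czech: Hello, world!"], return_tensors="pt")
# generate with a max_length at least as long as the tokenized targets, so
# the metric computation never feeds sentencepiece an out-of-range index
generated = model.generate(**batch, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```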
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8984/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8983/comments | https://api.github.com/repos/huggingface/transformers/issues/8983/events | https://github.com/huggingface/transformers/issues/8983 | 759,275,414 | MDU6SXNzdWU3NTkyNzU0MTQ= | 8,983 | BertConfig.id2label use list instead of "int: string" dict | {
"login": "franciszzj",
"id": 16440889,
"node_id": "MDQ6VXNlcjE2NDQwODg5",
"avatar_url": "https://avatars.githubusercontent.com/u/16440889?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/franciszzj",
"html_url": "https://github.com/franciszzj",
"followers_url": "https://api.github.com/users/franciszzj/followers",
"following_url": "https://api.github.com/users/franciszzj/following{/other_user}",
"gists_url": "https://api.github.com/users/franciszzj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/franciszzj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/franciszzj/subscriptions",
"organizations_url": "https://api.github.com/users/franciszzj/orgs",
"repos_url": "https://api.github.com/users/franciszzj/repos",
"events_url": "https://api.github.com/users/franciszzj/events{/privacy}",
"received_events_url": "https://api.github.com/users/franciszzj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | In https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_utils.py#L262, using a list instead of an "int: string" dict may be better.
When we use easydict in place of dict, there are bugs when updating the output of `.to_dict()` into another easydict object, because an int object cannot be a key in easydict; the sketch below illustrates the failure.
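A hedged illustration of the failure (assuming typical `easydict` behavior; the proposed fix follows):
```python
from easydict import EasyDict

id2label = {i: "LABEL_{}".format(i) for i in range(3)}  # int keys, as in the config
# EasyDict turns keys into attributes via setattr, and attribute names must
# be strings, so the int-keyed nested dict raises a TypeError
config = EasyDict({"id2label": id2label})
```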
Solve method:
use ```["LABEL_{}".format(i) for i in range(num_labels)]``` to replace ```self.id2label = {i: "LABEL_{}".format(i) for i in range(num_labels)}```. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8983/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8982/comments | https://api.github.com/repos/huggingface/transformers/issues/8982/events | https://github.com/huggingface/transformers/pull/8982 | 759,178,892 | MDExOlB1bGxSZXF1ZXN0NTM0MjIxNzY4 | 8,982 | [Example] Fix the argument name mismatch in the distillation example | {
"login": "jayparks",
"id": 6487834,
"node_id": "MDQ6VXNlcjY0ODc4MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6487834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayparks",
"html_url": "https://github.com/jayparks",
"followers_url": "https://api.github.com/users/jayparks/followers",
"following_url": "https://api.github.com/users/jayparks/following{/other_user}",
"gists_url": "https://api.github.com/users/jayparks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayparks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayparks/subscriptions",
"organizations_url": "https://api.github.com/users/jayparks/orgs",
"repos_url": "https://api.github.com/users/jayparks/repos",
"events_url": "https://api.github.com/users/jayparks/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayparks/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This past [PR](https://github.com/huggingface/transformers/pull/6315) replaced the argument name `n_gpu` of the distillation example with `gpus`. This causes a crash when running the example, since the rest of the example code still uses the old argument name (`n_gpu`).
This PR solves the issue and gets the distillation example to run; a sketch of the mismatch follows.
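A hedged sketch of the mismatch being fixed (names follow the description above, not the actual example code):
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--gpus", type=int, default=1)  # renamed from --n_gpu in #6315
args = parser.parse_args([])

args.gpus   # the new attribute works
args.n_gpu  # AttributeError: call sites still using the old name crash here
```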
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8982/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8982",
"html_url": "https://github.com/huggingface/transformers/pull/8982",
"diff_url": "https://github.com/huggingface/transformers/pull/8982.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8982.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8981/comments | https://api.github.com/repos/huggingface/transformers/issues/8981/events | https://github.com/huggingface/transformers/pull/8981 | 759,073,061 | MDExOlB1bGxSZXF1ZXN0NTM0MTI4NDMz | 8,981 | Model templates overhaul | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you all for your reviews. Will do the last changes and update from `LMHead` to `CausalLM`.\r\n\r\n@jplu the GA machines are indeed less beefy than the CircleCI ones (and there's a reason for that, we pay CircleCI but not GA).",
"Closing PR and opening it again because the history is messed up."
] | 1,607 | 1,607 | 1,607 | MEMBER | null | This is the first PR to make the model templates better. It improves the templates themselves, as well as the testing tools around them:
- Re-instantiates the tests in the CI, this time as a separate test.
- Respects the library style, and tests it. These tests ensure that the templates have not diverged from the code base, especially through the `# Copied from ...` comments (an example of the convention is shown right after this list).
- Implements a decoder model, with support for cross attentions, to be used in the encoder-decoder framework.
- Implements the same decoder model in TensorFlow
- Implements multiple types of position embeddings, similarly to BERT.
- Tests every new feature.
- Tokenizer separation between slow and fast
- Soft dependency on `cookiecutter`
- Adds easily tweakable integration tests for both PyTorch and TensorFlow
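As an illustration of the `# Copied from ...` convention those style tests enforce (the class and target names here are hypothetical):
```python
from torch import nn

# Copied from transformers.models.bert.modeling_bert.BertSelfOutput with Bert->NewModel
class NewModelSelfOutput(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states
```
Roughly, the check re-generates the body from the referenced class (with the name replacements applied) and fails if the copy has drifted.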
Things left for overhaul 2 & 3:
- General tokenizer improvements; I'm not happy with their current state and they're not tested. I find it surprisingly difficult to have a template for a tokenizer that is general enough, so I'm probably going to try to cover as many use-cases as possible
- Encoder-decoder w/ @patrickvonplaten
Things to improve for this overhaul (1):
- Probably speeding up the model templates test. It's running on github actions right now, and it's quite slow (8 minutes) even though the downloads are cached. Possible options are
- use CircleCI instead
- Cache the whole environment
- Probably others, thinking about it
- The test runs on each commit + when the branch is opened. That's unnecessary. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8981/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8981",
"html_url": "https://github.com/huggingface/transformers/pull/8981",
"diff_url": "https://github.com/huggingface/transformers/pull/8981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8981.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8980/comments | https://api.github.com/repos/huggingface/transformers/issues/8980/events | https://github.com/huggingface/transformers/pull/8980 | 759,063,691 | MDExOlB1bGxSZXF1ZXN0NTM0MTIwNDAz | 8,980 | [wip] [ci] doc-job-skip take #4 dry-run | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"trying to self-heal PR",
"I guess continuing over here: https://github.com/huggingface/transformers/pull/8997",
"ok, succeeded at restoring this PR after force pushed mess up.\r\n\r\nThanks to this recipe: https://gist.github.com/robertpainsi/2c42c15f1ce6dab03a0675348edd4e2c",
"let's please monitor new PRs closely, so that it doesn't somehow break the job while testing things. thank you.",
"> Crazy that even the GitHub API is unreliable on that front.\r\n\r\nI haven't seen it with my own eyes (other than when force pushed history of master was rewritten yesterday), but I found more than one report of it being unreliable on various forums.\r\n\r\nIt's possible that they are referring to the situation when someone force pushes into the PR branch, thus changing the local history which could impact the forking/branching point (== `base.sha`), but github API continues to report the original `base.sha` for awhile - I think based on reports due to caching.\r\n\r\nSo this workaround going into user's branch and derives an up-to-date branching point from their branch - at a cost of needing to clone their forked repo."
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | This is take 4 on attempting to find a reliable way to get a list of modified files of this PR.
Spent the whole day trying many different ideas; none worked. The GitHub API at
`https://api.github.com/repos/${CIRCLE_USERNAME}/${CIRCLE_REPO_NAME}/pulls/${CIRCLE_PR_NUMBER}`
is broken. It gives a bogus `base.sha` at times, e.g. when someone force-pushes into master you end up with a `base.sha` which has nothing to do with the fork. On the GitHub website everything is valid, but the GitHub API gives bogus info.
So after many attempts I give up on trying to get a reliable way via the SHA information provided via github or circleCI.
The only solution that seems to work is to replicate the user's original branch from their fork. The cost is about 3 secs.
To do that, one has to clone the user's repo, switch to their branch, and find the branching point the same way `make fixup` does. This is what the latest incarnation of this PR does; a rough sketch follows.
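Roughly, the approach sketched in Python (the URL and branch are placeholders; the exact git invocation `make fixup` uses may differ):
```python
import subprocess

def run(*cmd):
    return subprocess.check_output(cmd, text=True).strip()

# clone the contributor's fork and check out the PR branch (placeholders)
run("git", "clone", "https://github.com/<user>/transformers", "pr_clone")
run("git", "-C", "pr_clone", "checkout", "<pr-branch>")

# derive the branching point locally instead of trusting the API's base.sha
base = run("git", "-C", "pr_clone", "merge-base", "origin/master", "HEAD")
changed = run("git", "-C", "pr_clone", "diff", "--name-only", base).splitlines()
docs_only = all(f.startswith("docs/") or f.endswith((".md", ".rst")) for f in changed)
```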
This PR doesn't enable the skipping yet; it just reports what it would have done and dumps the list of modified files, so that we can check whether we hit edge cases this incarnation doesn't cover.
Please have a look and see whether it looks safe to merge; then we can monitor it for a while and, if all seems in order, enable the skipping.
This PR will not be able to handle PRs originating from GitHub's direct file edit, as can be seen from https://github.com/huggingface/transformers/pull/8997, because CircleCI fails to pass the PR number to the job in this situation :( The whole skip check is skipped in that case and the job continues normally - we just don't get the saving on direct doc PRs. I'm still trying to see whether CircleCI should be providing this data, since according to https://developer.github.com/webhooks/event-payloads/#pull_request this hook should be sending the PR number to CircleCI.
When I first had the idea, little did I know that this trivial one-liner on the user side (we use it in `make fixup`) would turn out to be such a complicated and unreliable thing on CI.
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8980/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8980/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8980",
"html_url": "https://github.com/huggingface/transformers/pull/8980",
"diff_url": "https://github.com/huggingface/transformers/pull/8980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8980.patch",
"merged_at": 1607546196000
} |
https://api.github.com/repos/huggingface/transformers/issues/8979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8979/comments | https://api.github.com/repos/huggingface/transformers/issues/8979/events | https://github.com/huggingface/transformers/pull/8979 | 759,009,934 | MDExOlB1bGxSZXF1ZXN0NTM0MDc3NDQ1 | 8,979 | [training] SAVE_STATE_WARNING was removed in pytorch | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank God they removed that horrible thing! Instead of the complex parsing, maybe a `try`/`except` would be cleaner?",
" no horrible formatting that way! good idea - done.",
"@stas00, could it be possible to apply your fix to the code of version `3.5.X` and release the next **minor** version (like v3.5.2)?",
"@LysandreJik, your help is needed here. I don't know anything about how old branches maintenance is done. \r\n\r\nThis PR was merged in 4x series and @vyshkant is requesting this fix applied to 3.5.x for the next release.\r\n\r\nThank you.",
"No we won't do fixes for old versions, either upgrade to v4 or use PyTorch < 1.8 if you want to stick to v3.5."
] | 1,607 | 1,623 | 1,607 | CONTRIBUTOR | null | `SAVE_STATE_WARNING` was removed from PyTorch 3 days ago: pytorch/pytorch#46813
I had to add redundant ()'s to avoid a terrible auto-formatter outcome.
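For context, a minimal sketch of the version-tolerant import this converges on per the `try`/`except` suggestion in the comments above (the exact import location and the empty-string fallback are assumptions for illustration):

```python
# Import the symbol where it still exists; fall back on newer PyTorch
# versions that removed it (pytorch/pytorch#46813).
try:
    from torch.optim.lr_scheduler import SAVE_STATE_WARNING  # PyTorch < 1.8
except ImportError:
    SAVE_STATE_WARNING = ""  # assumed harmless placeholder for the sketch
```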
Fixes: #8232
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8979/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8979",
"html_url": "https://github.com/huggingface/transformers/pull/8979",
"diff_url": "https://github.com/huggingface/transformers/pull/8979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8979.patch",
"merged_at": 1607407196000
} |
https://api.github.com/repos/huggingface/transformers/issues/8978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8978/comments | https://api.github.com/repos/huggingface/transformers/issues/8978/events | https://github.com/huggingface/transformers/issues/8978 | 758,996,639 | MDU6SXNzdWU3NTg5OTY2Mzk= | 8,978 | Deepcopy and pickling fails for modeling_outputs | {
"login": "anirudh2290",
"id": 1522319,
"node_id": "MDQ6VXNlcjE1MjIzMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1522319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anirudh2290",
"html_url": "https://github.com/anirudh2290",
"followers_url": "https://api.github.com/users/anirudh2290/followers",
"following_url": "https://api.github.com/users/anirudh2290/following{/other_user}",
"gists_url": "https://api.github.com/users/anirudh2290/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anirudh2290/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anirudh2290/subscriptions",
"organizations_url": "https://api.github.com/users/anirudh2290/orgs",
"repos_url": "https://api.github.com/users/anirudh2290/repos",
"events_url": "https://api.github.com/users/anirudh2290/events{/privacy}",
"received_events_url": "https://api.github.com/users/anirudh2290/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @sgugger, the king of model outputs!",
"That was very quick :) Thank you @sgugger and @LysandreJik !"
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Python version: 3.8
- PyTorch version (GPU?): 1.6.0
- Using GPU in script?: N/A
## To reproduce
Steps to reproduce the behavior:
```
>>> from transformers.modeling_outputs import BaseModelOutput
>>> import torch
>>> import copy
>>> x = BaseModelOutput(last_hidden_state=torch.ones(1,))
>>> z = copy.deepcopy(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/lib/python3.8/copy.py", line 263, in _reconstruct
y = func(*args)
TypeError: __init__() missing 1 required positional argument: 'last_hidden_state'
>>> import pickle
>>> obj = pickle.dumps(x)
>>> obj_loaded = pickle.loads(obj)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() missing 1 required positional argument: 'last_hidden_state'
```
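A workaround that appears to sidestep the failure in the meantime - a sketch, assuming one is free to round-trip the plain field dict rather than the `ModelOutput` subclass itself:

```python
import copy
import pickle

import torch
from transformers.modeling_outputs import BaseModelOutput

x = BaseModelOutput(last_hidden_state=torch.ones(1,))

# Round-trip the plain field dict instead of the ModelOutput subclass,
# then rebuild the output object from the restored fields.
state = dict(x.items())
restored = BaseModelOutput(**pickle.loads(pickle.dumps(state)))
copied = BaseModelOutput(**copy.deepcopy(state))
print(restored.last_hidden_state, copied.last_hidden_state)
```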
## Expected behavior
No failures when using deepcopy or pickle.dumps/pickle.loads | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8978/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8977/comments | https://api.github.com/repos/huggingface/transformers/issues/8977/events | https://github.com/huggingface/transformers/issues/8977 | 758,984,187 | MDU6SXNzdWU3NTg5ODQxODc= | 8,977 | BertForMaskedLM train | {
"login": "juyunsang",
"id": 13113520,
"node_id": "MDQ6VXNlcjEzMTEzNTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/13113520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juyunsang",
"html_url": "https://github.com/juyunsang",
"followers_url": "https://api.github.com/users/juyunsang/followers",
"following_url": "https://api.github.com/users/juyunsang/following{/other_user}",
"gists_url": "https://api.github.com/users/juyunsang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juyunsang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juyunsang/subscriptions",
"organizations_url": "https://api.github.com/users/juyunsang/orgs",
"repos_url": "https://api.github.com/users/juyunsang/repos",
"events_url": "https://api.github.com/users/juyunsang/events{/privacy}",
"received_events_url": "https://api.github.com/users/juyunsang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,607 | 1,607 | 1,607 | NONE | null | I have a question
When training with BertForMaskedLM, is the training data below correct?
- token2idx
```
<pad> : 0, <mask>: 1, <cls>:2, <sep>:3
```
- max len : 8
- input token
```
<cls> hello i <mask> cats <sep>
```
- input ids
```
[2, 34,45,1,56,3,0,0]
```
- attention_mask
```
[1,1,1,1,1,1,0,0]
```
- labels
```
[-100,-100,-100,64,-100,-100,-100,-100]
```
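For reference, a sketch of how the labels above can be constructed (assuming `-100` is the loss-ignore index, as in PyTorch's `CrossEntropyLoss(ignore_index=-100)`; `original_ids` and the id `64` are hypothetical values for the pre-masking sequence):

```python
PAD, MASK = 0, 1  # token2idx from the example above

input_ids = [2, 34, 45, 1, 56, 3, 0, 0]
original_ids = [2, 34, 45, 64, 56, 3, 0, 0]  # hypothetical ids before masking

attention_mask = [0 if t == PAD else 1 for t in input_ids]
# Only masked positions keep their true id; every other position gets -100.
labels = [orig if t == MASK else -100 for t, orig in zip(input_ids, original_ids)]

print(attention_mask)  # [1, 1, 1, 1, 1, 1, 0, 0]
print(labels)          # [-100, -100, -100, 64, -100, -100, -100, -100]
```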
I wonder if I should also assign -100 to the labels for padding tokens. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8977/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8976/comments | https://api.github.com/repos/huggingface/transformers/issues/8976/events | https://github.com/huggingface/transformers/pull/8976 | 758,917,103 | MDExOlB1bGxSZXF1ZXN0NTM0MDAyNzIz | 8,976 | Check table as independent script | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | Separated the table check from `check_copies.py`, since for the template I need to manage the copies without managing the table. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8976/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8976",
"html_url": "https://github.com/huggingface/transformers/pull/8976",
"diff_url": "https://github.com/huggingface/transformers/pull/8976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8976.patch",
"merged_at": 1607388913000
} |
https://api.github.com/repos/huggingface/transformers/issues/8975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8975/comments | https://api.github.com/repos/huggingface/transformers/issues/8975/events | https://github.com/huggingface/transformers/pull/8975 | 758,913,720 | MDExOlB1bGxSZXF1ZXN0NTMzOTk5OTM2 | 8,975 | Update quicktour docs to showcase the use of truncation | {
"login": "navjotts",
"id": 8072161,
"node_id": "MDQ6VXNlcjgwNzIxNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8072161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navjotts",
"html_url": "https://github.com/navjotts",
"followers_url": "https://api.github.com/users/navjotts/followers",
"following_url": "https://api.github.com/users/navjotts/following{/other_user}",
"gists_url": "https://api.github.com/users/navjotts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/navjotts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navjotts/subscriptions",
"organizations_url": "https://api.github.com/users/navjotts/orgs",
"repos_url": "https://api.github.com/users/navjotts/repos",
"events_url": "https://api.github.com/users/navjotts/events{/privacy}",
"received_events_url": "https://api.github.com/users/navjotts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Currently, running the tokenizer batch example on https://huggingface.co/transformers/quicktour.html gives an error
```
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
```
This PR fixes the above by passing the `max_length` param explicitly (instead of depending on it having a default, which might not be the case for all models).
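Concretely, the batch call then looks something like this (the checkpoint name and `max_length=512` are illustrative assumptions, not necessarily the exact values used in the docs):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
batch = [
    "We are very happy to show you the Transformers library.",
    "We hope you don't hate it.",
]

# Pad to the longest sequence in the batch and truncate to an explicit
# max_length, so models without a predefined maximum still truncate.
encoded = tokenizer(batch, padding=True, truncation=True, max_length=512, return_tensors="pt")
print(encoded["input_ids"].shape)
```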
The fix also adds clarity to the statement in the docs above this example
> If your goal is to send them through your model as a batch, you probably want to pad them all to the same length, truncate them to the maximum length the model can accept and get tensors back
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8975/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8975",
"html_url": "https://github.com/huggingface/transformers/pull/8975",
"diff_url": "https://github.com/huggingface/transformers/pull/8975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8975.patch",
"merged_at": 1607384139000
} |
https://api.github.com/repos/huggingface/transformers/issues/8974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8974/comments | https://api.github.com/repos/huggingface/transformers/issues/8974/events | https://github.com/huggingface/transformers/pull/8974 | 758,909,846 | MDExOlB1bGxSZXF1ZXN0NTMzOTk2Nzg5 | 8,974 | Add option to only check copies | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing in favor of #8976 "
] | 1,607 | 1,607 | 1,607 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8974/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8974",
"html_url": "https://github.com/huggingface/transformers/pull/8974",
"diff_url": "https://github.com/huggingface/transformers/pull/8974.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8974.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8973/comments | https://api.github.com/repos/huggingface/transformers/issues/8973/events | https://github.com/huggingface/transformers/pull/8973 | 758,891,093 | MDExOlB1bGxSZXF1ZXN0NTMzOTgxMjc4 | 8,973 | Small fix to the run clm script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
@LysandreJik pointed out that the scripts will fail with a cryptic error if the tokenizer's `model_max_length` is huge and no `block_size` is set. This PR fixes this by clipping `block_size` to 1024 when no value is passed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8973/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8973",
"html_url": "https://github.com/huggingface/transformers/pull/8973",
"diff_url": "https://github.com/huggingface/transformers/pull/8973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8973.patch",
"merged_at": 1607380330000
} |
https://api.github.com/repos/huggingface/transformers/issues/8972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8972/comments | https://api.github.com/repos/huggingface/transformers/issues/8972/events | https://github.com/huggingface/transformers/pull/8972 | 758,823,132 | MDExOlB1bGxSZXF1ZXN0NTMzOTI3ODgz | 8,972 | Removed unused `encoder_hidden_states` and `encoder_attention_mask` | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik I had to remove some tests that were testing the decoder mode for MobileBert. \r\n\r\nOne test still fails (flax), the error seems unrelated to this PR unless I am missing something?"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
This PR removes the unused `encoder_hidden_states` and `encoder_attention_mask` from the MobileBERT forward methods. These are used in decoder models, but MobileBERT does not include a cross-attention mechanism.
Fixes https://github.com/huggingface/transformers/issues/8969
## Who can review?
albert, bert, XLM: @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8972/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8972",
"html_url": "https://github.com/huggingface/transformers/pull/8972",
"diff_url": "https://github.com/huggingface/transformers/pull/8972.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8972.patch",
"merged_at": 1607447075000
} |
https://api.github.com/repos/huggingface/transformers/issues/8971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8971/comments | https://api.github.com/repos/huggingface/transformers/issues/8971/events | https://github.com/huggingface/transformers/pull/8971 | 758,766,997 | MDExOlB1bGxSZXF1ZXN0NTMzODgyNjgy | 8,971 | MPNet: Masked and Permuted Pre-training for Language Understanding | {
"login": "StillKeepTry",
"id": 6577458,
"node_id": "MDQ6VXNlcjY1Nzc0NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6577458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StillKeepTry",
"html_url": "https://github.com/StillKeepTry",
"followers_url": "https://api.github.com/users/StillKeepTry/followers",
"following_url": "https://api.github.com/users/StillKeepTry/following{/other_user}",
"gists_url": "https://api.github.com/users/StillKeepTry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StillKeepTry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StillKeepTry/subscriptions",
"organizations_url": "https://api.github.com/users/StillKeepTry/orgs",
"repos_url": "https://api.github.com/users/StillKeepTry/repos",
"events_url": "https://api.github.com/users/StillKeepTry/events{/privacy}",
"received_events_url": "https://api.github.com/users/StillKeepTry/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten I have added an integration test with a pre-trained weight in [https://github.com/StillKeepTry/transformers/blob/dfc18d59da04c38723553354ee1799ce204f52c8/tests/test_modeling_mpnet.py#L240](https://github.com/StillKeepTry/transformers/blob/dfc18d59da04c38723553354ee1799ce204f52c8/tests/test_modeling_mpnet.py#L240)\r\n\r\n",
"> @patrickvonplaten I have added an integration test with a pre-trained weight in https://github.com/StillKeepTry/transformers/blob/dfc18d59da04c38723553354ee1799ce204f52c8/tests/test_modeling_mpnet.py#L240\r\n\r\nThat's awesome thanks a lot!",
"Think you need to run `make style` and then the last test should pass as well :-)",
"@jplu \r\n\r\nI have updated the inputs handling in the TF file now, and rebase and fix the conflicting files. \r\n\r\nBesides, I have used `make style` multiple times but it still reports an error in `check_code_quality`. And I have checked and it seems the problem is not from my added part in the [https://github.com/StillKeepTry/transformers/blob/master/src/transformers/__init__.py](https://github.com/StillKeepTry/transformers/blob/master/src/transformers/__init__.py), despite it reports an error. ",
"@LysandreJik Thanks for pointing out this problem. I have fixed it.",
"The `make style` issue is probably because of the isort version installed, maybe you can try uninstalling black/isort and doing the following at the root of the repo:\r\n```\r\npip uninstall black isort\r\npip install -e .[quality]\r\n```\r\nIf you want I can run `make style` and push on your branch so that it's ready to be merged.",
"@LysandreJik Thank you. You are right. It is because I have installed black and isort before. ",
"Great, thanks! The quality test really doesn't like you haha! \r\n\r\nThis time I think it's because of the \"# Copied from xxx ...\" which still uses the old scheme (like `transformers.modeling_roberta.RobertaLMHead`) instead of the new scheme (like `transformers.models.roberta.modeling_roberta.RobertaLMHead`).",
"It seems ok now :) ...",
"@jplu I have fixed your comments now.",
"Thanks!!\r\n\r\n@sgugger @patrickvonplaten @LysandreJik I'm seeing that the `TFMPNetForPreTraining` and `MPNetForPreTraining` are missing from the TF and PT file. Should they be added? Otherwise it is fine for me :)",
"> Thanks!!\r\n> \r\n> @sgugger @patrickvonplaten @LysandreJik I'm seeing that the `TFMPNetForPreTraining` and `MPNetForPreTraining` are missing from the TF and PT file. Should they be added? Otherwise it is fine for me :)\r\n\r\nI observe that some models also lack `TFXXXForPreTraining` and `XXXForPreTraining`. I am willing to add them in the next stage. ",
"Hey @StillKeepTry, \r\n\r\nwe are super sorry, we had a problem yesterday with git and this is why your git history is cluttered with wrong commits earlier. I cleaned your PR and pushed it to a new branch on master here: https://github.com/huggingface/transformers/pull/9004 . \r\nIt should include all the commits you had earlier. I think we all gave our thumbs-up, so we could merge the other pull request to master (which would require the least amount of work from your side). \r\n\r\nHowever if you want to be the main author of the PR (which is 100% understandable and which is what I would want!), can you do the following steps to open a new clean PR which was exactly like before:\r\n\r\nIn your repo (https://github.com/StillKeepTry/transformers), assuming that the remote to the original hugging face repo (https://github.com/huggingface/transformers.git) is called `upstream`:\r\n\r\n```\r\n$ git fetch upstream\r\n$ git checkout upstream/master\r\n$ git checkout -b add_mp_net_new\r\n# now we'll cherry pick all of your commits\r\n$ git cherry-pick 7361516^..78dcc71\r\n$ git push\r\n# => now you should be able to open new PR with exactly the commits you had previously\r\n```\r\n\r\nLemme know if you need help doing this (or if you don't mind merging https://github.com/huggingface/transformers/pull/9004 - but it would be fairer to you if you're also officially the main author!).\r\n\r\nBig sorry again!",
"@patrickvonplaten Never mind, just use your PR. I am ok if our work can be merged into the master quickly. ",
"Hello! I can not see the data collator for permuted and masked language models. Was it added also inside HuggingFace? There is an already proposed way to do this collator inside the trainer? \r\n\r\nThanks!",
"@gaceladri we have an example for permutation language modeling, check it out here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_plm.py",
"Hi @LysandreJik, thank you for your kind response. This data collator that you pointed me out, is the collator from permuted language model used in XLNet right? I am unsure that this is a collator to replicate MPNet that mask tokens, not indices and also do the permutation. Sure that I am misunderstanding something..."
] | 1,607 | 1,621 | 1,607 | CONTRIBUTOR | null | # Model addition
[MPNet](https://arxiv.org/abs/2004.09297)
## Model description
MPNet introduces a novel self-supervised objective named masked and permuted language modeling for language understanding. It inherits the advantages of both masked language modeling (MLM) and permuted language modeling (PLM) to address the limitations of MLM/PLM, and further reduces the inconsistency between the pre-training and fine-tuning paradigms.
# What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8971/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8971",
"html_url": "https://github.com/huggingface/transformers/pull/8971",
"diff_url": "https://github.com/huggingface/transformers/pull/8971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8971.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8970/comments | https://api.github.com/repos/huggingface/transformers/issues/8970/events | https://github.com/huggingface/transformers/pull/8970 | 758,740,966 | MDExOlB1bGxSZXF1ZXN0NTMzODYzMDgw | 8,970 | Copyright | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging to avoid merge conflicts. Can address comments in a follow-up PR."
] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
This PR adds a copyright notice to any file missing one, and fixes the copyright in some files to include HuggingFace where it was missing. We should be vigilant when new files are added to make sure they get one, @LysandreJik and @patrickvonplaten.
I've excluded the examples folder as I'll do the copyright addition along with the cleaning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8970/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8970",
"html_url": "https://github.com/huggingface/transformers/pull/8970",
"diff_url": "https://github.com/huggingface/transformers/pull/8970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8970.patch",
"merged_at": 1607384194000
} |
https://api.github.com/repos/huggingface/transformers/issues/8969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8969/comments | https://api.github.com/repos/huggingface/transformers/issues/8969/events | https://github.com/huggingface/transformers/issues/8969 | 758,705,347 | MDU6SXNzdWU3NTg3MDUzNDc= | 8,969 | MobileBERT decoder capabilities | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You are right, there are never used. These seem to have been added by mistake during the original implementation. I would be all for a cleanup, as long as it doesn't touch to anything other than these attributes."
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | The current input parameters for MobileBERT indicate that the model may be used in a decoder setting. However, the model architecture does not contain a cross-attention mechanism and several inputs to the model are effectively never used: `encoder_hidden_states` and `encoder_attention_mask`.
This can be seen in:
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L247, where these 2 inputs are not used
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L330, where these inputs are just passed to the previous forward function (where they have no impact)
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L496, where these parameters are not used (not even passed to the `MobileBertAttention`)
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L552 where they are passed to the `MobileBertLayer` described above (therefore without impact)
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L847, where they trigger some reshaping of the attention mask but are eventually not used.
I believe these unused inputs make the code more difficult to follow and potentially misleading (I don't believe the model can actually be used as a decoder).
Would you be generally supportive of a cleanup of the MobileBERT architecture to reflect its current capabilities? I'd be happy to share a PR but I wanted to check your general thoughts on this.
Thank you,
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
(I did not find anyone listed for MobileBERT, but since this is relevant to package maintenance, I believe you may be the right person for this.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8969/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8968/comments | https://api.github.com/repos/huggingface/transformers/issues/8968/events | https://github.com/huggingface/transformers/issues/8968 | 758,696,369 | MDU6SXNzdWU3NTg2OTYzNjk= | 8,968 | 02-transformery.ipynb - output from model only strings 'last_hidden_state', 'pooler_output' | {
"login": "tc64",
"id": 1556665,
"node_id": "MDQ6VXNlcjE1NTY2NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1556665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tc64",
"html_url": "https://github.com/tc64",
"followers_url": "https://api.github.com/users/tc64/followers",
"following_url": "https://api.github.com/users/tc64/following{/other_user}",
"gists_url": "https://api.github.com/users/tc64/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tc64/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tc64/subscriptions",
"organizations_url": "https://api.github.com/users/tc64/orgs",
"repos_url": "https://api.github.com/users/tc64/repos",
"events_url": "https://api.github.com/users/tc64/events{/privacy}",
"received_events_url": "https://api.github.com/users/tc64/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Exactly same issue while using transformers. I'm using Pytorch 1.7. The solution presented solved the issue"
] | 1,607 | 1,607 | 1,607 | NONE | null | Running in NVIDIA Docker Container: nvcr.io/nvidia/pytorch:20.11-py3
PyTorch version: 1.8.0a0+17f8c32
transformers version: 4.0.0-rc-1
Python version: 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21) [GCC 7.3.0]
When running through transformers/notebooks/02-transformers.ipynb, I see the following output at this point:
```python
outputs, pooled = model(tokens_pt)
print("Token wise output: {}, Pooled output: {}".format(outputs.shape, pooled.shape))
```
output for this part:
```
AttributeError Traceback (most recent call last)
<ipython-input-47-cda4654dfa83> in <module>
20 #outputs = model_outs[0]
21 #pooled = model_outs[1]
---> 22 print("Token wise output: {}, Pooled output: {}".format(outputs.shape, pooled.shape))
AttributeError: 'str' object has no attribute 'shape'
```
This is because the values of `outputs` and `pooled` are the strings `last_hidden_state` and `pooler_output`, respectively.
However, the following change (commenting out the original line and replacing it with a version where the `model` output is captured in a single object whose 0 and 1 indices are accessed) produces the desired result:
```python
#outputs, pooled = model(tokens_pt)
# Capture the model output in a single object and index into it:
model_outs = model(tokens_pt)
outputs = model_outs[0]   # token-wise output
pooled = model_outs[1]    # pooled output
print("Token wise output: {}, Pooled output: {}".format(outputs.shape, pooled.shape))
```
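For what it's worth, this looks like the transformers 4.x behavior where models return `ModelOutput` objects by default, so tuple-unpacking iterates over the *keys*. A sketch of two alternatives that should also work (reusing `model` and `tokens_pt` from the snippet above; `return_dict=False` is the explicit opt-out):

```python
# Attribute access on the output object (transformers >= 4.0 default):
model_outs = model(tokens_pt)
outputs = model_outs.last_hidden_state   # token-wise output
pooled = model_outs.pooler_output        # pooled output

# Or ask for the old tuple behavior explicitly:
outputs, pooled = model(tokens_pt, return_dict=False)
```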
I am not sure if this is a PyTorch version thing or what, but I was hoping to either get some insight or alert you to something coming up when upgrading to the latest PyTorch. Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8968/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8968/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8967/comments | https://api.github.com/repos/huggingface/transformers/issues/8967/events | https://github.com/huggingface/transformers/issues/8967 | 758,695,096 | MDU6SXNzdWU3NTg2OTUwOTY= | 8,967 | EncoderDecoderModel works poorly with Mlflow | {
"login": "alexyalunin",
"id": 23011284,
"node_id": "MDQ6VXNlcjIzMDExMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/23011284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexyalunin",
"html_url": "https://github.com/alexyalunin",
"followers_url": "https://api.github.com/users/alexyalunin/followers",
"following_url": "https://api.github.com/users/alexyalunin/following{/other_user}",
"gists_url": "https://api.github.com/users/alexyalunin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexyalunin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexyalunin/subscriptions",
"organizations_url": "https://api.github.com/users/alexyalunin/orgs",
"repos_url": "https://api.github.com/users/alexyalunin/repos",
"events_url": "https://api.github.com/users/alexyalunin/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexyalunin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Thanks for your proposed solution @alexyalunin ! \r\n\r\nI don't really think that this is a problem on our side. I think MLFlow should better handle this no? ",
"Probably, but since you use mlflow inside your library people might expect it working with your models. I won't open the issue in mlflow repo, I leave it to someone who encounters this error again. @patrickvonplaten You can close this issue then. ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
## Description
MLflow keeps track of model config parameters. EncoderDecoderConfig has `encoder` and `decoder` parameters, which are essentially the configs of the encoder and the decoder. MLflow first converts them with `to_dict` and then to a string with `str()`, resulting in a long string that cannot fit. MLflow's `MAX_PARAM_VAL_LENGTH` is set to 250 and, AFAIK, cannot be changed.
This results in an error:
`MlflowException: Param value '{'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'use_bfloat16': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'is_encoder_decoder': False, 'is_decoder': False, 'add_cross_attention': Fa' had length 1669, which exceeded length limit of 250`
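A rough sketch of how to see which config parameters blow past the limit (reusing `model` from the training setup; the 250-character threshold is taken from the error above):

```python
# Reusing `model` from the training setup above.
MAX_PARAM_VAL_LENGTH = 250  # the limit quoted in the error message

params = model.config.to_dict()  # includes the nested `encoder`/`decoder` dicts
too_long = {k: len(str(v)) for k, v in params.items() if len(str(v)) > MAX_PARAM_VAL_LENGTH}
print(too_long)  # e.g. {'encoder': 1669, 'decoder': ...}
```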
My solution for the training phase is:
```python
class DummyClass:
    # Stand-in config whose dict representation is empty, so MLflow
    # has nothing oversized to log for the nested configs.
    def to_dict(self):
        return {}

model.config.encoder = DummyClass()
model.config.decoder = DummyClass()
```
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8967/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8966/comments | https://api.github.com/repos/huggingface/transformers/issues/8966/events | https://github.com/huggingface/transformers/issues/8966 | 758,669,678 | MDU6SXNzdWU3NTg2Njk2Nzg= | 8,966 | Make loss function an init parameter | {
"login": "Querela",
"id": 1648294,
"node_id": "MDQ6VXNlcjE2NDgyOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1648294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Querela",
"html_url": "https://github.com/Querela",
"followers_url": "https://api.github.com/users/Querela/followers",
"following_url": "https://api.github.com/users/Querela/following{/other_user}",
"gists_url": "https://api.github.com/users/Querela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Querela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Querela/subscriptions",
"organizations_url": "https://api.github.com/users/Querela/orgs",
"repos_url": "https://api.github.com/users/Querela/repos",
"events_url": "https://api.github.com/users/Querela/events{/privacy}",
"received_events_url": "https://api.github.com/users/Querela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"While I understand where you're coming from, why not define the loss function outside of the model, like it's generally done for PyTorch models? If you do not pass the labels, you get the logits from the model. You can use these logits with your labels to compute any loss you want.",
"Ok. Good I asked first because I did not know this general practice. 😄 \r\n\r\nSo, if I want to use the `Trainer`, I will then be required to only override:\r\nhttps://github.com/huggingface/transformers/blob/9d7d0005b046a95d9d59354714bb6c3547a612fe/src/transformers/trainer.py#L1114-L1120\r\n→ I would need to split the labels from the input, feed it into the model as usual and then compute the loss afterwards manually. Or just ignore the default computed loss and compute my own loss myself and override it.\r\n\r\nMy own loss computation can then still be like this:\r\nhttps://github.com/huggingface/transformers/blob/9d7d0005b046a95d9d59354714bb6c3547a612fe/src/transformers/models/bert/modeling_bert.py#L1383-L1391\r\n\r\nI think this is even easier. Thank you. Not sure why I did not see this ...",
"Yes, the trainer has a `compute_loss` method that is simple to override for that exact purpose. Glad you're satisfied with the outcome, and thanks for opening such a detailed issue.",
"I stumbled over this exact method when the `Trainer` was introduced but did not realize that the models still return the raw logits that I then can use for custom loss computation ...\r\nWell, I try to research before opening issues, and looking through the source code often helps understand some details but the code base keeps growing and changing, so it's sometimes hard to keep up and not miss some obvious things.\r\n😄 "
] | 1,607 | 1,607 | 1,607 | NONE | null | # 🚀 Feature request
I would like to request an optional init parameter on `*Model` (e.g. `BertModel`) that allows the user to provide their own loss function for training. If it is `None`, the model falls back to the default implementations.
## Motivation
The main motivation is ease of use for the user.
Say, for example, I want to change the default `CrossEntropyLoss` for a non-binary classification with `BertForSequenceClassification`: I have to override `forward` in a subclass or write my own class.
Now I will have to do this for all model types (RoBERTa, XLNet ...) I also want to check out. Because of the subclassing, I won't be able to use `AutoModelForSequenceClassification.from_pretrained`.
Suppose this were an init parameter with default `None`. Any model could be instantiated, and in the `forward` method the default loss functions would be used unless a custom loss function (factory) is explicitly provided.
```python
def forward(...):
    # ...
    loss = None
    if labels is not None:
        if self.num_labels == 1:
            #  We are doing regression
            loss_fct = MSELoss()
            loss = loss_fct(logits.view(-1), labels.view(-1))
        else:
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
```
to
```python
def __init__(..., loss_fct_cls=None):
    # ....
    self.loss_fct_cls = loss_fct_cls

def forward(...):
    # ...
    loss = None
    if labels is not None:
        if self.num_labels == 1:
            # We are doing regression
            loss_fct = MSELoss() if not self.loss_fct_cls else self.loss_fct_cls()
            loss = loss_fct(logits.view(-1), labels.view(-1))
        else:
            loss_fct = CrossEntropyLoss() if not self.loss_fct_cls else self.loss_fct_cls()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))

# ....
# usage
config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", config=config, loss_fct_cls=torch.nn.BCEWithLogitsLoss
)
```
The user still has to be careful to use a loss function that matches the number of labels, for example, but they can transition between models more easily. The overhead in performance, and in adapting an optional custom loss to each model, should not be that high. It is just another hyperparameter that allows for further customization of the models.
This might also allow for easier multi-class multi-label classifications, as it is currently geared more towards multi-class single-label, isn't it?
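For anyone landing here later, the comments above point to overriding `Trainer.compute_loss` as the supported route; a minimal sketch of what that can look like (the `BCEWithLogitsLoss` choice is an illustrative assumption):

```python
import torch
from transformers import Trainer

class CustomLossTrainer(Trainer):
    def compute_loss(self, model, inputs):
        # Pull the labels out so the model returns raw logits without a loss.
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        # Illustrative multi-label loss; assumes multi-hot labels of shape
        # (batch_size, num_labels).
        loss_fct = torch.nn.BCEWithLogitsLoss()
        return loss_fct(outputs.logits, labels.float())
```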
## Your contribution
In case the feature request is not immediately denied for some reason, I can start extending the existing models to allow for an optional loss function.
I'm just not sure what the final parameter name should be; changing it later can probably be done with `sed`/search-replace, but doing it right the first time is just being efficient. I'm also not sure whether to store it in the config object or in the model as an attribute. (For performance reasons, it would still be cached as a model property for slightly faster access, but I did not design the whole library, so I might be wrong in my thoughts.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8966/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8966/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8965/comments | https://api.github.com/repos/huggingface/transformers/issues/8965/events | https://github.com/huggingface/transformers/pull/8965 | 758,646,008 | MDExOlB1bGxSZXF1ZXN0NTMzNzg1MzI1 | 8,965 | Remove sourcerer | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | # What does this PR do?
Removes sourcerer from the readme
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
documentation: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8965/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8965",
"html_url": "https://github.com/huggingface/transformers/pull/8965",
"diff_url": "https://github.com/huggingface/transformers/pull/8965.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8965.patch",
"merged_at": 1607357730000
} |
https://api.github.com/repos/huggingface/transformers/issues/8964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8964/comments | https://api.github.com/repos/huggingface/transformers/issues/8964/events | https://github.com/huggingface/transformers/pull/8964 | 758,626,689 | MDExOlB1bGxSZXF1ZXN0NTMzNzY5NDU5 | 8,964 | Create README.md | {
"login": "wietsedv",
"id": 13139101,
"node_id": "MDQ6VXNlcjEzMTM5MTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/13139101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wietsedv",
"html_url": "https://github.com/wietsedv",
"followers_url": "https://api.github.com/users/wietsedv/followers",
"following_url": "https://api.github.com/users/wietsedv/following{/other_user}",
"gists_url": "https://api.github.com/users/wietsedv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wietsedv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wietsedv/subscriptions",
"organizations_url": "https://api.github.com/users/wietsedv/orgs",
"repos_url": "https://api.github.com/users/wietsedv/repos",
"events_url": "https://api.github.com/users/wietsedv/events{/privacy}",
"received_events_url": "https://api.github.com/users/wietsedv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add model card.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8964/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8964",
"html_url": "https://github.com/huggingface/transformers/pull/8964",
"diff_url": "https://github.com/huggingface/transformers/pull/8964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8964.patch",
"merged_at": 1607375054000
} |
https://api.github.com/repos/huggingface/transformers/issues/8963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8963/comments | https://api.github.com/repos/huggingface/transformers/issues/8963/events | https://github.com/huggingface/transformers/issues/8963 | 758,621,461 | MDU6SXNzdWU3NTg2MjE0NjE= | 8,963 | PegasusTokenizer requires the SentencePiece library but it was not found in your environment | {
"login": "marcoabrate",
"id": 43387597,
"node_id": "MDQ6VXNlcjQzMzg3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43387597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcoabrate",
"html_url": "https://github.com/marcoabrate",
"followers_url": "https://api.github.com/users/marcoabrate/followers",
"following_url": "https://api.github.com/users/marcoabrate/following{/other_user}",
"gists_url": "https://api.github.com/users/marcoabrate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcoabrate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcoabrate/subscriptions",
"organizations_url": "https://api.github.com/users/marcoabrate/orgs",
"repos_url": "https://api.github.com/users/marcoabrate/repos",
"events_url": "https://api.github.com/users/marcoabrate/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcoabrate/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @marcoabrate, \r\nI tried to reproduce your error in this colab without success: https://colab.research.google.com/drive/1nBCEtP773LplNodOSw5OBW-rJ84gizW5?usp=sharing can you check again ?",
"You are right @patrickvonplaten \r\nTo reproduce\r\n\r\n```\r\n!pip install -U transformers\r\n\r\nfrom transformers import PegasusTokenizer\r\ntokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')\r\n\r\n!pip install sentencepiece\r\n\r\ntokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')\r\n```\r\n\r\nVery weird",
"Having the same issue. Although solely in my Jupyter notebook. Running the code from a file is working fine..",
"Probably it's a way the Jupyter kernel works. Indeed, if you restart the kernel and install `sentencepiece` before is working.",
"as @marcoabrate said, restated kernel at my project and without code changes everything started working",
"I can confirm that this solution is still valid today. I encountered the same issue today.\r\nI solved it by adding ` !pip install sentencepiece` and then fully restart the Jupiter environment and rerun.",
"I am unable to install the SentencePiece library, this is the error i get when i do pip3 install sentencepiece:\r\n\r\nerror: legacy-install-failure\r\n",
"Restarting the kernel and using\r\n!pip install Transformers==3.2.0 instead of !pip install Transformers, worked for me",
"After using !pip install sentencepiece just restart the kernel and run the cells. It will work fine\r\n"
] | 1,607 | 1,693 | 1,608 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Google Colab
- Python version: 3.6.9
### Who can help
tokenizers: @mfuntowicz
Pegasus: @patrickvonplaten
## To reproduce
Steps to reproduce the behavior:
```
!pip install -U transformers
!pip install sentencepiece
from transformers import PegasusTokenizer
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
```
Error:
```
ImportError Traceback (most recent call last)
<ipython-input-7-12d68b5e397b> in <module>()
1 from transformers import PegasusTokenizer
----> 2 tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
/usr/local/lib/python3.6/dist-packages/transformers/utils/dummy_sentencepiece_objects.py in from_pretrained(self, *args, **kwargs)
54 @classmethod
55 def from_pretrained(self, *args, **kwargs):
---> 56 requires_sentencepiece(self)
57
58
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in requires_sentencepiece(obj)
459 name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
460 if not is_sentencepiece_available():
--> 461 raise ImportError(SENTENCEPIECE_IMPORT_ERROR.format(name))
462
463
ImportError:
PegasusTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8963/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8962/comments | https://api.github.com/repos/huggingface/transformers/issues/8962/events | https://github.com/huggingface/transformers/pull/8962 | 758,596,358 | MDExOlB1bGxSZXF1ZXN0NTMzNzQ0Mjg5 | 8,962 | Use word_ids to get labels in run_ner | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
As #8958 pointed out, the current way labels are computed in the `run_ner` script using offset mappings does not work for sentencepiece-based tokenizers. This PR fixes that using the `.word_ids` method, which is more elegant and more reliable.
In passing, it adds an early check that the tokenizer is fast (otherwise the script just doesn't work).
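For reference, a minimal sketch of the `word_ids`-based alignment; the function and argument names are illustrative, not necessarily the exact code in the PR (`-100` is the index the loss ignores):

```python
def align_labels_with_tokens(tokenized_inputs, word_labels, label_all_tokens=False):
    # word_ids() maps every token back to its source word (None for special tokens).
    previous_word_id = None
    label_ids = []
    for word_id in tokenized_inputs.word_ids():
        if word_id is None:
            label_ids.append(-100)  # special token: ignored by the loss
        elif word_id != previous_word_id:
            label_ids.append(word_labels[word_id])  # first sub-token of a word
        else:
            # later sub-tokens of the same word
            label_ids.append(word_labels[word_id] if label_all_tokens else -100)
        previous_word_id = word_id
    return label_ids
```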
Fixes #8958
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8962/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8962",
"html_url": "https://github.com/huggingface/transformers/pull/8962",
"diff_url": "https://github.com/huggingface/transformers/pull/8962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8962.patch",
"merged_at": 1607369197000
} |
https://api.github.com/repos/huggingface/transformers/issues/8961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8961/comments | https://api.github.com/repos/huggingface/transformers/issues/8961/events | https://github.com/huggingface/transformers/pull/8961 | 758,596,351 | MDExOlB1bGxSZXF1ZXN0NTMzNzQ0Mjgz | 8,961 | Optional layers | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for working on this @jplu. I think we should take the opportunity to think about this issue: https://github.com/huggingface/transformers/issues/8793.\r\n\r\nThe problem with the `add_pooling_layer` option and how it's currently done in PyTorch models is that when doing a training initialized from a model checkpoints that *contains the pooling layer*, like `bert-base-cased`:\r\n\r\n```py\r\nmodel = BertForMaskedLM.from_pretrained(\"bert-base-cased\")\r\n# Fine-tune the model on an MLM task\r\n```\r\n\r\nwe're losing the pooling layer doing so. It's not a big deal here as we're doing an MLM task, however, if we want to use that model for a downstream task:\r\n\r\n```py\r\nmodel.save_pretrained(\"bert-base-cased-finetuned-mlm\")\r\nclassifier_model = BertForSequenceClassification.from_pretrained(\"bert-base-cased-finetuned-mlm\")\r\n```\r\nwe're now having a classifier model that has a randomly initialized pooling layer, whereas the weights that were stored in the `bert-base-cased` original checkpoint would have been better than a randomly initialized layer.\r\n\r\nThe issue is that right now, we have no way of specifying if we want to keep the pooling layer or not in such a setup. I would argue that controlling it from the configuration would really be useful here, rather than setting it to `add_pooling_layer=False` in architectures that do not need it.\r\n\r\ncc @jplu @sgugger @patrickvonplaten ",
"Indeed, it starts to be more complicated than we thought at the beginning, but the case you are raising is a very good one!!\r\n\r\nI think that controlling this from the config to have the same behavior would be more flexible, I +1 this proposal!",
"> Thanks for working on this @jplu. I think we should take the opportunity to think about this issue: #8793.\r\n> \r\n> The problem with the `add_pooling_layer` option and how it's currently done in PyTorch models is that when doing a training initialized from a model checkpoints that _contains the pooling layer_, like `bert-base-cased`:\r\n> \r\n> ```python\r\n> model = BertForMaskedLM.from_pretrained(\"bert-base-cased\")\r\n> # Fine-tune the model on an MLM task\r\n> ```\r\n> \r\n> we're losing the pooling layer doing so. It's not a big deal here as we're doing an MLM task, however, if we want to use that model for a downstream task:\r\n> \r\n> ```python\r\n> model.save_pretrained(\"bert-base-cased-finetuned-mlm\")\r\n> classifier_model = BertForSequenceClassification.from_pretrained(\"bert-base-cased-finetuned-mlm\")\r\n> ```\r\n> \r\n> we're now having a classifier model that has a randomly initialized pooling layer, whereas the weights that were stored in the `bert-base-cased` original checkpoint would have been better than a randomly initialized layer.\r\n> \r\n> The issue is that right now, we have no way of specifying if we want to keep the pooling layer or not in such a setup. I would argue that controlling it from the configuration would really be useful here, rather than setting it to `add_pooling_layer=False` in architectures that do not need it.\r\n> \r\n> cc @jplu @sgugger @patrickvonplaten\r\n\r\nI remember that we were thinking about adding a config param for `add_pooling_layer` for PT: https://github.com/huggingface/transformers/pull/7272 and decided not to. I still think the cleaner solution is to **not** add a config param because it's a very weird use-case IMO. Why wouldn't the user just use a `BertForPreTraining` model for his use case? But I'm also fine with adding a config param instead. It's not a big deal to me...but in this case I'd definitely prefer to not add it to the general `PretrainedConfig`, but to each model's config.",
"Good point regarding the `BertForPreTraining`. I think this is a use-case (you want to keep a layer from another architecture) where you would want to build your own architectures for that, to have complete control over the layers.\r\n\r\nI think we might be missing some documentation on how to do that, and on how creating an architecture that inherits from `PreTrainedModel` works, but this is a discussion for another time.\r\n\r\nOk to keep it this way.",
"LGTM for me!"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the possibility of having optional layers in the models, thanks to the new input/output process. Here, the pooling layer is created or not for the BERT/ALBERT/Longformer/MobileBERT/RoBERTa models. The keys to ignore when loading these layers have been updated at the same time.
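As a rough sketch of the intended usage (mirroring the `add_pooling_layer` argument the PyTorch models already expose; the exact TF-side signature may differ):

```python
from transformers import TFBertModel

# For token-level tasks the pooler weights are never used, so skip creating
# them instead of carrying (and warning about) unused weights.
model = TFBertModel.from_pretrained("bert-base-uncased", add_pooling_layer=False)
```
| {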
"url": "https://api.github.com/repos/huggingface/transformers/issues/8961/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8961/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8961",
"html_url": "https://github.com/huggingface/transformers/pull/8961",
"diff_url": "https://github.com/huggingface/transformers/pull/8961.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8961.patch",
"merged_at": 1607436850000
} |
https://api.github.com/repos/huggingface/transformers/issues/8960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8960/comments | https://api.github.com/repos/huggingface/transformers/issues/8960/events | https://github.com/huggingface/transformers/issues/8960 | 758,562,562 | MDU6SXNzdWU3NTg1NjI1NjI= | 8,960 | TFBertModel NOT learning at all! | {
"login": "ivankrstev7",
"id": 48191509,
"node_id": "MDQ6VXNlcjQ4MTkxNTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/48191509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivankrstev7",
"html_url": "https://github.com/ivankrstev7",
"followers_url": "https://api.github.com/users/ivankrstev7/followers",
"following_url": "https://api.github.com/users/ivankrstev7/following{/other_user}",
"gists_url": "https://api.github.com/users/ivankrstev7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivankrstev7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivankrstev7/subscriptions",
"organizations_url": "https://api.github.com/users/ivankrstev7/orgs",
"repos_url": "https://api.github.com/users/ivankrstev7/repos",
"events_url": "https://api.github.com/users/ivankrstev7/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivankrstev7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,607 | 1,607 | 1,607 | NONE | null | Hi, I am trying to implement a simple Keras model whose inputs are the `input_ids` and the `attention_mask`, followed by a `TFBertModel.from_pretrained('bert-base-uncased')` layer to extract the word embeddings. Everything compiles okay, but when I train the model I get a constant accuracy of 0.5 (it is a binary classification problem).
Here is how I've defined my model:

I am using `BertTokenizer.from_pretrained('bert-base-uncased')` to prepare the dataset. I might also have a problem with how I feed the data to the model; I am not sure, so here is a screenshot of that too:

| {
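For readers without the screenshots, here is a minimal sketch of this kind of setup that typically does learn (layer names and hyperparameters are illustrative, not the exact code in the images above):

```python
import tensorflow as tf
from transformers import TFBertModel

bert = TFBertModel.from_pretrained("bert-base-uncased")

input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name="attention_mask")

# Use the pooled [CLS] representation for the binary head; keep the BERT
# weights trainable and the learning rate small (e.g. 2e-5), since a large
# learning rate is a common reason accuracy stalls at chance level.
outputs = bert(input_ids, attention_mask=attention_mask)
probability = tf.keras.layers.Dense(1, activation="sigmoid")(outputs.pooler_output)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=probability)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```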
"url": "https://api.github.com/repos/huggingface/transformers/issues/8960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8960/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8959/comments | https://api.github.com/repos/huggingface/transformers/issues/8959/events | https://github.com/huggingface/transformers/issues/8959 | 758,551,863 | MDU6SXNzdWU3NTg1NTE4NjM= | 8,959 | FileNotFoundError: [Errno 2] No such file or directory: 'cached_train_BertTokenizer_180.lock' | {
"login": "Stimmot",
"id": 29411999,
"node_id": "MDQ6VXNlcjI5NDExOTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/29411999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stimmot",
"html_url": "https://github.com/Stimmot",
"followers_url": "https://api.github.com/users/Stimmot/followers",
"following_url": "https://api.github.com/users/Stimmot/following{/other_user}",
"gists_url": "https://api.github.com/users/Stimmot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stimmot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stimmot/subscriptions",
"organizations_url": "https://api.github.com/users/Stimmot/orgs",
"repos_url": "https://api.github.com/users/Stimmot/repos",
"events_url": "https://api.github.com/users/Stimmot/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stimmot/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you provide the information related to your environment, as well as the command that you used to launch the script, as it's requested in the issue template? Thank you.",
"Yes sure!\r\n\r\n- `transformers` version: 3.5.1\r\n- Platform: Linux-5.9.1-kd-cluster-x86_64-with-glibc2.10\r\n- Python version: 3.8.0\r\n- PyTorch version (GPU?): 1.7.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n\r\nModel I am using: BERT, specifically \"bert-base-german-cased\"\r\n\r\nThe problem arises when using:\r\n* [x] the official example scripts: (give details below)\r\n* [ ] my own modified scripts: (give details below)\r\n\r\nThe tasks I am working on is:\r\n* [ ] an official GLUE/SQUaD task: (give the name)\r\n* [x] my own task or dataset: (give details below)\r\n\r\nTraceback:\r\n`Traceback (most recent call last):\r\n File \"run_ner.py\", line 324, in <module>\r\n main()\r\n File \"run_ner.py\", line 187, in main\r\n TokenClassificationDataset(\r\n File \"/home/IAIS/tschmude/bert_remote/examples/token-classification/utils_ner.py\", line 240, in __init__\r\n with FileLock(lock_path):\r\n File \"/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/filelock.py\", line 323, in __enter__\r\n self.acquire()\r\n File \"/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/filelock.py\", line 271, in acquire\r\n self._acquire()\r\n File \"/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/filelock.py\", line 384, in _acquire\r\n fd = os.open(self._lock_file, open_mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/tschmude/PycharmProjects/smart-sentencing/examples/token-classification/Data processing scripts/Data_Preprocessed/cached_train_BertTokenizer_180.lock'\r\n\r\n## Expected behavior\r\n\r\nI'm running `python run_ner.py Data/config.json` to train the model for custom NER recognition. I have a couple self defined labels. It has worked before, but I can't quite tell what has changed since then. I already deleted cached .lock files that I could find. \r\n",
"Would you mind providing the `config.json` as well, given that it contains your launch command? Thank you!",
"Sure, this is my config.json:\r\n\r\n`\r\n{\r\n \"data_dir\": \"/home/tschmude/PycharmProjects/smart-sentencing/examples/token-classification/Data processing scripts/Data_Preprocessed\",\r\n \"labels\": \"./Data/labels.txt\",\r\n \"model_name_or_path\": \"bert-base-german-cased\",\r\n \"output_dir\": \"./Data/Models\",\r\n \"task_type\": \"NER\",\r\n \"max_seq_length\": 180,\r\n \"num_train_epochs\": 6,\r\n \"per_device_train_batch_size\": 48,\r\n \"learning_rate\": 0.001,\r\n \"seed\": 1,\r\n \"overwrite_cache\": true,\r\n \"fp16\": true,\r\n \"do_train\": true,\r\n \"do_predict\": true,\r\n \"do_eval\": true\r\n}\r\n`",
"Issue solved... it had to do with a dumb typo in the path, sorry for the confusion!",
"No problem, glad you solved your issue!"
] | 1,607 | 1,607 | 1,607 | NONE | null | I want to train the model bert-base-german-cased on some documents, but when I try to run run_ner.py with the config.json, it tells me that it can't find the file mentioned above.
I don't quite know what the issue is here, because it worked the last time I tried. Do I have to tell the model not to use any cached files? I tried that with the `overwrite_cache` flag.
Does anyone have a clue what could be the problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8959/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8958/comments | https://api.github.com/repos/huggingface/transformers/issues/8958/events | https://github.com/huggingface/transformers/issues/8958 | 758,546,446 | MDU6SXNzdWU3NTg1NDY0NDY= | 8,958 | run_ner.py with xlm-roberta-base raises an IndexError in tokenize_and_align_labels | {
"login": "thvitt",
"id": 1906208,
"node_id": "MDQ6VXNlcjE5MDYyMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1906208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thvitt",
"html_url": "https://github.com/thvitt",
"followers_url": "https://api.github.com/users/thvitt/followers",
"following_url": "https://api.github.com/users/thvitt/following{/other_user}",
"gists_url": "https://api.github.com/users/thvitt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thvitt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thvitt/subscriptions",
"organizations_url": "https://api.github.com/users/thvitt/orgs",
"repos_url": "https://api.github.com/users/thvitt/repos",
"events_url": "https://api.github.com/users/thvitt/events{/privacy}",
"received_events_url": "https://api.github.com/users/thvitt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for flagging! Yes, using `word_ids` is probably a better idea in this case, I did that in the PR mentioned above. If you want to review it, I'd be happy to take your comments into account!"
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
`transformers` version: 4.0.0 (and the example scripts from git master aka 72d6c9c6)
- Platform: Linux-4.19.0-12-amd64-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: True (but we don’ŧ get that far)
- Using distributed or parallel set-up in script?: False
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
examples/token-classification: @stefan-it
documentation: @sgugger
-->
git blame says @sgugger
## Information
Model I am using (Bert, XLNet ...): xlm-roberta-base
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `python3 run_ner.py --model_name_or_path xlm-roberta-base --task_name ner --dataset_name conll2003 --label_all_tokens --do_train --do_eval --output_dir finetuning-output`
Crashes with the following stacktrace:
```
Traceback (most recent call last):
File "run_ner.py", line 394, in <module>
main()
File "run_ner.py", line 292, in main
tokenized_datasets = datasets.map(
File "/home/vitt/.conda/envs/cuda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 286, in map
{
File "/home/vitt/.conda/envs/cuda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 287, in <dictcomp>
k: dataset.map(
File "/home/vitt/.conda/envs/cuda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1239, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home/vitt/.conda/envs/cuda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1210, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "run_ner.py", line 277, in tokenize_and_align_labels
current_label = label_to_id[label[label_index]]
IndexError: list index out of range
```
From a little debugging, the problem seems to be that this code assumes there are only as many tokens with `offset[0] == 0 and offset[1] != 0` as there are words in the original input (and thus as there are labels):
https://github.com/huggingface/transformers/blob/72d6c9c68ba19b2e991b0d7a32989410399b33f5/examples/token-classification/run_ner.py#L276-L278
However, the SentencePiece tokenizer may split input words into sequences starting with a lone `'▁'` token. Then the offset mapping for `'▁'` will be `(0, 1)` and for the following token `(0, x)`. E.g., '.' in the CoNLL data ⇒ `['▁', '.']` with offsets `[(0, 1), (0, 1)]`, or `['NACCO']` ⇒ `('▁', (0, 1)), ('NAC', (0, 3)), ('CO', (3, 5))`.
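A small script makes the mismatch visible (the exact sub-tokens can vary with the tokenizer version):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base", use_fast=True)
encoded = tokenizer(["NACCO"], is_split_into_words=True, return_offsets_mapping=True)

# Both '▁' and 'NAC' get offsets starting at 0, so counting tokens whose
# offset is (0, x) with x != 0 overcounts the words...
print(list(zip(encoded.tokens(), encoded["offset_mapping"])))

# ...while word_ids() maps every sub-token back to word 0 unambiguously
# (None marks the special tokens).
print(encoded.word_ids())
```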
(Could this use `tokenized_inputs.word_ids()` instead?) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8958/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8957/comments | https://api.github.com/repos/huggingface/transformers/issues/8957/events | https://github.com/huggingface/transformers/pull/8957 | 758,495,374 | MDExOlB1bGxSZXF1ZXN0NTMzNjU5NTY4 | 8,957 | Update README.txt | {
"login": "bino282",
"id": 17800187,
"node_id": "MDQ6VXNlcjE3ODAwMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/17800187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bino282",
"html_url": "https://github.com/bino282",
"followers_url": "https://api.github.com/users/bino282/followers",
"following_url": "https://api.github.com/users/bino282/following{/other_user}",
"gists_url": "https://api.github.com/users/bino282/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bino282/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bino282/subscriptions",
"organizations_url": "https://api.github.com/users/bino282/orgs",
"repos_url": "https://api.github.com/users/bino282/repos",
"events_url": "https://api.github.com/users/bino282/events{/privacy}",
"received_events_url": "https://api.github.com/users/bino282/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8957/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8957",
"html_url": "https://github.com/huggingface/transformers/pull/8957",
"diff_url": "https://github.com/huggingface/transformers/pull/8957.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8957.patch",
"merged_at": 1607374910000
} |
https://api.github.com/repos/huggingface/transformers/issues/8956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8956/comments | https://api.github.com/repos/huggingface/transformers/issues/8956/events | https://github.com/huggingface/transformers/pull/8956 | 758,138,669 | MDExOlB1bGxSZXF1ZXN0NTMzMzU3MDAz | 8,956 | Update README.txt | {
"login": "bino282",
"id": 17800187,
"node_id": "MDQ6VXNlcjE3ODAwMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/17800187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bino282",
"html_url": "https://github.com/bino282",
"followers_url": "https://api.github.com/users/bino282/followers",
"following_url": "https://api.github.com/users/bino282/following{/other_user}",
"gists_url": "https://api.github.com/users/bino282/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bino282/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bino282/subscriptions",
"organizations_url": "https://api.github.com/users/bino282/orgs",
"repos_url": "https://api.github.com/users/bino282/repos",
"events_url": "https://api.github.com/users/bino282/events{/privacy}",
"received_events_url": "https://api.github.com/users/bino282/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Closed in favor of #8957"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8956/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8956",
"html_url": "https://github.com/huggingface/transformers/pull/8956",
"diff_url": "https://github.com/huggingface/transformers/pull/8956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8956.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8955/comments | https://api.github.com/repos/huggingface/transformers/issues/8955/events | https://github.com/huggingface/transformers/issues/8955 | 758,137,536 | MDU6SXNzdWU3NTgxMzc1MzY= | 8,955 | shutil.Error: Destination path '/home/ubuntu/.cache/huggingface/transformers/transformers' already exists | {
"login": "parthplc",
"id": 35425925,
"node_id": "MDQ6VXNlcjM1NDI1OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/35425925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parthplc",
"html_url": "https://github.com/parthplc",
"followers_url": "https://api.github.com/users/parthplc/followers",
"following_url": "https://api.github.com/users/parthplc/following{/other_user}",
"gists_url": "https://api.github.com/users/parthplc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parthplc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parthplc/subscriptions",
"organizations_url": "https://api.github.com/users/parthplc/orgs",
"repos_url": "https://api.github.com/users/parthplc/repos",
"events_url": "https://api.github.com/users/parthplc/events{/privacy}",
"received_events_url": "https://api.github.com/users/parthplc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This was a bug added in v4, it's fixed on master so if you install from source, you should be fine.",
"I think alternatively you could also just delete the cache:\r\n\r\n```\r\nrm -rf /home/ubuntu/.cache/huggingface/transformers/transformers\r\n```\r\n\r\nbut then you'll have to re-download all models",
"yeah its worked!",
"@patrickvonplaten since this means that the cache has already been moved to `.cache/huggingface/transformers`, I think deleting the cache `.cache/torch/transformers` makes more sense, as you won't have to delete all the models you had in the initial cache, only those that were redownloaded when you went back to an older version.",
"Why do transformers throw an error ? If it exists,why not just throw a warning?I think this is a bug and should be fixed.",
"Please read the full conversation:\r\n> This was a bug added in v4, it's fixed on master so if you install from source, you should be fine."
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 1.7.0
- PyTorch version (GPU?): 1.7.0
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
@sgugger
-->
## Information
Model I am using: MT5ForConditionalGeneration
The problem arises when using:
I am trying to run my script, which imports MT5.
```
In Transformers v4.0.0, the default path to cache downloaded models changed from '~/.cache/torch/transformers' to '~/.cache/huggingface/transformers'. Since you don't seem to have overridden and '~/.cache/torch/transformers' is a directory that exists, we're moving it to '~/.cache/huggingface/transformers' to avoid redownloading models you have already in the cache. You should only see this message once.
Traceback (most recent call last):
File "__main__.py", line 87, in <module>
from data_science.recommenders.content_recommender.context_similarity import Context_Similarity
File "/home/ubuntu/parth/trell-ds-framework/data_science/recommenders/content_recommender/context_similarity.py", line 5, in <module>
from sentence_transformers import SentenceTransformer
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/sentence_transformers/__init__.py", line 3, in <module>
from .datasets import SentencesDataset, SentenceLabelDataset
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/sentence_transformers/datasets.py", line 12, in <module>
from . import SentenceTransformer
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/sentence_transformers/SentenceTransformer.py", line 10, in <module>
import transformers
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/transformers/__init__.py", line 22, in <module>
from .integrations import ( # isort:skip
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/transformers/integrations.py", line 5, in <module>
from .trainer_utils import EvaluationStrategy
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/transformers/trainer_utils.py", line 25, in <module>
from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/transformers/file_utils.py", line 227, in <module>
shutil.move(old_default_cache_path, default_cache_path)
File "/usr/lib/python3.6/shutil.py", line 548, in move
raise Error("Destination path '%s' already exists" % real_dst)
shutil.Error: Destination path '/home/ubuntu/.cache/huggingface/transformers/transformers' already exists
```
## To reproduce
I am using transformers==4.0.0 and I get this error, but when installing transformers==4.0.0rc1 the error doesn't show. Is there any reason for this?
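For reference, the crash comes from an unguarded `shutil.move`. A guard along these lines avoids it on a second run; this is a sketch of the idea, not the literal patch on master:

```python
import os
import shutil

old_cache = os.path.expanduser("~/.cache/torch/transformers")
new_cache = os.path.expanduser("~/.cache/huggingface/transformers")

# shutil.move() raises if the destination already exists, which is exactly
# what happens once the cache migration has already run before.
if os.path.isdir(old_cache) and not os.path.isdir(new_cache):
    os.makedirs(os.path.dirname(new_cache), exist_ok=True)
    shutil.move(old_cache, new_cache)
```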
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8955/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8955/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8954/comments | https://api.github.com/repos/huggingface/transformers/issues/8954/events | https://github.com/huggingface/transformers/issues/8954 | 758,107,061 | MDU6SXNzdWU3NTgxMDcwNjE= | 8,954 | Fine-tuning on Language Model using two tasks | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you're not missing anything, this part is not implemented in the examples, as fine-tuning a model using only MLM yields similar downstream results than fine-tuning a model with both tasks. \r\n\r\nHowever, we have the `BertForPreTraining` architecture which is implemented, and which can train a model using the two objectives. You would have to tweak the example scripts to manage this case, however.",
"Also, we try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"> Also, we try to keep the github issues for bugs/feature requests.\r\n> Could you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n> \r\n> Thanks!\r\n\r\nSure, thank you for clarifying!"
] | 1,607 | 1,607 | 1,607 | NONE | null | Hi,
I'm reading the language modeling example in the documentation: https://huggingface.co/transformers/v2.0.0/examples.html#language-model-fine-tuning
It seems that the fine-tuning is done based on the Masked Language Modeling (MLM) loss, while in the BERT paper the LM fine-tuning is done by optimizing two tasks: 1) masked language modeling and 2) next sentence prediction. I'm looking for the second part in Huggingface's implementation, but it seems that this part is either not implemented, or I'm missing something?
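For completeness, a minimal sketch of training on both objectives with `BertForPreTraining` (the inputs here are toy placeholders; real pretraining would first mask a fraction of the input tokens to build the MLM targets):

```python
import torch
from transformers import BertForPreTraining, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

inputs = tokenizer("The cat sat on the mat.", "It was very comfortable.", return_tensors="pt")

# `labels` holds the MLM targets (-100 marks positions ignored by the loss);
# `next_sentence_label` is 0 for IsNext and 1 for NotNext.
labels = inputs["input_ids"].clone()
outputs = model(**inputs, labels=labels, next_sentence_label=torch.tensor([0]))

outputs.loss.backward()  # the sum of the MLM loss and the NSP loss
```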
"url": "https://api.github.com/repos/huggingface/transformers/issues/8954/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8954/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8953/comments | https://api.github.com/repos/huggingface/transformers/issues/8953/events | https://github.com/huggingface/transformers/issues/8953 | 758,031,133 | MDU6SXNzdWU3NTgwMzExMzM= | 8,953 | Wrong shape output for loss of TFGPT2LMHeadModel | {
"login": "ssss1029",
"id": 7088559,
"node_id": "MDQ6VXNlcjcwODg1NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7088559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ssss1029",
"html_url": "https://github.com/ssss1029",
"followers_url": "https://api.github.com/users/ssss1029/followers",
"following_url": "https://api.github.com/users/ssss1029/following{/other_user}",
"gists_url": "https://api.github.com/users/ssss1029/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ssss1029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssss1029/subscriptions",
"organizations_url": "https://api.github.com/users/ssss1029/orgs",
"repos_url": "https://api.github.com/users/ssss1029/repos",
"events_url": "https://api.github.com/users/ssss1029/events{/privacy}",
"received_events_url": "https://api.github.com/users/ssss1029/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello !\r\n\r\nThis is an error in the documentation. TF doesn't apply a mean across all the values so you basically get a loss of shape 1023 (sequence length - 1 because of the right shift). Thanks for having spotted this!",
"This issue has been stale for 1 month."
] | 1,607 | 1,618 | 1,618 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.3
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@jplu
## Information
Model I am using (Bert, XLNet ...): TFGPT2LMHeadModel
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
The following in a Python interpreter:
```python
import tensorflow as tf
import transformers
model = transformers.models.gpt2.TFGPT2LMHeadModel.from_pretrained('gpt2')
input_ids = tf.ones((1, 1024), dtype=tf.int32)
labels = tf.ones((1, 1024), dtype=tf.int32)
print(model(input_ids, labels=labels, return_dict=True, training=True).loss.shape)
```
Outputs
```
TensorShape([1023])
```
It seems the loss output is dependent on batch size:
```python
labels = tf.ones((2, 1024), dtype=tf.int32)
input_ids = tf.ones((2, 1024), dtype=tf.int32)
print(model(input_ids, labels=labels, return_dict=True, training=True).loss.shape)
```
Outputs
```
TensorShape([2046])
```
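For reference, a scalar loss can be recovered by reducing the per-token values manually; a minimal sketch (an annotation, not part of the original report; the maintainer's comment above confirms the per-token shape):
```python
# The TF model returns one loss value per (shifted) token position;
# average them if a single scalar is needed.
per_token_loss = model(input_ids, labels=labels, return_dict=True, training=True).loss
scalar_loss = tf.reduce_mean(per_token_loss)
print(scalar_loss.shape)  # TensorShape([])
```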
## Expected behavior
According to the docs (https://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel), the loss is of shape `(1,)`. However, this is not the shape that is returned. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8953/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8952/comments | https://api.github.com/repos/huggingface/transformers/issues/8952/events | https://github.com/huggingface/transformers/issues/8952 | 758,024,436 | MDU6SXNzdWU3NTgwMjQ0MzY= | 8,952 | batch_sampler with trainer.py would not set the epoch | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi there.\r\nTo use custom sampling (either through a sampler or a batch sampler), users are expected to subsclass `Trainer` and override the `get_train_dataloader`/`get_eval_dataloader` methods to suit their needs.\r\n\r\nNote that those changes might then not be compatible with distributed training/TPU training.",
"Hi there\nyes, thats correct, but looking into train() method of trainer.py class,\nthe user needs to overwrite the whole train() function for such cases, and\nthis is just for setting the epoch for other type of sampler, it would be\nvery nice if the train() method allowed custom sampler. thanks.\n\nOn Sun, Dec 6, 2020 at 11:56 PM Sylvain Gugger <[email protected]>\nwrote:\n\n> Hi there.\n> To use custom sampling (either through a sampler or a batch sampler),\n> users are expected to subsclass Trainer and override the\n> get_train_dataloader/get_eval_dataloader methods to suit their needs.\n>\n> Note that those changes might then not be compatible with distributed\n> training/TPU training.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8952#issuecomment-739579294>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGAIHTYBBBJI75TGKLSTQDYXANCNFSM4UPTHQNQ>\n> .\n>\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"This seems to work for setting the epoch for the train_dataloader, without overwriting the train() function:\r\n\r\nclass SetEpochCallback(TrainerCallback):\r\n def on_epoch_begin(self, args, state, control, **kwargs):\r\n kwargs['train_dataloader'].batch_sampler.set_epoch(int(state.epoch))"
] | 1,607 | 1,701 | 1,614 | NONE | null | Dear Huggingface team
If one uses a `batch_sampler` instead of a `sampler`, the part of trainer.py that calls `set_epoch` on the dataloader's sampler does not work. Likewise, if one defines a custom sampler for a particular application, as I do, it is never called from trainer.py.
I was wondering if the code could be made more general to cover these cases.
thanks.
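For reference, a minimal sketch of the workaround suggested in the comments: subclassing `Trainer` and overriding `get_train_dataloader` (`MyBatchSampler` is a hypothetical placeholder for whatever custom batch sampler is needed):
```python
from torch.utils.data import DataLoader
from transformers import Trainer

class CustomSamplerTrainer(Trainer):
    def get_train_dataloader(self) -> DataLoader:
        # MyBatchSampler is hypothetical: any torch BatchSampler works here.
        batch_sampler = MyBatchSampler(self.train_dataset, self.args.train_batch_size)
        return DataLoader(
            self.train_dataset,
            batch_sampler=batch_sampler,
            collate_fn=self.data_collator,
            num_workers=self.args.dataloader_num_workers,
        )
```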
Best
Rabeeh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8952/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8951/comments | https://api.github.com/repos/huggingface/transformers/issues/8951/events | https://github.com/huggingface/transformers/issues/8951 | 758,006,089 | MDU6SXNzdWU3NTgwMDYwODk= | 8,951 | vocab_file and merges_file still required params for loading serialized tokenizers | {
"login": "minimaxir",
"id": 2179708,
"node_id": "MDQ6VXNlcjIxNzk3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2179708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minimaxir",
"html_url": "https://github.com/minimaxir",
"followers_url": "https://api.github.com/users/minimaxir/followers",
"following_url": "https://api.github.com/users/minimaxir/following{/other_user}",
"gists_url": "https://api.github.com/users/minimaxir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minimaxir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minimaxir/subscriptions",
"organizations_url": "https://api.github.com/users/minimaxir/orgs",
"repos_url": "https://api.github.com/users/minimaxir/repos",
"events_url": "https://api.github.com/users/minimaxir/events{/privacy}",
"received_events_url": "https://api.github.com/users/minimaxir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed, we could do that, and then add a check below to ensure that we get a correct error message. Do you want to open a PR with the fix?",
"I wasn't planning on doing a PR because I wasn't sure of the scope of changes needed (e.g. every tokenizer `__init__` would need to be changed) and it also seems like there isn't any documentation for serialized tokenizers at all in `transformers`, so I assumed you were getting to that.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | e.g.
https://github.com/huggingface/transformers/blob/dd52804f5fce0a568ffbb3dc7fd088d2de0a0e56/src/transformers/models/gpt2/tokenization_gpt2_fast.py#L122-L132
Both of these should probably be optional params now. Setting `vocab_file` and `merges_file` to `None` while specifying a `tokenizer_file` works, but seems messy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8951/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8950/comments | https://api.github.com/repos/huggingface/transformers/issues/8950/events | https://github.com/huggingface/transformers/pull/8950 | 758,000,726 | MDExOlB1bGxSZXF1ZXN0NTMzMjUxMDM0 | 8,950 | Fix Code quality issues | {
"login": "withshubh",
"id": 25361949,
"node_id": "MDQ6VXNlcjI1MzYxOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/25361949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/withshubh",
"html_url": "https://github.com/withshubh",
"followers_url": "https://api.github.com/users/withshubh/followers",
"following_url": "https://api.github.com/users/withshubh/following{/other_user}",
"gists_url": "https://api.github.com/users/withshubh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/withshubh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/withshubh/subscriptions",
"organizations_url": "https://api.github.com/users/withshubh/orgs",
"repos_url": "https://api.github.com/users/withshubh/repos",
"events_url": "https://api.github.com/users/withshubh/events{/privacy}",
"received_events_url": "https://api.github.com/users/withshubh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Rather than fixing code issues, this looks like an opinionated take on the code structure. We like using list/dict comprehensions as we find them explicit, what doesn't `DeepSource` like about these?\r\n\r\nWe already have code quality setup, with `black`, `isort`, `flake8` and our own tools for code maintainability. I think this is enough already, and don't see the advantage of adding another code quality tool.\r\n\r\nWhy should we add DeepSource to our code quality stack?",
"Hi @LysandreJik :wave: \r\n\r\n> Hello! Rather than fixing code issues, this looks like an opinionated take on the code structure. We like using list/dict comprehensions as we find them explicit, what doesn't `DeepSource` like about these?\r\n\r\nDeepSource suggests these because using the other can give a minor performance boost:\r\nExample:\r\n```\r\nIn [3]: timeit.timeit(stmt=\"{num: square for num, square in zip(first_hundred_nums, first_hundred_squares)}\", globals=globals()) \r\nOut[3]: 5.606797965000624\r\n\r\nIn [4]: timeit.timeit(stmt=\"dict(zip(first_hundred_nums, first_hundred_squares))\", globals=globals()) \r\nOut[4]: 4.588974316000531\r\n```\r\nAlso, the inbuilt functions `all()` and `any()` in python also support short-circuiting (evaluation stops as soon as the overall return value of the function is known), but this behavior is lost if you use comprehension.\r\n\r\n> We already have code quality setup, with `black`, `isort`, `flake8` and our own tools for code maintainability. I think this is enough already, and don't see the advantage of adding another code quality tool.\r\n> Why should we add DeepSource to our code quality stack?\r\n\r\n- DeepSource internally runs black/isort/flake8 checks too. In addition to that: if your codebase is strictly following the conventions from these tools, instead of failing the checks and depending on the contributors to fix it, DeepSource Transformers can do this for you (commit to the same PR with the fixes). Also, you won't need to look after the version upgrades for these tools.\r\n- Fix the issues. DeepSource can automatically fix some of the issues it detects (also includes some flake8 issues) with just a click. Read more about this [here](https://deepsource.io/blog/code-formatting-on-autopilot/).\r\n- DeepSource's own code quality checks.\r\n- Option to analyze only modified code: DeepSource will show you only newly introduced code quality issues for a changeset. Read more about it [here](https://deepsource.io/blog/release-granular-diffs/).",
"Hi @LysandreJik :wave: \r\n\r\nPlease have a look :eyes: ",
"Hey! @LysandreJik @mfuntowicz :wave: \r\n\r\nPlease have a look at this! :eyes: ",
"I vote to stay with our current tooling for now.\r\n\r\nWe'll follow your work at DeepSource and could reconsider it in a few month, ie. the summer but no need to ping us more for now.\r\n\r\nThanks."
] | 1,607 | 1,615 | 1,615 | NONE | null | This pull request fixes some of the code quality issues raised by DeepSource on my fork of this repository.
I have already fixed some issues using DeepSource's Autofix.
Take a quick look at all the issues caught by DeepSource for this repository [here](https://deepsource.io/gh/withshubh/transformers/issues/?category=recommended).
### Summary of fixes
- Remove unnecessary use of comprehension (a couple of these patterns are sketched after this list)
- Remove unused imports
- Use literal syntax instead of function calls to create data structure
- Remove unnecessary generator
- Remove unnecessary `return` statement
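For illustration, a couple of the patterns above in isolation (a sketch of the general refactors, not lines taken from this PR's diff):
```python
values = [1, 2, 3]

# Literal syntax instead of function calls to create data structures:
empty_map = {}  # rather than dict()
empty_seq = []  # rather than list()

# A generator expression keeps all()/any() short-circuiting,
# while a list comprehension builds the whole list first:
ok = all(x > 0 for x in values)  # rather than all([x > 0 for x in values])
```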
You can also have a look at the [configuration file](https://github.com/withshubh/transformers/blob/deepsource/.deepsource.toml) I used for DeepSource Analysis.
### Using DeepSource to continuously analyze your repository
- Merge this PR. I have included a `.deepsource.toml` in this PR, which you can use to configure your analysis settings.
- Install DeepSource on your repository [here](https://deepsource.io/signup).
- Activate analysis [here](https://deepsource.io/gh/huggingface/transformers/).
Feel free to merge this PR if you wish to fix the issues.✨ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8950/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8950",
"html_url": "https://github.com/huggingface/transformers/pull/8950",
"diff_url": "https://github.com/huggingface/transformers/pull/8950.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8950.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8949/comments | https://api.github.com/repos/huggingface/transformers/issues/8949/events | https://github.com/huggingface/transformers/pull/8949 | 757,961,050 | MDExOlB1bGxSZXF1ZXN0NTMzMjIyMTc1 | 8,949 | Adds flashcards to Glossary & makes small corrections | {
"login": "darigovresearch",
"id": 30328618,
"node_id": "MDQ6VXNlcjMwMzI4NjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/30328618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/darigovresearch",
"html_url": "https://github.com/darigovresearch",
"followers_url": "https://api.github.com/users/darigovresearch/followers",
"following_url": "https://api.github.com/users/darigovresearch/following{/other_user}",
"gists_url": "https://api.github.com/users/darigovresearch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/darigovresearch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/darigovresearch/subscriptions",
"organizations_url": "https://api.github.com/users/darigovresearch/orgs",
"repos_url": "https://api.github.com/users/darigovresearch/repos",
"events_url": "https://api.github.com/users/darigovresearch/events{/privacy}",
"received_events_url": "https://api.github.com/users/darigovresearch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks very much for the feedback, the relevant updates have been made as requested in a1295b7!",
"@sgugger Is there anything else that is required for this pull request to be merged?",
"@sgugger is there anything else that needs to be done on our side for this pull request to be merged?",
"Hi @darigovresearch, sorry I missed your previous ping. We have just added a [community page](https://huggingface.co/transformers/master/community.html) in the documentation and we would actually prefer to put the links to your flashcards there if that's okay.\r\n\r\nSorry again for the delay!",
"Hi @sgugger, no worries the updates have been made to the community.md file, the glossary.rst file still contains the corrections but has removed reference to the flashcards.\r\n\r\nIs there anything else you need for this to be merged?",
"Nope, that's perfect! Thanks a lot for your patience.",
"@sgugger no worries & thanks for merging it!\r\n\r\nWhen checking the page it appears that the rendering works in the .md file but not the final page - https://huggingface.co/transformers/master/community.html\r\n\r\nNot sure what it could be, any thoughts?\r\n\r\nPotentially add an extra blank line after the heading?\r\n\r\n\r\n",
"Yes I just tested locally and it was the new line missing. I added it in [this commit](https://github.com/huggingface/transformers/commit/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2) (directly on master).",
"Great, thanks for the heads up and for the help!"
] | 1,607 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Flashcards have been made using the glossary as a starting point and are now linked at the start of the glossary. Other small corrections & standardisations have also been made for consistency.
This pull request follows from the discussion in issue #8932
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8949/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8949",
"html_url": "https://github.com/huggingface/transformers/pull/8949",
"diff_url": "https://github.com/huggingface/transformers/pull/8949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8949.patch",
"merged_at": 1611167321000
} |
https://api.github.com/repos/huggingface/transformers/issues/8948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8948/comments | https://api.github.com/repos/huggingface/transformers/issues/8948/events | https://github.com/huggingface/transformers/pull/8948 | 757,955,567 | MDExOlB1bGxSZXF1ZXN0NTMzMjE4MjY0 | 8,948 | Add model card | {
"login": "sarnikowski",
"id": 52626521,
"node_id": "MDQ6VXNlcjUyNjI2NTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/52626521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarnikowski",
"html_url": "https://github.com/sarnikowski",
"followers_url": "https://api.github.com/users/sarnikowski/followers",
"following_url": "https://api.github.com/users/sarnikowski/following{/other_user}",
"gists_url": "https://api.github.com/users/sarnikowski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarnikowski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarnikowski/subscriptions",
"organizations_url": "https://api.github.com/users/sarnikowski/orgs",
"repos_url": "https://api.github.com/users/sarnikowski/repos",
"events_url": "https://api.github.com/users/sarnikowski/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarnikowski/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Adds a model card.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8948/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8948",
"html_url": "https://github.com/huggingface/transformers/pull/8948",
"diff_url": "https://github.com/huggingface/transformers/pull/8948.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8948.patch",
"merged_at": 1607271393000
} |
https://api.github.com/repos/huggingface/transformers/issues/8947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8947/comments | https://api.github.com/repos/huggingface/transformers/issues/8947/events | https://github.com/huggingface/transformers/pull/8947 | 757,947,499 | MDExOlB1bGxSZXF1ZXN0NTMzMjEyMTg4 | 8,947 | Fix QA pipeline on Windows | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
As reported on the [forum](https://discuss.huggingface.co/t/pipeline-example-in-the-doc-throws-an-error-question-answering/2632), there is a problem in the current pipeline on Windows. The root of the problem is that numpy int arrays have a different default dtype on Linux and on Windows; the following snippet:
```python
import numpy as np
x = np.array([1, 2, 3])
x.dtype
```
will print `dtype('int64')` on Linux/MacOS but `dtype('int32')` on Windows. So this means that just doing `torch.tensor(some_numpy_array)` may result in a tensor of dtype `int32` which PyTorch does not like. For future reference, the error:
```
Expected tensor for argument #1 'xxx' to have scalar type Long; but got torch.IntTensor instead
```
is usually a clear indicator of this behavior happening.
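A minimal sketch of the kind of cast involved (the array and tensor names here are illustrative, not the pipeline's actual variables):
```python
import numpy as np
import torch

arr = np.array([1, 2, 3])  # dtype is int32 on Windows, int64 on Linux/macOS
t = torch.tensor(arr)
if t.dtype == torch.int32:
    t = t.long()  # index-like arguments must be int64 ("long") tensors
```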
The PR fixes the QA pipeline by casting the tensors to long if they have the int type. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8947/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8947",
"html_url": "https://github.com/huggingface/transformers/pull/8947",
"diff_url": "https://github.com/huggingface/transformers/pull/8947.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8947.patch",
"merged_at": 1607352633000
} |
https://api.github.com/repos/huggingface/transformers/issues/8946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8946/comments | https://api.github.com/repos/huggingface/transformers/issues/8946/events | https://github.com/huggingface/transformers/issues/8946 | 757,930,419 | MDU6SXNzdWU3NTc5MzA0MTk= | 8,946 | Error during validation Trainer step | {
"login": "Javier-Jimenez99",
"id": 38747614,
"node_id": "MDQ6VXNlcjM4NzQ3NjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/38747614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Javier-Jimenez99",
"html_url": "https://github.com/Javier-Jimenez99",
"followers_url": "https://api.github.com/users/Javier-Jimenez99/followers",
"following_url": "https://api.github.com/users/Javier-Jimenez99/following{/other_user}",
"gists_url": "https://api.github.com/users/Javier-Jimenez99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Javier-Jimenez99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Javier-Jimenez99/subscriptions",
"organizations_url": "https://api.github.com/users/Javier-Jimenez99/orgs",
"repos_url": "https://api.github.com/users/Javier-Jimenez99/repos",
"events_url": "https://api.github.com/users/Javier-Jimenez99/events{/privacy}",
"received_events_url": "https://api.github.com/users/Javier-Jimenez99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there! The code is incomplete as we have no idea of what your dataset and model is. From the error message it looks like the problem is in the logits, so we would need the model to be able to reproduce the error.",
"Here is the full code:\r\n\r\n```\r\nimport torch \r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification,Trainer, TrainingArguments\r\nimport json\r\nfrom torch.utils.data import Dataset, DataLoader\r\nimport pandas as pd\r\nfrom transformers.trainer_callback import EarlyStoppingCallback\r\n\r\nclass dataset(Dataset):\r\n def __init__(self,data,labels,tokenizer):\r\n self.data = data\r\n self.labels = labels\r\n self.tokenizer= tokenizer\r\n\r\n def processText(self,text):\r\n return self.tokenizer(text, truncation=True)\r\n\r\n def __len__(self):\r\n return len(self.data.index)\r\n\r\n def __getitem__(self,i):\r\n row = self.data.iloc[i]\r\n x = self.processText(self.data.iloc[i]['x']).data\r\n\r\n try:\r\n y = self.labels.index(self.data.iloc[i]['y'])\r\n except:\r\n y = len(self.labels) - 1 \r\n\r\n x['label'] = y\r\n return x\r\n\r\ndef getLabels(data,nLabels):\r\n serie = data.pivot_table(index=['y'], aggfunc='size')\r\n\r\n labelsList = serie.sort_values(ascending=False).index.values.tolist() \r\n\r\n return labelsList[0:nLabels-1] + [\"OTHER\"]\r\n\r\ndef accuracy(evalPrediction):\r\n yPred = evalPrediction.predictions\r\n yTrue = evalPrediction.label_ids\r\n\r\n return {'accuracy':(yPred == yTrue).mean()}\r\n\r\ndf = pd.read_csv(\"/content/drive/MyDrive/SNOMED/Biopsias_HUPM_2010-2018_mor_codes-v1.csv\",low_memory=False)\r\ndf = df[[\"Diagnostico\", \"CodOrgano\"]]\r\n\r\ndata = df.rename(columns = {'Diagnostico':'x','CodOrgano':'y'})\r\ndata = data.dropna().reset_index(drop=True)\r\n\r\n#df = df.iloc[:1000,:]\r\n\r\nindex = df.index\r\nN = len(index)\r\nP = 0.7\r\nlimit = round(N*P)\r\n\r\ntrainData = data.iloc[:limit,:]\r\nvalidationData = data.iloc[limit:,:]\r\n\r\nnLabels = 51\r\n\r\nlabels = getLabels(data,nLabels)\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained('dccuchile/bert-base-spanish-wwm-uncased',num_labels = nLabels)\r\ntokenizer = AutoTokenizer.from_pretrained('dccuchile/bert-base-spanish-wwm-uncased',model_max_length = 128, use_fast=True)\r\n\r\ntrainDataset = dataset(trainData,labels,tokenizer)\r\nvalidationDataset = dataset(validationData,labels,tokenizer)\r\n\r\nargs = TrainingArguments(\"/content/drive/MyDrive/SNOMED/TrainingLog\",\r\n learning_rate = 0.0003,\r\n num_train_epochs = 10,\r\n per_device_train_batch_size = 32,\r\n per_device_eval_batch_size = 32,\r\n evaluation_strategy = \"epoch\",\r\n label_names = labels,\r\n disable_tqdm = False,\r\n dataloader_num_workers = 6,\r\n load_best_model_at_end = True,\r\n metric_for_best_model = \"accuracy\",\r\n greater_is_better = True)\r\n\r\nprint(\"\\nDEVICE:\",args.device)\r\n\r\ncallbacks = [EarlyStoppingCallback(2,0.8)]\r\n\r\ntrainer = Trainer(model,\r\n args = args,\r\n train_dataset = trainDataset, \r\n eval_dataset = validationDataset,\r\n tokenizer = tokenizer, \r\n callbacks = callbacks,\r\n compute_metrics = accuracy)\r\n\r\ntrainer.train()\r\n```\r\n\r\nHere is the notebook where it can be checked easily: [https://colab.research.google.com/drive/1VCacM-CDl2xrIFfwsrkmEh-D0IswK61D?usp=sharing](url)\r\n\r\nI'm not sure but, do the model need ```return_dict = True```?",
"One thing that may be linked to this is the `label_names = labels` in your training arguments. `label_names` is the name(s) of the field containing your labels. In this case, the default (which is `[\"labels\"]`) is what you want, so you should leave it as is.",
"I changed my dataset to save the label on \"labels\" and it worked. It was a really silly problem, thank you so much!!",
"The same silly problem happens on me, thx a lot!!!!!!!!!!!!!😵💫"
] | 1,607 | 1,648 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (Dataloaders)
@sgugger
## Information
I'm using BERT for sequence classification. I have built my own PyTorch dataset with my data. During training there is no problem, but when evaluation starts it gives an error with the following message:
```
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path, trial)
801
802 self.control = self.callback_handler.on_epoch_end(self.args, self.state, self.control)
--> 803 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
804
805 if self.args.tpu_metrics_debug or self.args.debug:
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch)
863 metrics = None
864 if self.control.should_evaluate:
--> 865 metrics = self.evaluate()
866 self._report_to_hp_search(trial, epoch, metrics)
867
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in evaluate(self, eval_dataset, ignore_keys)
1278 # self.args.prediction_loss_only
1279 prediction_loss_only=True if self.compute_metrics is None else None,
-> 1280 ignore_keys=ignore_keys,
1281 )
1282
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in prediction_loop(self, dataloader, description, prediction_loss_only, ignore_keys)
1387 losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0)
1388 if logits is not None:
-> 1389 preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
1390 if labels is not None:
1391 labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in nested_concat(tensors, new_tensors, padding_index)
82 ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}."
83 if isinstance(tensors, (list, tuple)):
---> 84 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
85 elif isinstance(tensors, torch.Tensor):
86 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in <genexpr>(.0)
82 ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}."
83 if isinstance(tensors, (list, tuple)):
---> 84 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
85 elif isinstance(tensors, torch.Tensor):
86 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in nested_concat(tensors, new_tensors, padding_index)
84 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
85 elif isinstance(tensors, torch.Tensor):
---> 86 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
87 elif isinstance(tensors, np.ndarray):
88 return numpy_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in torch_pad_and_concatenate(tensor1, tensor2, padding_index)
45 def torch_pad_and_concatenate(tensor1, tensor2, padding_index=-100):
46 """Concatenates `tensor1` and `tensor2` on first axis, applying padding on the second if necessary."""
---> 47 if len(tensor1.shape) == 1 or tensor1.shape[1] == tensor2.shape[1]:
48 return torch.cat((tensor1, tensor2), dim=0)
49
IndexError: tuple index out of range
```
## To reproduce
Here is the code I used:
```
args = TrainingArguments("/content/drive/MyDrive/SNOMED/TrainingLog",
learning_rate = 0.0003,
num_train_epochs = 10,
per_device_train_batch_size = 32,
per_device_eval_batch_size = 32,
evaluation_strategy = "epoch",
label_names = labels,
disable_tqdm = False,
dataloader_num_workers = 6,
load_best_model_at_end = True,
metric_for_best_model = "accuracy",
greater_is_better = True)
print("\nDEVICE:",args.device)
callbacks = [EarlyStoppingCallback(2,0.8)]
trainer = Trainer(model,
args = args,
train_dataset = trainDataset,
eval_dataset = validationDataset,
tokenizer = tokenizer,
callbacks = callbacks,
compute_metrics = accuracy)
trainer.train()
```
Both datasets have the same structure. Each item has the `BatchEncoding.data` dict, with a field 'label' added.
## Expected behavior
It should do the evaluation step correctly.
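For reference, the change that resolved this (per the comments above): the `Trainer`'s default loss and metric logic looks for a `"labels"` key, and `label_names` in `TrainingArguments` is the name of the label *field*, not the list of class names, so `label_names = labels` should simply be dropped. A minimal sketch of the corrected `__getitem__` from the snippet above:
```python
def __getitem__(self, i):
    x = self.processText(self.data.iloc[i]['x']).data
    try:
        y = self.labels.index(self.data.iloc[i]['y'])
    except ValueError:
        y = len(self.labels) - 1
    x['labels'] = y  # "labels", not "label", is what Trainer expects by default
    return x
```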
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8946/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8945/comments | https://api.github.com/repos/huggingface/transformers/issues/8945/events | https://github.com/huggingface/transformers/issues/8945 | 757,901,013 | MDU6SXNzdWU3NTc5MDEwMTM= | 8,945 | Sparse Transformer | {
"login": "turian",
"id": 65918,
"node_id": "MDQ6VXNlcjY1OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/65918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turian",
"html_url": "https://github.com/turian",
"followers_url": "https://api.github.com/users/turian/followers",
"following_url": "https://api.github.com/users/turian/following{/other_user}",
"gists_url": "https://api.github.com/users/turian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turian/subscriptions",
"organizations_url": "https://api.github.com/users/turian/orgs",
"repos_url": "https://api.github.com/users/turian/repos",
"events_url": "https://api.github.com/users/turian/events{/privacy}",
"received_events_url": "https://api.github.com/users/turian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hi there,\nHappy to consult on anything. The sparse attention kernels included above\nare very fast, but require building blocksparse -- not sure if this will\nwork for you all.\nRewon\n\nOn Sun, Dec 6, 2020 at 3:10 AM Joseph Turian <[email protected]>\nwrote:\n\n> 🌟 New model addition Model description\n>\n> Sparse Transformers (https://openai.com/blog/sparse-transformer/) are one\n> of the two most efficient transformers for long range problems, according\n> to Google's Long Arena paper: https://arxiv.org/pdf/2011.04006.pdf (Big\n> Bird) is the other one.\n>\n> The original Sparse Transformers work shows great results on text, images,\n> and audio. Further OpenAI work Jukebox (https://openai.com/blog/jukebox/)\n> uses Sparse Transformers to generate incredibly long raw music audio with\n> style transfer. Lastly\n> https://proceedings.icml.cc/static/paper_files/icml/2020/6095-Paper.pdf\n> uses Sparse Transformers to achieve state-of-the-art CIFAR performance.\n> Open source status\n>\n> - the model implementation is available:\n>\n> latest version, for CIFAR:\n> https://github.com/openai/distribution_augmentation\n> original, but not maintained: https://github.com/openai/sparse_attention\n> Alternate implementation from FAIR:\n> https://github.com/pytorch/fairseq/blob/master/fairseq/modules/sparse_multihead_attention.py\n>\n> - the model weights are available:\n>\n> https://github.com/openai/distribution_augmentation (CIFAR work) has\n> model weights available, as described in the README:\n> https://openaipublic.blob.core.windows.net/distribution-augmentation-assets/models/c10-15m-baseline.npz\n>\n> Jukebox is open-source and has model weights, but is a larger pipeline\n> that includes VQ-VAEs so it may not be of interest for a transformers-only\n> library.\n>\n> - who are the authors: @rewonc <https://github.com/rewonc> @myleott\n> <https://github.com/myleott> @cclauss <https://github.com/cclauss>\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8945>, or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAYEDVCS3HWQ3EIPG4I77WTSTNRDNANCNFSM4UPHW52Q>\n> .\n>\n",
"cc'ing @madlag for info"
] | 1,607 | 1,607 | null | NONE | null | # 🌟 New model addition
## Model description
Sparse Transformers (https://openai.com/blog/sparse-transformer/) are one of the two most efficient transformers for long-range problems, according to Google's Long Range Arena paper (https://arxiv.org/pdf/2011.04006.pdf); Big Bird is the other one.
The original Sparse Transformers work shows great results on text, images, and audio. Further OpenAI work Jukebox (https://openai.com/blog/jukebox/) uses Sparse Transformers to generate incredibly long raw music audio with style transfer. Lastly https://proceedings.icml.cc/static/paper_files/icml/2020/6095-Paper.pdf uses Sparse Transformers to achieve state-of-the-art CIFAR performance.
## Open source status
* [x] the model implementation is available:
  - latest version, for CIFAR: https://github.com/openai/distribution_augmentation
  - original, but not maintained: https://github.com/openai/sparse_attention
  - alternate implementation from FAIR: https://github.com/pytorch/fairseq/blob/master/fairseq/modules/sparse_multihead_attention.py
* [x] the model weights are available:
  - https://github.com/openai/distribution_augmentation (the CIFAR work) has model weights available, as described in the README: https://openaipublic.blob.core.windows.net/distribution-augmentation-assets/models/c10-15m-baseline.npz
  - Jukebox is open-source and has model weights, but is a larger pipeline that includes VQ-VAEs, so it may not be of interest for a transformers-only library.
* [x] who are the authors: @rewonc @myleott @cclauss
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8945/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8945/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8944/comments | https://api.github.com/repos/huggingface/transformers/issues/8944/events | https://github.com/huggingface/transformers/issues/8944 | 757,898,945 | MDU6SXNzdWU3NTc4OTg5NDU= | 8,944 | how to use EncoderDecoderModel to do en-de translation? | {
"login": "CharizardAcademy",
"id": 20318555,
"node_id": "MDQ6VXNlcjIwMzE4NTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/20318555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CharizardAcademy",
"html_url": "https://github.com/CharizardAcademy",
"followers_url": "https://api.github.com/users/CharizardAcademy/followers",
"following_url": "https://api.github.com/users/CharizardAcademy/following{/other_user}",
"gists_url": "https://api.github.com/users/CharizardAcademy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CharizardAcademy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CharizardAcademy/subscriptions",
"organizations_url": "https://api.github.com/users/CharizardAcademy/orgs",
"repos_url": "https://api.github.com/users/CharizardAcademy/repos",
"events_url": "https://api.github.com/users/CharizardAcademy/events{/privacy}",
"received_events_url": "https://api.github.com/users/CharizardAcademy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!\r\n\r\ncc @patrickvonplaten who might have an idea.",
"This blog post should also help on how to fine-tune a warm-started Encoder-Decoder model: https://huggingface.co/blog/warm-starting-encoder-decoder . But as @LysandreJik said the forum is the better place to ask.",
"@patrickvonplaten the blog post mentions about a notebook link for machine translation task but on clicking, it redirects to the blog only. I think there might be some mistake while adding the notebook link. Can you please share the translation task notebook on WMT dataset?",
"Hey @zmf0507 - yeah I sadly haven't found the time yet to do this notebook",
"@patrickvonplaten please let me know here when you make one. Despite being so popular, hugging-face doesn't provide any tutorial/notebook for machine translation. I think a lot of people might be looking for similar resources. Will help much. Thanks",
"We have now one for mBart: https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb -> will try to make one for Encoder Decoder as well when I find time :-) ",
"sure. thanks a lot :)",
"@patrickvonplaten is there any encoder-decoder notebook made for translation task ? thanks ",
"I'm sadly not finding the time to do so at the moment :-/ \r\n\r\nI'll put this up as a \"Good First Issue\" now in case someone from the community finds time to make such a notebook.\r\n\r\nA notebook for EncoderDecoderModel translation should look very similar to this notebook: https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Leveraging_Pre_trained_Checkpoints_for_Encoder_Decoder_Models.ipynb - one only has to change the summarization dataset with a translation dataset",
"@patrickvonplaten thanks for the update.\r\nCan you tell if there is any work on keyphrase generation /keywords generation (seq2seq task) using hugging-face ? I am looking for such tutorials and examples where I can try and play around keyphrase generation. This task is not mentioned on hugging-face notebooks page as well.\r\nPlease let me know",
"My best advice would be to ask this question on the [forum](https://discuss.huggingface.co/) - I sadly don't know of any work related to this",
"@patrickvonplaten : Here's my [attempt](https://gist.github.com/parambharat/6870f3a32537f5febac70f7fd876e90c) that modifies the condensed version of [BERT2BERT.ipynb](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) to use the wmt dataset, BLEU4 score for the en-de translation task. ",
"> We have now one for mBart: https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb -> will try to make one for Encoder-Decoder as well when I find time :-)\r\n\r\nInferring the model training details from BERT2BERT for CNN daily mail is not sufficient, we experimented with an MT model with the must-c data for en-fr , however the prediction were almost random and it was not able to understand the core meaning of its input sequence.",
"If anyone has a complete notebook based on the Encoder-Decoder model for MT, please share. Thank you.",
"Has anyone performed the translation task correctly using bert2bert ? TAT",
"@xueqianyi - maybe you have more luck on https://discuss.huggingface.co/ ? ",
"Just an extra comment here: With bert2bert, it's not very helpful for MT, as BERT is only trained on English data.",
"Hi there, I'm a Data Science grad student at Luddy. I was looking to contribute to open source in my free time and came across this issue. I did put a rough notebook together, linking it [here](https://colab.research.google.com/drive/1uaXsyu3S7LizulA3m6Fp__F9Fxu5AU97?usp=sharing) @xueqianyi @CharizardAcademy. I would love to polish it to the standard upheld in the HF community if its indeed helpful. \r\n\r\nJust some comments (I did NOT spend a lot of time on this, so your observations MIGHT differ):\r\n\r\n1) The translation quality depends a lot on model capacity, though even using base BERT, the translations are fairly decent and definitely not gibberish. Tweaking the decoding parameters will help too. \r\n\r\n2) I've trained only on 1M examples due to compute constraints, but I believe some multiples higher might work out better. I trained with 0.1M and 0.5M examples, I saw consistent improvements to the BLEU score on every increase. \r\n\r\n3) Length of the tensors fed into the model (post-tokenization) have an impact on the translation quality too. Specifically max_length=64 and higher results in a lot of repetitions especially for short sentences because this particular dataset (1M subset) has most examples below 32 tokens (95%) (hence I recommend spending sometime tweaking the decoding parameters, no_repeat_ngram_size, max_length, length_penality etc in particular). \r\n\r\n4) Also, the model seems to think President Obama and President Bush are the same person, EVERYTIME. xD ",
"I would like to work on this issue"
] | 1,607 | 1,697 | null | NONE | null | I have trained an `EncoderDecoderModel` from Hugging Face to do an English-German translation task. I tried to overfit a small dataset (100 parallel sentences) and used `model.generate()` then `tokenizer.decode()` to perform the translation. However, the output consists of proper German sentences, but it is definitely not the correct translation.
Here is the code for building the model:
```python
encoder_config = BertConfig()
decoder_config = BertConfig()
config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = EncoderDecoderModel(config=config)
```
Here is the code for testing the model:
```python
model.eval()
input_ids = torch.tensor(tokenizer.encode(input_text)).unsqueeze(0)
output_ids = model.generate(input_ids.to('cuda'), decoder_start_token_id=model.config.decoder.pad_token_id)
output_text = tokenizer.decode(output_ids[0])
```
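One thing worth checking, per the warm-starting blog post linked in the comments: a randomly initialized BERT2BERT usually needs its special-token ids set on the config before training and generation. A sketch of those settings (assuming a BERT-style tokenizer; this is an annotation, not part of the original report):
```python
# Settings along the lines of the warm-starting blog post; adjust to your tokenizer.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.encoder.vocab_size
```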
Example input: "iron cement is a ready for use paste which is laid as a fillet by putty knife or finger in the mould edges ( corners ) of the steel ingot mould ."
Ground truth translation: "iron cement ist eine gebrauchs ##AT##-##AT## fertige Paste , die mit einem Spachtel oder den Fingern als Hohlkehle in die Formecken ( Winkel ) der Stahlguss -Kokille aufgetragen wird ."
What the model outputs after training for 100 epochs: "[S] wenn sie den unten stehenden link anklicken, sehen sie ein video uber die erstellung ansprechender illustrationen in quarkxpress" (roughly: "if you click the link below, you will see a video about creating appealing illustrations in QuarkXPress"), which is total nonsense.
Where is the problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8944/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8943/comments | https://api.github.com/repos/huggingface/transformers/issues/8943/events | https://github.com/huggingface/transformers/issues/8943 | 757,882,155 | MDU6SXNzdWU3NTc4ODIxNTU= | 8,943 | Why BertSelfAttention reshape Q,K,V from 3-D tensor to 4-D tensor | {
"login": "daydayfun",
"id": 39835967,
"node_id": "MDQ6VXNlcjM5ODM1OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/39835967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daydayfun",
"html_url": "https://github.com/daydayfun",
"followers_url": "https://api.github.com/users/daydayfun/followers",
"following_url": "https://api.github.com/users/daydayfun/following{/other_user}",
"gists_url": "https://api.github.com/users/daydayfun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daydayfun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daydayfun/subscriptions",
"organizations_url": "https://api.github.com/users/daydayfun/orgs",
"repos_url": "https://api.github.com/users/daydayfun/repos",
"events_url": "https://api.github.com/users/daydayfun/events{/privacy}",
"received_events_url": "https://api.github.com/users/daydayfun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,607 | 1,607 | 1,607 | NONE | null | # 🌟 New model addition
## Model description
https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py
```python
def transpose_for_scores(self, x):
    new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
    x = x.view(*new_x_shape)
    return x.permute(0, 2, 1, 3)

query_layer = self.transpose_for_scores(mixed_query_layer)
key_layer = self.transpose_for_scores(mixed_key_layer)
value_layer = self.transpose_for_scores(mixed_value_layer)
```
## Open source status
Question:
1. Why must we transpose Q, K, V from a 3-D tensor to a 4-D tensor?
2. What happens if we just use the 3-D Q, K, V to do torch.matmul?
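For illustration, here is a minimal standalone sketch (all sizes are made up) of what the 4-D reshape buys: the batched matmul computes one attention-score map per head, whereas a plain 3-D matmul would mix all heads into a single score map:
```python
import torch

batch, seq_len, num_heads, head_size = 2, 4, 12, 64
hidden = num_heads * head_size

q = torch.randn(batch, seq_len, hidden)
k = torch.randn(batch, seq_len, hidden)

# 3-D matmul: all heads are mixed into one (seq_len x seq_len) score map
scores_3d = torch.matmul(q, k.transpose(-1, -2))  # (batch, seq, seq)

# 4-D reshape keeps heads separate: one (seq_len x seq_len) map per head
q4 = q.view(batch, seq_len, num_heads, head_size).permute(0, 2, 1, 3)
k4 = k.view(batch, seq_len, num_heads, head_size).permute(0, 2, 1, 3)
scores_4d = torch.matmul(q4, k4.transpose(-1, -2))  # (batch, heads, seq, seq)

print(scores_3d.shape, scores_4d.shape)
```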
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8943/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8942/comments | https://api.github.com/repos/huggingface/transformers/issues/8942/events | https://github.com/huggingface/transformers/issues/8942 | 757,878,311 | MDU6SXNzdWU3NTc4NzgzMTE= | 8,942 | NER Pipeline Issue | {
"login": "albertnanda",
"id": 20819507,
"node_id": "MDQ6VXNlcjIwODE5NTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20819507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertnanda",
"html_url": "https://github.com/albertnanda",
"followers_url": "https://api.github.com/users/albertnanda/followers",
"following_url": "https://api.github.com/users/albertnanda/following{/other_user}",
"gists_url": "https://api.github.com/users/albertnanda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertnanda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertnanda/subscriptions",
"organizations_url": "https://api.github.com/users/albertnanda/orgs",
"repos_url": "https://api.github.com/users/albertnanda/repos",
"events_url": "https://api.github.com/users/albertnanda/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertnanda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Btw, this works fine.\r\n\r\n```\r\nfrom transformers import pipeline\r\nner = pipeline(\"ner\", grouped_entities=True)\r\n\r\nsequence = \"\"\"\r\nHugging Face Inc. is a company based in New York City.\r\nIts headquarters are in DUMBO, therefore very close to the Manhattan Bridge which is visible from the window.\r\n\"\"\"\r\noutput = ner(sequence)\r\n\r\nprint(output)\r\n```\r\n\r\n```\r\n[{'entity_group': 'ORG', 'score': 0.9970663785934448, 'word': 'Hugging Face Inc'}, {'entity_group': 'LOC', 'score': 0.9993778467178345, 'word': 'New York City'}, {'entity_group': 'LOC', 'score': 0.9571147759755453, 'word': 'DUMBO'}, {'entity_group': 'LOC', 'score': 0.983814150094986, 'word': 'Manhattan Bridge'}]\r\n```",
"@devansvd That's just a single sequence, I want to pass multiple sequences i.e. list of strings.",
"@albertnanda, @devansvd, @LysandreJik, I still get this issue in `v4.2.0`, even if `padding=True` and `truncation=True`. I tried all variants of padding and truncation with and without `grouped_entities=True` and got the same error as above. Did you figure out a solution besides feeding in the narratives one by one?\r\n\r\n```\r\nnlp = pipeline(\"ner\", model=MODEL_NAME, tokenizer=TOKENIZER_NAME, grouped_entities=True)\r\nresults = nlp(narratives, padding=True, truncation=True)\r\n```",
"Hello! Could you try again on the `master` branch and let us know if it works? https://github.com/huggingface/transformers/pull/10184 was recently merged and it should fix the issue. Thanks!",
"@LysandreJik This works, but this runs the model sequentially over the list of text. Can we add batching support. It would be way faster then. Without it, this change has little significance, the only thing it does is save 1 line of code i.e \r\n```\r\n[nlp(text) for text in texts]\r\n```",
"Sure, we could look in adding batching support, that would indeed make things much faster! Would you like to try your hand at it?",
"Sure, let me see if I can add batching support.",
"Working on batching for NER pipeline in this PR - https://github.com/huggingface/transformers/pull/11251",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,621 | 1,621 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
@stefan-it
I am trying to pass multiple sentences to the NER pipeline, but it fails with the following error message:
```ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.```
Code to reproduce:
```
nlp = pipeline("ner")
nlp(["Some dummy text", "some more dummy text"])
```
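As a stopgap (not a fix for batching itself), running the texts one at a time avoids the error; a minimal sketch:
```python
from transformers import pipeline

nlp = pipeline("ner")
texts = ["Some dummy text", "some more dummy text"]
# sequential calls sidestep the need for equal-length batched tensors
results = [nlp(text) for text in texts]
print(results)
```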
Also, the output data structure is wrong. For example, `nlp(["City New York", "City New York"])` should return one result per input sequence as per the documentation, but it returns only a single merged result:
```
[{'word': 'City', 'score': 0.6329959034919739, 'entity': 'I-LOC', 'index': 1},
 {'word': 'New', 'score': 0.5934403538703918, 'entity': 'I-LOC', 'index': 2},
 {'word': 'York', 'score': 0.728114128112793, 'entity': 'I-LOC', 'index': 3}]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8942/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8941/comments | https://api.github.com/repos/huggingface/transformers/issues/8941/events | https://github.com/huggingface/transformers/issues/8941 | 757,877,102 | MDU6SXNzdWU3NTc4NzcxMDI= | 8,941 | Error running source code -- import | {
"login": "RandolphShi",
"id": 24260605,
"node_id": "MDQ6VXNlcjI0MjYwNjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24260605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RandolphShi",
"html_url": "https://github.com/RandolphShi",
"followers_url": "https://api.github.com/users/RandolphShi/followers",
"following_url": "https://api.github.com/users/RandolphShi/following{/other_user}",
"gists_url": "https://api.github.com/users/RandolphShi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RandolphShi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RandolphShi/subscriptions",
"organizations_url": "https://api.github.com/users/RandolphShi/orgs",
"repos_url": "https://api.github.com/users/RandolphShi/repos",
"events_url": "https://api.github.com/users/RandolphShi/events{/privacy}",
"received_events_url": "https://api.github.com/users/RandolphShi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, what are you trying to do?\r\n\r\nThe `modeling_utils.py` is an internal file that defines objects to be used by models. I invite you to read the documentation, especially the [quick tour](https://huggingface.co/transformers/quicktour.html).",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | Hi, I have downloaded source code of transformers and tried to run modeling_utils.py.
However, it seems that there are a lot of import errors.
Am I running it the wrong way?
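For context, the intended entry point is the library API rather than executing internal files like `modeling_utils.py` directly; a minimal sketch, following the quick tour mentioned in the reply above:
```python
from transformers import pipeline

# the package is meant to be imported, not run file by file
classifier = pipeline("sentiment-analysis")
print(classifier("Importing transformers this way works!"))
```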
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8941/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8940/comments | https://api.github.com/repos/huggingface/transformers/issues/8940/events | https://github.com/huggingface/transformers/issues/8940 | 757,773,448 | MDU6SXNzdWU3NTc3NzM0NDg= | 8,940 | failure to use conda-forge apex with torch1.6 and --amp_backend='apex' + --fp16_opt_level O1 | {
"login": "XiangLi1999",
"id": 29054786,
"node_id": "MDQ6VXNlcjI5MDU0Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/29054786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XiangLi1999",
"html_url": "https://github.com/XiangLi1999",
"followers_url": "https://api.github.com/users/XiangLi1999/followers",
"following_url": "https://api.github.com/users/XiangLi1999/following{/other_user}",
"gists_url": "https://api.github.com/users/XiangLi1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XiangLi1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XiangLi1999/subscriptions",
"organizations_url": "https://api.github.com/users/XiangLi1999/orgs",
"repos_url": "https://api.github.com/users/XiangLi1999/repos",
"events_url": "https://api.github.com/users/XiangLi1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/XiangLi1999/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@XiangLi1999, thank you for making a separate issue for this.\r\n\r\n(note: I edited your post to format the error traceback and also edited the link to point to the relevant comment (https://github.com/huggingface/transformers/issues/8403#issuecomment-724787083) - if you click in the right upper corner of the comment - you will see an option to copy a link to that comment and not just thread.)\r\n\r\nYes, I saw that error but didn't have a chance to try to understand the cause at that time - I had a closer look now and this seems to be a `pytorch-lightning` bug - so you might have to ask via their issue tracker.\r\n\r\nI can suggest two possible solutions:\r\n\r\n1. Download pytorch-nightly which has the leak fixed so you can use native amp no problem. \r\n\r\n`pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U`\r\n\r\n2. There is also a relatively new `finetune_trainer.py` in the same directory, which uses HF trainer. Have a look and perhaps it'd work better for you.\r\n\r\nPlease let me know if one of the proposed solutions addresses your needs.\r\n\r\nAnd feel free to file a bug report with pytorch-lightning if you'd like to follow that use case through.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform: Linux-4.4.0-1111-aws-x86_64-with-debian-stretch-sid
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
@stas00
## Information
Model I am using: BART
The problem arises when using:
torch 1.6 + conda-forge apex w/ --fp16 --amp_backend='apex' + --fp16_opt_level O1 to run finetune.py
```
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 429, in fit
self.accelerator_backend.setup(model)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 53, in setup
model = self.trainer.precision_connector.connect(model)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/precision_connector.py", line 78, in connect
model, optimizers = self.backend.connect(model, self.trainer.optimizers)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/plugins/apex.py", line 38, in connect
self.trainer.reinit_scheduler_properties(optimizers, self.trainer.lr_schedulers)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/optimizers.py", line 143, in reinit_scheduler_properties
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 74, in __init__
self.optimizer.step = with_counter(self.optimizer.step)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 56, in with_counter
instance_ref = weakref.ref(method.__self__)
AttributeError: 'function' object has no attribute '__self__'
```
The task I am working on is:
* [ ] Finetune BART on XSUM
## To reproduce
Steps to reproduce the behavior:
It is the same issue mentioned in https://github.com/huggingface/transformers/issues/8403#issuecomment-724787083
If you search (Ctrl+F) for `__self__` in that thread, you will find the same error.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8940/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8939/comments | https://api.github.com/repos/huggingface/transformers/issues/8939/events | https://github.com/huggingface/transformers/issues/8939 | 757,772,401 | MDU6SXNzdWU3NTc3NzI0MDE= | 8,939 | sorry I mistakenly submitted a issue twice. Plz ignore (help delete) this one. | {
"login": "XiangLi1999",
"id": 29054786,
"node_id": "MDQ6VXNlcjI5MDU0Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/29054786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XiangLi1999",
"html_url": "https://github.com/XiangLi1999",
"followers_url": "https://api.github.com/users/XiangLi1999/followers",
"following_url": "https://api.github.com/users/XiangLi1999/following{/other_user}",
"gists_url": "https://api.github.com/users/XiangLi1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XiangLi1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XiangLi1999/subscriptions",
"organizations_url": "https://api.github.com/users/XiangLi1999/orgs",
"repos_url": "https://api.github.com/users/XiangLi1999/repos",
"events_url": "https://api.github.com/users/XiangLi1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/XiangLi1999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
- `transformers` version: 3.2.0
- Platform: Linux-4.4.0-1111-aws-x86_64-with-debian-stretch-sid
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@stas00
## Information
Model I am using (Bert, XLNet ...): BART
The problem arises when using:
* torch 1.6 + conda-forge apex w/ --fp16 --amp_backend='apex' + --fp16_opt_level O1 to run finetune.py
Got this error:
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 429, in fit
self.accelerator_backend.setup(model)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 53, in setup
model = self.trainer.precision_connector.connect(model)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/precision_connector.py", line 78, in connect
model, optimizers = self.backend.connect(model, self.trainer.optimizers)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/plugins/apex.py", line 38, in connect
self.trainer.reinit_scheduler_properties(optimizers, self.trainer.lr_schedulers)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/optimizers.py", line 143, in reinit_scheduler_properties
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 74, in __init__
self.optimizer.step = with_counter(self.optimizer.step)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 56, in with_counter
instance_ref = weakref.ref(method.__self__)
AttributeError: 'function' object has no attribute '__self__'
This is the same error reported in one of the replies in https://github.com/huggingface/transformers/issues/8403...
The task I am working on is:
* [ ] finetuning BART on XSUM.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8939/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8939/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8938/comments | https://api.github.com/repos/huggingface/transformers/issues/8938/events | https://github.com/huggingface/transformers/issues/8938 | 757,767,735 | MDU6SXNzdWU3NTc3Njc3MzU= | 8,938 | MobileBertForSequenceClassification outputs super-high logits | {
"login": "NadiaRom",
"id": 17527845,
"node_id": "MDQ6VXNlcjE3NTI3ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/17527845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NadiaRom",
"html_url": "https://github.com/NadiaRom",
"followers_url": "https://api.github.com/users/NadiaRom/followers",
"following_url": "https://api.github.com/users/NadiaRom/following{/other_user}",
"gists_url": "https://api.github.com/users/NadiaRom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NadiaRom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NadiaRom/subscriptions",
"organizations_url": "https://api.github.com/users/NadiaRom/orgs",
"repos_url": "https://api.github.com/users/NadiaRom/repos",
"events_url": "https://api.github.com/users/NadiaRom/events{/privacy}",
"received_events_url": "https://api.github.com/users/NadiaRom/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I see the same behavior when trying it out with IMDB classification. \r\nI solved it by passing `classifier_activation=True` for the `from_pretrained` function. \r\n[Documentation](https://huggingface.co/transformers/model_doc/mobilebert.html#mobilebertconfig) says it is `True` by default, however it does not seem like it. \r\n[EDIT] Apparently this changes the behavior of the pooling layer ",
"@hfawaz Thank you for solving this issue, it's very helpful",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Ubuntu 20.04
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0
- Tensorflow version (GPU?): None
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): MobileBertForSequenceClassification
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: please see an example below
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: simple text classification using documents from sec.gov
## To reproduce
Steps to reproduce the behavior:
I am training a whole-text classifier with MobileBert using MobileBertForSequenceClassification:
```python
from transformers import MobileBertForSequenceClassification, \
MobileBertTokenizerFast
ARCH = 'google/mobilebert-uncased'
model = MobileBertForSequenceClassification.from_pretrained(ARCH).cuda()
tokenizer = MobileBertTokenizerFast.from_pretrained(ARCH)
x = tokenizer(['def hello(): return "world"', 'This is some test'],
max_length=512,
truncation=True,
return_tensors='pt',
padding='longest')
with torch.no_grad():
l = model(**x.to(model.device)).logits
```
Resulting model outputs are extremely high:
```
tensor([[ 3289181.7500, -2371234.0000],
[ 3198336.7500, -1882639.8750]])
```
Loading model and tokenizer with Auto- classes gives the same result.
When using the pooled output from MobileBertModel with a custom linear head (BatchNorm1d+Dropout+Linear), everything works fine.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expect logits to be roughly within [-3, 3], not in the 6-7 digit range.
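A workaround reported in the comments above; a minimal sketch, assuming `classifier_activation` changes the pooling-layer behavior as described there:
```python
from transformers import MobileBertForSequenceClassification

# per the comments, enabling the pooler activation keeps the logits in a sane range
model = MobileBertForSequenceClassification.from_pretrained(
    "google/mobilebert-uncased",
    num_labels=2,
    classifier_activation=True,
)
```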
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8937/comments | https://api.github.com/repos/huggingface/transformers/issues/8937/events | https://github.com/huggingface/transformers/issues/8937 | 757,745,425 | MDU6SXNzdWU3NTc3NDU0MjU= | 8,937 | Gradients of BERT layer outputs to inputs | {
"login": "NitinTitus",
"id": 5567628,
"node_id": "MDQ6VXNlcjU1Njc2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5567628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NitinTitus",
"html_url": "https://github.com/NitinTitus",
"followers_url": "https://api.github.com/users/NitinTitus/followers",
"following_url": "https://api.github.com/users/NitinTitus/following{/other_user}",
"gists_url": "https://api.github.com/users/NitinTitus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NitinTitus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NitinTitus/subscriptions",
"organizations_url": "https://api.github.com/users/NitinTitus/orgs",
"repos_url": "https://api.github.com/users/NitinTitus/repos",
"events_url": "https://api.github.com/users/NitinTitus/events{/privacy}",
"received_events_url": "https://api.github.com/users/NitinTitus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,607 | 1,607 | 1,607 | NONE | null | I am trying to find the gradient of the output of a layer of BERT to its inputs, token wise. But I keep getting the error saying: 'RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.' Below is the code snippet:
```python
for count, data in enumerate(iter(data_loader)):
    input_ids = torch.squeeze(data['input_ids'], dim=0)
    attention_mask = torch.squeeze(data['attention_mask'], dim=0)
    last_hidden_state, pooled_output, hidden_states = bert_model(input_ids=input_ids, attention_mask=attention_mask)
    bert_layer_i_output = hidden_states[i][0]
    print(bert_layer_i_output.shape)
    bert_layer_j_output = hidden_states[j][0]
    # print(torch.autograd.grad(bert_layer_j_output, bert_layer_i_output, retain_graph=True, create_graph=True))
    for k in range(bert_layer_i_output.shape[0]):
        gradient = torch.autograd.grad(bert_layer_j_output[k], bert_layer_i_output[k], grad_outputs=torch.ones_like(bert_layer_j_output[k]))
        print(gradient.shape)
        print(torch.norm(gradient))
        break
    break
```
Below is the stack trace of the error:
```
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
    202         return Variable._execution_engine.run_backward(
    203             outputs, grad_outputs_, retain_graph, create_graph,
--> 204             inputs, allow_unused)
    205
    206
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
```
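For comparison, a minimal sketch that runs without the error by differentiating the full layer tensors first and slicing the result afterwards (the model name and layer indices are illustrative):
```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert_model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

enc = tokenizer("a short test sentence", return_tensors="pt")
hidden_states = bert_model(**enc).hidden_states  # embeddings + one tensor per layer

i, j = 2, 5  # illustrative layer indices
# differentiate the full layer-j tensor w.r.t. the full layer-i tensor;
# slicing afterwards avoids creating fresh tensors that are not in the graph
grad = torch.autograd.grad(
    hidden_states[j],
    hidden_states[i],
    grad_outputs=torch.ones_like(hidden_states[j]),
    retain_graph=True,
)[0]
print(grad.shape, torch.norm(grad))
```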
Am I doing something wrong? Ideally both tensors should be part of the same computational graph, right? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8937/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8936/comments | https://api.github.com/repos/huggingface/transformers/issues/8936/events | https://github.com/huggingface/transformers/issues/8936 | 757,715,940 | MDU6SXNzdWU3NTc3MTU5NDA= | 8,936 | Unexpected behavior when using TFRoberta model inside tf.keras model | {
"login": "amir-ghasemi",
"id": 19309204,
"node_id": "MDQ6VXNlcjE5MzA5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/19309204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amir-ghasemi",
"html_url": "https://github.com/amir-ghasemi",
"followers_url": "https://api.github.com/users/amir-ghasemi/followers",
"following_url": "https://api.github.com/users/amir-ghasemi/following{/other_user}",
"gists_url": "https://api.github.com/users/amir-ghasemi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amir-ghasemi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amir-ghasemi/subscriptions",
"organizations_url": "https://api.github.com/users/amir-ghasemi/orgs",
"repos_url": "https://api.github.com/users/amir-ghasemi/repos",
"events_url": "https://api.github.com/users/amir-ghasemi/events{/privacy}",
"received_events_url": "https://api.github.com/users/amir-ghasemi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello !\r\n\r\nThanks for reporting this. Can you provide a colab in order for us to reproduce your use case?",
"Thanks @jplu . Please see the [Colab notebook](https://colab.research.google.com/drive/1qDpqEc4qbeuoQjVEpDx88dXIXnJ6bwXh?usp=sharing).\r\n\r\nAs you can see, both model1 and model2 have the exact same number of parameters and are initialized using the same pretrained roberta-base model. Yet, first one trains well and reaches val_accuracy of 0.9350 after one epoch while the second one (using transformers model within a tf.keras model) is stuck.",
"This issue does not seem to be isolated to TFRoberta. Just tried with TFDistilBertForSequenceClassification and the outcome is similar. Using the transformers model directly works fine whereas embedding it within a tf.keras model (while adding just an input layer and passing the logits directly to output) fails.",
"@amir-ghasemi Can you try on the master version with this update, and let me know if you still get the issue:\r\n```\r\ninput_ids = tf.keras.Input(shape=(128,), dtype='int32')\r\nattention_mask = tf.keras.Input(shape=(128, ), dtype='int32')\r\n\r\ntransformer = TFRobertaForSequenceClassification.from_pretrained(\"roberta-base\", num_labels=6)\r\nencoded = transformer({\"input_ids\": input_ids, \"attention_mask\": attention_mask})\r\nlogits = encoded[0]\r\n\r\nmodel = tf.keras.models.Model(inputs = {\"input_ids\": input_ids, \"attention_mask\": attention_mask}, outputs = logits)\r\n```",
"Thanks @jplu ! Tried with the master branch and feeding the input using the dict. That did the trick! Closing the issue."
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment
- `transformers` version: tried with both 4.0.0 and 3.5.0
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.0 GPU
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@jplu
## Information
Model I am using (Bert, XLNet ...): TFRoberta
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The task I am working on is:
Sentence classification
## To reproduce
I am trying to import a pretrained TFRoberta model and extend it with a few layers for classification using tensorflow keras. When I directly use the transformers model (Method 1), the model trains well and reaches a validation accuracy of 0.93 after 1 epoch. However, when trying to use the model as a layer within a tf.keras model (Method 2), the model can't get above 0.32 accuracy. As far as I can tell based on the documentation, the two approaches should be equivalent. My goal is to get Method 2 working so that I can add more layers to it instead of directly using the logits produced by the transformers classifier head, but I'm stuck at this stage.
```
import tensorflow as tf
from transformers import TFRobertaForSequenceClassification
```
Method 1:
```
model = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=6)
```
Method 2:
```
input_ids = tf.keras.Input(shape=(128,), dtype='int32')
attention_mask = tf.keras.Input(shape=(128, ), dtype='int32')
transformer = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=6)
encoded = transformer([input_ids, attention_mask])
logits = encoded[0]
model = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = logits)
```
Rest of the code for either method is identical,
```
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy('accuracy')])
```
## Expected behavior
Similar validation loss and accuracy for both methods.
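For reference, the dict-input variant from the comments above that resolved this on master; a minimal sketch:
```python
import tensorflow as tf
from transformers import TFRobertaForSequenceClassification

input_ids = tf.keras.Input(shape=(128,), dtype="int32")
attention_mask = tf.keras.Input(shape=(128,), dtype="int32")

transformer = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=6)
# passing a dict (rather than a list) keeps the inputs correctly named
encoded = transformer({"input_ids": input_ids, "attention_mask": attention_mask})
logits = encoded[0]

model = tf.keras.models.Model(
    inputs={"input_ids": input_ids, "attention_mask": attention_mask},
    outputs=logits,
)
```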
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8936/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8935/comments | https://api.github.com/repos/huggingface/transformers/issues/8935/events | https://github.com/huggingface/transformers/issues/8935 | 757,712,340 | MDU6SXNzdWU3NTc3MTIzNDA= | 8,935 | phase level tokenizer | {
"login": "graceyangfan",
"id": 17508779,
"node_id": "MDQ6VXNlcjE3NTA4Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/17508779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/graceyangfan",
"html_url": "https://github.com/graceyangfan",
"followers_url": "https://api.github.com/users/graceyangfan/followers",
"following_url": "https://api.github.com/users/graceyangfan/following{/other_user}",
"gists_url": "https://api.github.com/users/graceyangfan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/graceyangfan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/graceyangfan/subscriptions",
"organizations_url": "https://api.github.com/users/graceyangfan/orgs",
"repos_url": "https://api.github.com/users/graceyangfan/repos",
"events_url": "https://api.github.com/users/graceyangfan/events{/privacy}",
"received_events_url": "https://api.github.com/users/graceyangfan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I has write a custom tokenizer to rewrite the _tokenize method,it seems work well for chinese.\r\n```\r\nfrom transformers import *\r\nimport jieba\r\njieba.initialize()\r\nclass CostoumToken(BertTokenizer):\r\n def __init__(self,vocab_path,pre_token_method=lambda x:\" \".join(jieba.cut(x,HMM=False))):\r\n super().__init__(vocab_path)\r\n self.pre_token_method=pre_token_method\r\n def _tokenize(self, text):\r\n text=self.pre_token_method(text)\r\n split_tokens=text.split(\" \")\r\n return split_tokens\r\n```\r\n##############################################################\r\ntesting\r\n##############################################################\r\n```\r\ntoken.tokenize(\"中国很好\")\r\nout:['中国', '很', '好']\r\ntoken.encode(\"中国很好\")\r\nout:[2, 13587, 2422, 1861, 3]\r\ntoken.decode([2, 13587, 2422, 1861, 3])\r\nout:'[CLS] 中国 很 好 [SEP]'\r\n\r\n```\r\n\r\n",
"Have you tried playing around with the [`tokenize_chinese_chars` argument of the `BertTokenizer`?](https://huggingface.co/transformers/model_doc/bert.html?highlight=tokenize_chinese_chars#transformers.BertTokenizer)",
"@LysandreJik thanks for your replay,I test it ,and get the answer:\r\n```\r\ntoken.tokenize(\"中国很好\")\r\nout:['中', '国', '很', '好']\r\n```\r\nit seems that BertTokenizer always tokenize the sentence on word level",
"what task do you apply?\r\ni had tried the idea as you implement in your CostoumToken\r\nto a keyword find algorithm in \r\nhttps://github.com/MaartenGr/KeyBERT\r\nto find chinese keywords .\r\nthe conclusion seems meaningful, as expected\r\nthe phrase level bert embedding encoded keep the semantic from char level.\r\nif you take task such as phrase fill or prediction ?\r\n",
"@svjack here is the code of tensorflow version to a phase level chinese pre_trained bert:\r\n[https://github.com/ZhuiyiTechnology/WoBERT](url)\r\nIn order to find the true phase in the dictionary,the dictionary must have thses phases where phases like \"中国“ should be treated as a whole.\r\nI am trying to replace some phrases and use the phase level pretrained model to predict which phase in the dictionary can replace them,it seems hard to realise on the word level.\r\n",
"> @svjack here is the code of tensorflow version to a phase level chinese pre_trained bert:\n> \n> [https://github.com/ZhuiyiTechnology/WoBERT](url)\n> \n> In order to find the true phase in the dictionary,the dictionary must have thses phases where phases like \"中国“ should be treated as a whole.\n> \n> I am trying to replace some phrases and use the phase level pretrained model to predict which phase in the dictionary can replace them,it seems hard to realise on the word level.\n> \n> \n\ni will try this project later.",
"> @svjack here is the code of tensorflow version to a phase level chinese pre_trained bert:\n> [https://github.com/ZhuiyiTechnology/WoBERT](url)\n> In order to find the true phase in the dictionary,the dictionary must have thses phases where phases like \"中国“ should be treated as a whole.\n> I am trying to replace some phrases and use the phase level pretrained model to predict which phase in the dictionary can replace them,it seems hard to realise on the word level.\n> \n\nit simply tokenize text by jieba firstly and serlize it and use this as vocab_file like transformers project do, you can also set this param in BertTokenizer class init step, but a problem make me confused is \ntokenizer conclusion is not unique for the best, but with a probability evidence. \nand select the \"best\" as result, but when it comes to text with different as trained input may induce a different tokenized list with same substring contain it. but its \nalso the \"best\" . So this char to word embedding average can not be go back to retrieve best combine of chars i.e. phares in chinese . which is not suitable in nlp argument task.",
"many sentence piece tokenizer such as xlm can tackle this kind of problems.",
"@svjack thanks for your work.It's a common problem to split chinese phases,lots of researchers are still arguing char based or phase based split in chinese NLP.I will try xlm.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | # 🚀 Feature request
A tokenizer to encode sentences at the phrase level
## Motivation
The transformers tokenizers always tokenize sentences at the word (or sub-word) level, which might be fine for English, but is often not for Chinese. For example, "sports" has a single meaning in English, but when its translation 运动 is split into 运 and 动, we have no idea what it means. There are also many technical terms in Chinese longer than one word that should not be split.
The tokenizer has `additional_special_tokens`, but I am not sure it can solve phrase-level tokenization.
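For what it's worth, a minimal sketch of the added-tokens route (the phrase list is a made-up example, and a model's embeddings would need resizing afterwards):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
# hypothetical phrase list; added tokens are kept whole during tokenization
tokenizer.add_tokens(["中国", "运动"])
print(tokenizer.tokenize("中国很好"))  # expected: ['中国', '很', '好']
# when paired with a model: model.resize_token_embeddings(len(tokenizer))
```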
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8935/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8934/comments | https://api.github.com/repos/huggingface/transformers/issues/8934/events | https://github.com/huggingface/transformers/pull/8934 | 757,672,788 | MDExOlB1bGxSZXF1ZXN0NTMzMDExNDkz | 8,934 | Updating outdated fairseq checkpoint to HF script | {
"login": "machelreid",
"id": 42187963,
"node_id": "MDQ6VXNlcjQyMTg3OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/42187963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/machelreid",
"html_url": "https://github.com/machelreid",
"followers_url": "https://api.github.com/users/machelreid/followers",
"following_url": "https://api.github.com/users/machelreid/following{/other_user}",
"gists_url": "https://api.github.com/users/machelreid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/machelreid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/machelreid/subscriptions",
"organizations_url": "https://api.github.com/users/machelreid/orgs",
"repos_url": "https://api.github.com/users/machelreid/repos",
"events_url": "https://api.github.com/users/machelreid/events{/privacy}",
"received_events_url": "https://api.github.com/users/machelreid/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | CONTRIBUTOR | null | The current fairseq checkpoint to HF script is outdated, not being compatible with the newly introduced `hydra` config and fairseq's new PyTorch hub interface. In addition to this, added one more argument (`--data-dir`) for custom RoBERTa models, and modified the `--classification_head` argument to take in a string rather than `store_true`. This is to reflect (a more likely case) of custom classification heads, rather than the most popular (and already available) `mnli` head. Added an import of `os` (if it counts as a "dependency").
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @stefan-it @myleott @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8934/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8934",
"html_url": "https://github.com/huggingface/transformers/pull/8934",
"diff_url": "https://github.com/huggingface/transformers/pull/8934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8934.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8933/comments | https://api.github.com/repos/huggingface/transformers/issues/8933/events | https://github.com/huggingface/transformers/issues/8933 | 757,668,325 | MDU6SXNzdWU3NTc2NjgzMjU= | 8,933 | Relative Attention Bias not initialized for T5ForConditionalGeneration in version 4.0.0 | {
"login": "gsarti",
"id": 16674069,
"node_id": "MDQ6VXNlcjE2Njc0MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsarti",
"html_url": "https://github.com/gsarti",
"followers_url": "https://api.github.com/users/gsarti/followers",
"following_url": "https://api.github.com/users/gsarti/following{/other_user}",
"gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsarti/subscriptions",
"organizations_url": "https://api.github.com/users/gsarti/orgs",
"repos_url": "https://api.github.com/users/gsarti/repos",
"events_url": "https://api.github.com/users/gsarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsarti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @gsarti, \r\n\r\nyeah there was a tiny bug in T5 previously. T5 actually never had relative positional encodings for the EncoderDecoderLayer, so the unnecessary weight was deleted after 3.5.0. This should not really affect the performance however and is no problem now, see: https://github.com/huggingface/transformers/pull/8518",
"Thank you for the clarification!"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.70 (True)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using: T5ForConditionalGeneration, specifically the "allenai/unifiedqa-t5-large" checkpoint from the model hub
The problem arises when I try to load the checkpoint following standard loading procedures under `transformers==4.0.0`. The same doesn't happen in version `3.5.0`.
## To reproduce
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name_or_path = "allenai/unifiedqa-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
# Warnings in version 4.0.0, not in 3.5.0 and preceding ones
model = T5ForConditionalGeneration.from_pretrained(model_name_or_path)
```
```
Some weights of the model checkpoint at allenai/unifiedqa-t5-large were not used when initializing T5ForConditionalGeneration: ['decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight']
- This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
## Expected behavior
A consistent behavior across versions, either always or never raising the warning at loading time.
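Given the explanation in the comments above (the dropped weight was never actually used), a quick sanity-check sketch, with an illustrative question, to confirm generation still behaves despite the warning:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name_or_path = "allenai/unifiedqa-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = T5ForConditionalGeneration.from_pretrained(model_name_or_path)

# illustrative UnifiedQA-style input; generation should still be sensible
inputs = tokenizer("which is heavier, a ton of bricks or a ton of feathers?", return_tensors="pt")
output_ids = model.generate(**inputs)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```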
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8933/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8932/comments | https://api.github.com/repos/huggingface/transformers/issues/8932/events | https://github.com/huggingface/transformers/issues/8932 | 757,646,121 | MDU6SXNzdWU3NTc2NDYxMjE= | 8,932 | Documentation License Query | {
"login": "darigovresearch",
"id": 30328618,
"node_id": "MDQ6VXNlcjMwMzI4NjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/30328618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/darigovresearch",
"html_url": "https://github.com/darigovresearch",
"followers_url": "https://api.github.com/users/darigovresearch/followers",
"following_url": "https://api.github.com/users/darigovresearch/following{/other_user}",
"gists_url": "https://api.github.com/users/darigovresearch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/darigovresearch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/darigovresearch/subscriptions",
"organizations_url": "https://api.github.com/users/darigovresearch/orgs",
"repos_url": "https://api.github.com/users/darigovresearch/repos",
"events_url": "https://api.github.com/users/darigovresearch/events{/privacy}",
"received_events_url": "https://api.github.com/users/darigovresearch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The documentation is also under the Apache License (version 2.0). Hope that helps!\r\nWould love some flashcards for Transformers :-)",
"@sgugger thanks for letting us know! An initial set based on your glossary will follow shortly as a start.\r\n\r\nIs there any way to update the docs so that the license is in the footer?\r\n\r\nI am happy to make a pull request if given context. We think having it there will encourage other people as well to make other educational content based on the docs.\r\n\r\nAlso could you take a look at this issue as it may be relevant but was auto closed?\r\n\r\nhttps://github.com/huggingface/transformers/issues/6140",
"I'll work on adding the copyright to individual files and the footer of the docs on Monday. For the issue you mention, I'm not sure what you mean: this points to complete text of the Apache v2 license (as is done for TensorFlow for instance, see [here](https://github.com/tensorflow/tensorflow/blob/master/LICENSE). The snippet is then copied on the code files with the year and authors filled properly (and soon-to-be doc files).",
"We believe the original post meant that lines 179-188 of your license file suggests that in the License file you need to add or adjust line 190 to have the copyright year & the name of your organisation. The tensorflow license has done this as the first line of their license file.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | CONTRIBUTOR | null | We were looking to make some educational material based on the documentation & it wasn't clear what the license is for the documentation.
We see that the repository as a whole is under an Apache license, but the docs site is not explicit about what license the docs are under, either via a dedicated license page or in the footer.
Could you please advise?
We have made some other educational material from docs pages for reference (see the flashcards section of https://www.darigovresearch.com/) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8932/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8931/comments | https://api.github.com/repos/huggingface/transformers/issues/8931/events | https://github.com/huggingface/transformers/pull/8931 | 757,583,334 | MDExOlB1bGxSZXF1ZXN0NTMyOTM2NzQ5 | 8,931 | Fix typo for `modeling_bert` import resulting in ImportError | {
"login": "machelreid",
"id": 42187963,
"node_id": "MDQ6VXNlcjQyMTg3OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/42187963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/machelreid",
"html_url": "https://github.com/machelreid",
"followers_url": "https://api.github.com/users/machelreid/followers",
"following_url": "https://api.github.com/users/machelreid/following{/other_user}",
"gists_url": "https://api.github.com/users/machelreid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/machelreid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/machelreid/subscriptions",
"organizations_url": "https://api.github.com/users/machelreid/orgs",
"repos_url": "https://api.github.com/users/machelreid/repos",
"events_url": "https://api.github.com/users/machelreid/events{/privacy}",
"received_events_url": "https://api.github.com/users/machelreid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Self-explanatory ;) - Fixes a typo resulting in an `ImportError` in the script that converts RoBERTa checkpoints from fairseq to HF - Hope it helps!
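For context, here is a minimal sketch of the kind of change involved (the exact symbols below are assumptions on my part, since the diff itself isn't shown here): after the v4 repository reorganization, the flat module paths moved under `transformers.models.*`, so the conversion script's old import path raises an `ImportError`.

```python
# Hedged sketch -- illustrative only, not the actual diff of this PR.
# Pre-reorganization path; on transformers v4.x this raises ImportError:
#   from transformers.modeling_bert import BertLayer
# Post-reorganization path that the script should use instead:
from transformers.models.bert.modeling_bert import BertLayer

print(BertLayer)  # sanity check that the import resolves
```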
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8931/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8931",
"html_url": "https://github.com/huggingface/transformers/pull/8931",
"diff_url": "https://github.com/huggingface/transformers/pull/8931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8931.patch",
"merged_at": 1607180258000
} |
https://api.github.com/repos/huggingface/transformers/issues/8930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8930/comments | https://api.github.com/repos/huggingface/transformers/issues/8930/events | https://github.com/huggingface/transformers/pull/8930 | 757,438,576 | MDExOlB1bGxSZXF1ZXN0NTMyODE2MDIw | 8,930 | [seq2seq] document the caveat of leaky native amp | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | The native amp memory leak will be fixed in PyTorch 1.8 (the fix is already available in pytorch-nightly) - this PR documents the caveat and proposes using apex for PyTorch < 1.8.
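For readers who hit the leak before upgrading, here is a minimal illustration of the two mixed-precision paths in plain PyTorch (a hedged sketch, not the PR's actual doc change; the tiny linear model is just a stand-in for a real seq2seq model):

```python
import torch

model = torch.nn.Linear(8, 2).cuda()  # stand-in for a real seq2seq model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(4, 8, device="cuda")
targets = torch.randint(0, 2, (4,), device="cuda")

# Native amp: this is the code path that leaks GPU memory on PyTorch < 1.8;
# the fix landed in pytorch-nightly and ships with 1.8.
scaler = torch.cuda.amp.GradScaler()
optimizer.zero_grad()
with torch.cuda.amp.autocast():
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()

# Apex alternative for PyTorch < 1.8 (requires NVIDIA apex to be installed):
#   from apex import amp
#   model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
#   with amp.scale_loss(loss, optimizer) as scaled_loss:
#       scaled_loss.backward()
```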
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8930/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8930",
"html_url": "https://github.com/huggingface/transformers/pull/8930",
"diff_url": "https://github.com/huggingface/transformers/pull/8930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8930.patch",
"merged_at": 1607125416000
} |
https://api.github.com/repos/huggingface/transformers/issues/8929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8929/comments | https://api.github.com/repos/huggingface/transformers/issues/8929/events | https://github.com/huggingface/transformers/pull/8929 | 757,404,939 | MDExOlB1bGxSZXF1ZXN0NTMyNzg3NjM2 | 8,929 | Don't pass in token_type_ids to BART for GLUE | {
"login": "ethanjperez",
"id": 6402205,
"node_id": "MDQ6VXNlcjY0MDIyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanjperez",
"html_url": "https://github.com/ethanjperez",
"followers_url": "https://api.github.com/users/ethanjperez/followers",
"following_url": "https://api.github.com/users/ethanjperez/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions",
"organizations_url": "https://api.github.com/users/ethanjperez/orgs",
"repos_url": "https://api.github.com/users/ethanjperez/repos",
"events_url": "https://api.github.com/users/ethanjperez/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanjperez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the fix @ethanjperez ! Looks good to me!"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Without this fix, training a `BARTForSequenceClassification` model with `run_pl_glue.py` gives `TypeError: forward() got an unexpected keyword argument 'token_type_ids'`, because BART's `forward()` does not accept `token_type_ids`. I've solved this issue the same way it's solved for the "distilbert" model, and I can now train BART models on SNLI without errors.
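The shape of the fix, as a hedged sketch (the function and constant names below are illustrative, not the script's actual code; the real change keys the check off the model type string, as was already done for distilbert):

```python
# Illustrative sketch of dropping token_type_ids for models that don't use them.
MODELS_WITHOUT_TOKEN_TYPE_IDS = {"distilbert", "bart"}  # assumed set

def build_inputs(batch, model_type):
    # The batch layout (input_ids, attention_mask, token_type_ids, labels) is assumed.
    inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]}
    if model_type not in MODELS_WITHOUT_TOKEN_TYPE_IDS:
        inputs["token_type_ids"] = batch[2]
    return inputs
```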
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8929/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8929",
"html_url": "https://github.com/huggingface/transformers/pull/8929",
"diff_url": "https://github.com/huggingface/transformers/pull/8929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8929.patch",
"merged_at": 1607179937000
} |
https://api.github.com/repos/huggingface/transformers/issues/8927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8927/comments | https://api.github.com/repos/huggingface/transformers/issues/8927/events | https://github.com/huggingface/transformers/issues/8927 | 757,381,750 | MDU6SXNzdWU3NTczODE3NTA= | 8,927 | run_glue.py fails with RoBERTa but succeeds with other models | {
"login": "yonatanbitton",
"id": 26148975,
"node_id": "MDQ6VXNlcjI2MTQ4OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/26148975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonatanbitton",
"html_url": "https://github.com/yonatanbitton",
"followers_url": "https://api.github.com/users/yonatanbitton/followers",
"following_url": "https://api.github.com/users/yonatanbitton/following{/other_user}",
"gists_url": "https://api.github.com/users/yonatanbitton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonatanbitton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonatanbitton/subscriptions",
"organizations_url": "https://api.github.com/users/yonatanbitton/orgs",
"repos_url": "https://api.github.com/users/yonatanbitton/repos",
"events_url": "https://api.github.com/users/yonatanbitton/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonatanbitton/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"When **importing** transformers (instead of using the source) the problem does not occur.",
"Pinging the Trainer master @sgugger!",
"This looks like a problem on the CUDA initialization in your enviromnent. The command runs fine on my side. \r\n\r\n> When **importing** transformers (instead of using the source) the problem does not occur.\r\n\r\nWhat do you mean exactly by this?",
"> This looks like a problem on the CUDA initialization in your enviromnent. The command runs fine on my side.\r\n> \r\n> > When **importing** transformers (instead of using the source) the problem does not occur.\r\n> \r\n> What do you mean exactly by this?\r\n\r\nThe note here: https://github.com/huggingface/transformers/tree/master/examples#important-note\r\nsuggests to install the library from source. When I do it (with git clone) it doesn't work - I receive the error described here.\r\n\r\nOn the other hand, when I use the 'transformers' from 'pip install transformers', it does work.\r\n\r\nI'm not sure if this specific difference causes the error only in my environment or not. ",
"This issue has been stale for 1 month."
] | 1,607 | 1,618 | 1,618 | NONE | null | ## Environment info
I'm following these instructions: https://github.com/huggingface/transformers/tree/master/examples, meaning I installed the library from source.
- `transformers` version: 4.1.0.dev0
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.7.0
- Using GPU in script?: Yes, Tesla K80
- Using distributed or parallel set-up in script?: running `CUDA_VISIBLE_DEVICES=0 python run_glue.py`
## The problem
I'm running the official `run_glue.py` code, with the command and arguments given here: https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-pytorch-version
When I use BERT - it **succeeds**.
For example, BERT:
```CUDA_VISIBLE_DEVICES=0 python run_glue.py --task_name cola --output_dir results/normal/bert/cola/ --cache_dir cache/normal/bert --model_name_or_path bert-base-cased --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --do_predict --overwrite_output_dir```
And I receive a score that makes sense:
```
[d]$ cat /transformers/examples/text-classification/results/normal/bert/cola/eval_results_cola.txt
eval_loss = 0.518086314201355
eval_matthews_correlation = 0.572739655014278
epoch = 3.0
```
When I use RoBERTa, it **fails** with a stack trace:
```CUDA_VISIBLE_DEVICES=0 python run_glue.py --task_name cola --output_dir results/normal/roberta/cola/ --cache_dir cache/normal/roberta --model_name_or_path roberta-base --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --do_predict --overwrite_output_dir```
The error message:
```python
[INFO|trainer.py:674] 2020-12-04 20:23:30,937 >> Total optimization steps = 804
0%| | 0/804 [00:00<?, ?it/s]/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [33,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [33,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
...
Traceback (most recent call last):
File "/transformers/examples/text-classification/run_yonatan.py", line 464, in <module>
main()
File "/transformers/examples/text-classification/run_yonatan.py", line 399, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/transformers/src/transformers/trainer.py", line 767, in train
tr_loss += self.training_step(model, inputs)
File "/transformers/src/transformers/trainer.py", line 1096, in training_step
loss = self.compute_loss(model, inputs)
File "/transformers/src/transformers/trainer.py", line 1120, in compute_loss
outputs = model(**inputs)
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 1029, in forward
return_dict=return_dict,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 717, in forward
return_dict=return_dict,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 450, in forward
output_attentions,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 368, in forward
output_attentions=output_attentions,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 302, in forward
output_attentions,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 184, in forward
mixed_query_layer = self.query(hidden_states)
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/functional.py", line 1692, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
0%| | 0/804 [00:00<?, ?it/s]
```
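As a debugging aside (not part of the original report): device-side assertions like `srcIndex < srcSelectDimSize` usually mean an embedding lookup received an out-of-range index, and the later `CUBLAS_STATUS_ALLOC_FAILED` is just fallout from the poisoned CUDA context. Re-running on CPU typically surfaces a readable `IndexError`, and the index ranges can be checked directly; a minimal sketch:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hedged sketch: verify that token and position indices fit the embedding tables.
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base")

enc = tok("Our friends won't buy this analysis.", max_length=128,
          padding="max_length", truncation=True, return_tensors="pt")
assert enc["input_ids"].max().item() < model.config.vocab_size
# RoBERTa reserves two position slots (padding offset), so the usable length is
# max_position_embeddings - 2 (514 - 2 = 512 for roberta-base).
assert enc["input_ids"].shape[1] <= model.config.max_position_embeddings - 2

with torch.no_grad():
    model(**enc)  # on CPU, a bad index raises a readable IndexError instead
```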
I've searched for related solutions but didn't find any relevant solution (https://github.com/huggingface/transformers/issues?q=CUBLAS_STATUS_ALLOC_FAILED).
What am I missing?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8927/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8926/comments | https://api.github.com/repos/huggingface/transformers/issues/8926/events | https://github.com/huggingface/transformers/pull/8926 | 757,252,419 | MDExOlB1bGxSZXF1ZXN0NTMyNjU5MjQw | 8,926 | [ci] skip doc jobs - circleCI is not reliable - disable skip for now | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for investigating this!",
"The thread discussing getting a reliable range of changes on CircleCI is here:\r\nhttps://discuss.circleci.com/t/pipeline-git-base-revision-is-completely-unreliable/38301\r\n"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | We can't do reliable skipping if we can't get a reliable range of changes, and circleCI is all over the place.
e.g. in this PR https://github.com/huggingface/transformers/pull/8918 it changed `pipeline.git.base_revision` **on every commit**, resulting in only the changes from the last commit appearing as the change set for the whole PR. This is very bad: the PR could be failing tests, but if the last commit touches only doc files, the job will make it appear that everything is green, which could be very misleading.
I wasn't able to reproduce this latest edge case (see attempts below), but it clearly happened in #8918.
So this PR disables the magic until, hopefully, we get a solution from the circleCI devs, with whom we are discussing this via their support.
I'm leaving the printouts in place to continue diagnosing the issue.
It could also be that this won't be possible at all: if we don't find a reliable way to get such simple information from circleCI, I will remove it completely.
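For reference, the kind of check being disabled here, as a hedged sketch (the real logic lives in the CircleCI config/scripts and may differ; the function below only assumes `git` is available and the two SHAs are valid):

```python
import subprocess

def is_doc_only_change(base_sha: str, head_sha: str) -> bool:
    """Return True if every file changed between base and head is documentation.

    Illustrative sketch: the whole scheme stands or falls on base_sha being the
    true branch point of the PR, which is exactly what circleCI's
    pipeline.git.base_revision fails to guarantee.
    """
    out = subprocess.check_output(
        ["git", "diff", "--name-only", f"{base_sha}...{head_sha}"], text=True
    )
    files = [f for f in out.splitlines() if f]
    return bool(files) and all(
        f.startswith("docs/") or f.endswith((".md", ".rst")) for f in files
    )
```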
Thank you for bearing with me, as this is a nice-to-have but not an essential feature.
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8926/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8926",
"html_url": "https://github.com/huggingface/transformers/pull/8926",
"diff_url": "https://github.com/huggingface/transformers/pull/8926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8926.patch",
"merged_at": 1607105623000
} |