url (string, 62-66 chars) | repository_url (string, 1 class) | labels_url (string, 76-80 chars) | comments_url (string, 71-75 chars) | events_url (string, 69-73 chars) | html_url (string, 50-56 chars) | id (int64, 377M-2.15B) | node_id (string, 18-32 chars) | number (int64, 1-29.2k) | title (string, 1-487 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0-234k chars, nullable) | reactions (dict) | timeline_url (string, 71-75 chars) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/7921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7921/comments | https://api.github.com/repos/huggingface/transformers/issues/7921/events | https://github.com/huggingface/transformers/pull/7921 | 725,227,532 | MDExOlB1bGxSZXF1ZXN0NTA2NDkzNzQx | 7,921 | [testing] experiment with a different way of skipping torch-only test modules | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great! \r\n\r\nDo you want for us to make this a \"model\" file first, merge it, see how it feels and then replicate to the rest? Or should I just proceed with the rest?\r\n",
"I think you ca just proceed.",
"Bummer! If I make it into a function and thus remove `if ...` - isort and flake now complain:\r\n```\r\n$ flake8 tests\r\ntests/test_modeling_bart.py:37:1: E402 module level import not at top of file\r\ntests/test_modeling_bart.py:39:1: E402 module level import not at top of file\r\ntests/test_modeling_bart.py:57:1: E402 module level import not at top of file\r\n$ isort --check-only tests\r\nERROR: /mnt/nvme1/code/huggingface/transformers-torch-req/tests/module_skip_pytorch.py Imports are incorrectly sorted and/or formatted.\r\n```\r\nso that would require adding #noqa to all the subsequent imports ;( which leaves the code ugly just in a different way.\r\n\r\nThese tools have so little flexibility. They are supposed to make things better but lead to a much uglier code :(\r\n",
"meh! I call this experiment a failure thanks to `make quality` oppression.",
"I will just leave the helper I wrote here, in case someone figures out a magical way to solve the ugliness. \r\n\r\n```\r\ndef test_module_skip_require_pytorch():\r\n \"\"\"\r\n Call this one on top of test module to skip the whole module if pytorch is not available:\r\n test_module_skip_require_pytorch()\r\n \"\"\"\r\n if not _torch_available:\r\n raise unittest.SkipTest(\"Skip the whole module as it requires pytorch\")\r\n```"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This is an experiment.
I was getting irked by `@require_torch` followed by yet another `if is_torch_available()` check in many test modules, specifically this part:
```
@require_torch
class BARTModelTest(ModelTesterMixin, unittest.TestCase):
all_model_classes = (
(BartModel, BartForConditionalGeneration, BartForSequenceClassification, BartForQuestionAnswering)
if is_torch_available()
else ()
)
all_generative_model_classes = (BartForConditionalGeneration,) if is_torch_available() else ()
```
`require_torch` doesn't stop the parser from compiling the rest of the code, so the ugly workaround is used.
I tried to find a better solution, to tell the parser to ignore the whole class, since we did tell it to skip it - but didn't succeed.
But then I noticed that all of the module's classes/tests require pytorch, so I thought: why not skip the whole module and avoid repeatedly asking whether pytorch is available? That is:
```
if not is_torch_available():
raise unittest.SkipTest("Skip the whole module as it requires pytorch")
```
and then we can code worry-free, removing any torch checks. We can then alias this whole thing in `testing_utils.py` and call it as something like:
```
from testing_utils import skip_module_require_torch
skip_module_require_torch()
```
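For illustration, here is a minimal sketch of what such a helper could look like (the name `skip_module_require_torch` and its exact placement in `testing_utils.py` are hypothetical here, not a final implementation):
```
import unittest

from transformers import is_torch_available


def skip_module_require_torch():
    """Call at the top of a test module to skip the whole module when torch is absent."""
    if not is_torch_available():
        raise unittest.SkipTest("Skip the whole module as it requires torch")
```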
This PR is one such possible solution, applied to just one pure pytorch test module. To see it in action, run:
```
USE_TF=1 pytest tests/test_modeling_bart.py
```
The only drawback is that it doesn't count/report any of the skipped tests, so we get just:
```
collected 0 items / 1 skipped
```
from pytest. But this will only happen on the _tf CI job, so it doesn't matter anyway.
We can do exactly the same for tf-only tests, with its own `skip_module_require_tf`.
As a bonus the test suite will run marginally faster for those pt/tf-only jobs, as it won't need to load/parse any modules - should be a very negligible improvement.
The current way is just fine. But I thought I'd share my experiment in case it might lead to more readable code.
Thank you for reading.
@LysandreJik, @sgugger, @sshleifer, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7921/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/7921/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7921",
"html_url": "https://github.com/huggingface/transformers/pull/7921",
"diff_url": "https://github.com/huggingface/transformers/pull/7921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7921.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7920/comments | https://api.github.com/repos/huggingface/transformers/issues/7920/events | https://github.com/huggingface/transformers/issues/7920 | 725,121,971 | MDU6SXNzdWU3MjUxMjE5NzE= | 7,920 | what's the values of start_positon and end_position while the answer is impossible in run_squad.py | {
"login": "ppyu",
"id": 32732750,
"node_id": "MDQ6VXNlcjMyNzMyNzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/32732750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ppyu",
"html_url": "https://github.com/ppyu",
"followers_url": "https://api.github.com/users/ppyu/followers",
"following_url": "https://api.github.com/users/ppyu/following{/other_user}",
"gists_url": "https://api.github.com/users/ppyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ppyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ppyu/subscriptions",
"organizations_url": "https://api.github.com/users/ppyu/orgs",
"repos_url": "https://api.github.com/users/ppyu/repos",
"events_url": "https://api.github.com/users/ppyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ppyu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,608 | 1,608 | NONE | null | # ❓ Questions & Help
### Details
I want to know what the values of `start_position` and `end_position` should be when the answer is **impossible** in `run_squad.py`.
`start_positions = end_positions = -1` OR `start_positions = end_positions = 0`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7920/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7919/comments | https://api.github.com/repos/huggingface/transformers/issues/7919/events | https://github.com/huggingface/transformers/pull/7919 | 725,000,493 | MDExOlB1bGxSZXF1ZXN0NTA2Mjk4OTkw | 7,919 | Expose the Flax code quality problems | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,605 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR is just there to expose the problems related to the code quality with objects introduced in the flax PR. #7914 contains a fix to a few of them, maybe all. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7919/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7919",
"html_url": "https://github.com/huggingface/transformers/pull/7919",
"diff_url": "https://github.com/huggingface/transformers/pull/7919.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7919.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7918/comments | https://api.github.com/repos/huggingface/transformers/issues/7918/events | https://github.com/huggingface/transformers/pull/7918 | 724,997,892 | MDExOlB1bGxSZXF1ZXN0NTA2Mjk2NzM4 | 7,918 | Add Flax dummy objects | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
Following the first JAX models, this PR adds the dummy objects to make sure the library always has the same objects available.
cc @mfuntowicz for information
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7918/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7918",
"html_url": "https://github.com/huggingface/transformers/pull/7918",
"diff_url": "https://github.com/huggingface/transformers/pull/7918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7918.patch",
"merged_at": 1603194349000
} |
https://api.github.com/repos/huggingface/transformers/issues/7917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7917/comments | https://api.github.com/repos/huggingface/transformers/issues/7917/events | https://github.com/huggingface/transformers/pull/7917 | 724,983,176 | MDExOlB1bGxSZXF1ZXN0NTA2Mjg0MTgw | 7,917 | New run glue script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Should we start thinking about automating the creation of the metadata block for the model's model card?\r\n\r\nhere for instance we'd already have this info:\r\n```\r\n---\r\ndatasets:\r\n- mrpc\r\nmetrics:\r\n- f1\r\nfinetuned_from: bert-base-cased\r\n---\r\n```",
"We could think of something like that and add a blank model card to be completed by the user in the final checkpoint. We could also include the results of the last evaluation if there is one."
] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR cleans up the `run_glue.py` script to use the Datasets library. Along the way it adds a few fixes in Trainer. The script supports all glue tasks as well as custom user tasks (passed along with a training and validation file in csv or json format). It has been tested on the following setups:
- single GPU
- multi-GPU with DataParallel
- multi-GPU with DistributedDataParallel
- TPU
The README has been updated to reflect the changes, there is just one breaking change from before which is that `data_dir` is not an accepted argument anymore (since Datasets will take care of downloading the data files). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7917/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7917",
"html_url": "https://github.com/huggingface/transformers/pull/7917",
"diff_url": "https://github.com/huggingface/transformers/pull/7917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7917.patch",
"merged_at": 1603381343000
} |
https://api.github.com/repos/huggingface/transformers/issues/7916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7916/comments | https://api.github.com/repos/huggingface/transformers/issues/7916/events | https://github.com/huggingface/transformers/issues/7916 | 724,934,592 | MDU6SXNzdWU3MjQ5MzQ1OTI= | 7,916 | TypeError: __init__() got an unexpected keyword argument 'vocab_file' in transformers/tokenization_gpt2.py", line 380 | {
"login": "memray",
"id": 4197249,
"node_id": "MDQ6VXNlcjQxOTcyNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/memray",
"html_url": "https://github.com/memray",
"followers_url": "https://api.github.com/users/memray/followers",
"following_url": "https://api.github.com/users/memray/following{/other_user}",
"gists_url": "https://api.github.com/users/memray/gists{/gist_id}",
"starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/memray/subscriptions",
"organizations_url": "https://api.github.com/users/memray/orgs",
"repos_url": "https://api.github.com/users/memray/repos",
"events_url": "https://api.github.com/users/memray/events{/privacy}",
"received_events_url": "https://api.github.com/users/memray/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"same issue",
"Hello! I think this is due to a mismatch between your `transformers` and `tokenizers` versions. `transformers` version v3.3.1 expects `tokenizers == 0.8.1.rc2`.\r\n\r\nIf you want to use `tokenizers == 0.9.2` you should work on the current `master` branch or wait for version v3.4.0 which should be released sometimes today.",
"Thank you! I upgraded both and it works."
] | 1,603 | 1,604 | 1,604 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- `tokenizers` version: 0.9.2
- Platform: Linux-3.10.0-1062.4.1.el7.x86_64-x86_64-with-redhat-7.7-Maipo
- Python version: 3.7.6
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help @mfuntowicz
## Information
Model I am using (Bert, XLNet ...): RoBERTa-base
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
fairseq
## To reproduce
I use **RobertaTokenizerFast** and it seems there is an arg name mismatch.
Steps to reproduce the behavior:
1. self.tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', cache_dir=args.cache_dir)
In transformers.tokenization_gpt2.py L376 it is:
```
ByteLevelBPETokenizer(
    vocab_file=vocab_file,
    merges_file=merges_file,
    add_prefix_space=add_prefix_space,
    trim_offsets=trim_offsets,
)
```
But in tokenizers.implementations.ByteLevelBPETokenizer it is expected to be `vocab`.
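To illustrate the mismatch, here is a small sketch (the signature details are my understanding of `tokenizers` 0.9.x and are not verified against the exact sources; the file names are placeholders and are never opened because the `TypeError` is raised first):
```
import inspect

from tokenizers import ByteLevelBPETokenizer

# tokenizers 0.9.x names the first two parameters `vocab` / `merges`
print(inspect.signature(ByteLevelBPETokenizer.__init__))

# transformers 3.3.1 still passes `vocab_file` / `merges_file`, which 0.9.x rejects
try:
    ByteLevelBPETokenizer(vocab_file="vocab.json", merges_file="merges.txt")
except TypeError as e:
    print(e)  # __init__() got an unexpected keyword argument 'vocab_file'
```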
## Expected behavior
```
File "/zfs1/hdaqing/rum20/kp/fairseq-kpg/fairseq/data/encoders/hf_bpe.py", line 31, in __init__
    self.tokenizer = RobertaTokenizerFast.from_pretrained(args.pretrained_model, cache_dir=args.cache_dir)
File "/ihome/hdaqing/rum20/anaconda3/envs/kp/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1428, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
File "/ihome/hdaqing/rum20/anaconda3/envs/kp/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1575, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
File "/ihome/hdaqing/rum20/anaconda3/envs/kp/lib/python3.7/site-packages/transformers/tokenization_roberta.py", line 380, in __init__
    **kwargs,
File "/ihome/hdaqing/rum20/anaconda3/envs/kp/lib/python3.7/site-packages/transformers/tokenization_gpt2.py", line 380, in __init__
    trim_offsets=trim_offsets,
TypeError: __init__() got an unexpected keyword argument 'vocab_file'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7916/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7915/comments | https://api.github.com/repos/huggingface/transformers/issues/7915/events | https://github.com/huggingface/transformers/pull/7915 | 724,924,442 | MDExOlB1bGxSZXF1ZXN0NTA2MjM0NjUw | 7,915 | [EncoderDecoder] Fix Typo | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
Remove dead code
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7915/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7915",
"html_url": "https://github.com/huggingface/transformers/pull/7915",
"diff_url": "https://github.com/huggingface/transformers/pull/7915.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7915.patch",
"merged_at": 1603137762000
} |
https://api.github.com/repos/huggingface/transformers/issues/7914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7914/comments | https://api.github.com/repos/huggingface/transformers/issues/7914/events | https://github.com/huggingface/transformers/pull/7914 | 724,913,489 | MDExOlB1bGxSZXF1ZXN0NTA2MjI1NTUy | 7,914 | [flax] fix repo_check | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't understand why you have a problem. `make quality` runs fine for me on master (and the CI is also happy).\r\n\r\n**Edit:** Not saying that this is useless, I just want to understand why it fails for you and not for me :-)",
"Something is wrong on your side and CI. \r\n\r\n`check_repo.py` actually did its job correctly and reported problems that this PR fixes. Have a look at the fix and tell me if that checker shouldn't have caught it.\r\n\r\nSpecifically:\r\n\r\n* `test_modeling_flax_bert.py` and `test_modeling_flax_roberta.py` don't run common tests, and yet they weren't added to the ignore list\r\n* there is no `tests/test_modeling_flax_utils.py` so `modeling_flax_utils` has to be on the other ignore list\r\n\r\nI have no idea why you and CI don't reproduce these failures.",
"I'm in the middle of something else right now, but will investigate once I'm done. I'm unsure of why the CI hasn't been more angry with the jax PR myself. But I'd like to understand why it's not failing for me and the CI before taking the fix if that makes sense.",
"Absolutely, @sgugger. I will try to poke and see if the script behaves differently for some reason. I will report back if I find the culprit.\r\n\r\nIt should be safe for you to merge this as the lack of it may impacts other devs, I posted all the details why it's correct in here https://github.com/huggingface/transformers/pull/7914#issuecomment-712396143\r\n\r\nIt's your call.",
"Found part of the culprit - my py37 env doesn't catch the problem, whereas py38 does - now need to figure out whether it's some package difference or python itself. ",
"Oh, interesting! I'm indeed on Python 3.7.9",
"and so is CI\r\n\r\nUsing my mightly https://github.com/stas00/conda-tools/blob/master/conda-env-compare.pl - I should get to the root of it in a few minutes",
"I downgraded the the py38 env to py37 and it doesn't detect the problem anymore. Upgraded it back to py38 via conda and it fails now too! bummer - so it must have been some package. I need to dig more.\r\n\r\nI'd check `jax` and `flax` since the new flax code depends on it.",
"I'm confused by your report: by \"it fails now too!\" do you mean you don't see the problem anymore?\r\n\r\n**Edit:** Think I've found the issue. It's because the presence/absence of jax/flax will change what's in the __init__. And neither me nor the CI have it installed.",
"If my reasoning is correct, #7919 should be red. We can then add it to your fixes and merge all of this together.",
"Yes, this is the culprit. \r\n\r\nIf you `pip install jax jaxlib flax` you should be able to reproduce the problem.\r\n\r\nSo basically the validation script is as good as the preinstalled pre-requisites allow it to be, therefore to move forward to do proper testing we need to have a prerequisites set that contains **all possible external packages** used by the core library.\r\n\r\nPerhaps we need to change `setup.py` to add:\r\n\r\n`extras[\"all\"] = list all groups here`\r\n\r\nand have the `check_code_quality` CI job installing `pip install -e .[all].\r\n\r\nBut specifically for this issue https://github.com/huggingface/transformers/pull/7919 will do the trick. I merged it here as you suggested.",
"> I'm confused by your report: by \"it fails now too!\" do you mean you don't see the problem anymore?\r\n\r\nYeah, I was trying to figure out the difference and there were too many differences in installed modules, so I downgraded to py37, then upgraded to py38 and lost an environment that was good for the purpose of this issue. I eventually recovered it. I need to remember to back up conda envs before I try to mess with them :("
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Unless, this is actually a problem, this PR adds `modeling_flax_utils` to ignore list. otherwise currently it expects to have `tests/test_modeling_flax_utils.py` for this module.
It also adds the 2 new tests that don't run common tests to `TEST_FILES_WITH_NO_COMMON_TESTS`
For context please see: https://github.com/huggingface/transformers/pull/3722#issuecomment-712360415
now check_repo is happy.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7914/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7914",
"html_url": "https://github.com/huggingface/transformers/pull/7914",
"diff_url": "https://github.com/huggingface/transformers/pull/7914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7914.patch",
"merged_at": 1603194940000
} |
https://api.github.com/repos/huggingface/transformers/issues/7913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7913/comments | https://api.github.com/repos/huggingface/transformers/issues/7913/events | https://github.com/huggingface/transformers/issues/7913 | 724,911,905 | MDU6SXNzdWU3MjQ5MTE5MDU= | 7,913 | `add_prefix_space=True` option in the BPE tokenizer | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,608 | 1,608 | NONE | null | Hello,
I understand that when I add the `add_prefix_space=True` option to the BPE tokenizer, the tokenizer will add a space at the beginning of every sequence.
Are there any specific advantages to using the `add_prefix_space=True` option for the BPE tokenizer (compared to not using it), given that all my sequences start without a space at the beginning?
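For context, here is a minimal sketch of the behaviour in question (GPT-2's tokenizer is used purely as an example of a byte-level BPE; I have not listed the exact output tokens):
```
from transformers import GPT2TokenizerFast

tok_default = GPT2TokenizerFast.from_pretrained("gpt2")
tok_prefixed = GPT2TokenizerFast.from_pretrained("gpt2", add_prefix_space=True)

# Byte-level BPE treats a word at the very start of a string ("dog") differently
# from the same word after a space (" dog"). add_prefix_space=True makes the
# first word tokenize the same way it would mid-sentence.
print(tok_default.tokenize("dog runs fast"))
print(tok_prefixed.tokenize("dog runs fast"))
```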
Thanks, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7913/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7912/comments | https://api.github.com/repos/huggingface/transformers/issues/7912/events | https://github.com/huggingface/transformers/issues/7912 | 724,880,522 | MDU6SXNzdWU3MjQ4ODA1MjI= | 7,912 | run_tf_text_classification.py giving "ValueError: too many values to unpack" | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey, I am getting the same error. \r\n\r\nI am using a three column CSV file which looks like this,\r\n\r\ndata.csv\r\nlabel,sent1,sent2\r\n0,he,so\r\n1,yes,why\r\n\r\nAny help would be appreciated. ",
"The error is caused by the following function\r\n`\r\ntransformed_ds[k] = ds[k].map(\r\n lambda example: tokenizer.batch_encode_plus(\r\n (example[features_name[0]], example[features_name[1]]),\r\n truncation=True,\r\n max_length=max_seq_length,\r\n padding=\"max_length\",\r\n ),\r\n batched=True,\r\n )\r\n`\r\nWhen I set `batched=False`, it could pass; however, another error arises. Any idea? @jplu",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,603 | 1,614 | 1,614 | CONTRIBUTOR | null | I am trying to run this script for token classification
https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_text_classification.py
Accoring to the instructions here
https://github.com/huggingface/transformers/tree/master/examples/text-classification
I formatted the data according> to the instructions
>the CSV files must have a header corresponding to the column names and not more than three columns: one column for the id, one column for the text and another column for a second piece of text in case of an entailment classification for example.
However, I am getting this error
> Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-57112360018dd326/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4. Subsequent calls will reuse this data.
> 10/19/2020 18:03:50 - INFO - filelock - Lock 140266200732560 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_csv_default-57112360018dd326_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
> 0% 0/5 [00:00<?, ?ba/s]Traceback (most recent call last):
> File "/content/transformers/examples/text-classification/run_tf_text_classification.py", line 292, in <module>
> main()
> File "/content/transformers/examples/text-classification/run_tf_text_classification.py", line 231, in main
> max_seq_length=data_args.max_seq_length,
> File "/content/transformers/examples/text-classification/run_tf_text_classification.py", line 68, in get_tfds
> batched=True,
> File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1256, in map
> update_data=update_data,
> File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 156, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 163, in wrapper
> out = func(self, *args, **kwargs)
> File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1517, in _map_single
> batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
> File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1435, in apply_function_on_filtered_inputs
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
> File "/content/transformers/examples/text-classification/run_tf_text_classification.py", line 66, in <lambda>
> padding="max_length",
> File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2323, in batch_encode_plus
> **kwargs,
> File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 560, in _batch_encode_plus
> ids, pair_ids = ids_or_pair_ids
> ValueError: too many values to unpack (expected 2)
> 0% 0/5 [00:00<?, ?ba/s]
It looks like the issue may be the script itself. I was having a previous issue running the script, and it looks like it was due to the datasets library
https://github.com/huggingface/datasets/issues/705#event-3839135529
It looks like the error is now with the script, or possibly the tokenizer. It sort of looks like the training wants only two types of inputs, but is being passed all of the inputs from `batch_encode_plus`, which may be more than two (token id type, attention id type, segment id type, etc.).
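To make the unpacking error concrete, here is a small sketch based on my reading of the traceback (column names are placeholders and this is not a verified fix): `_batch_encode_plus` tries to unpack each element of its input into `(ids, pair_ids)`, so a tuple of two whole columns fails, whereas a list of per-example `(text, text_pair)` pairs is a shape it can unpack.
```
batch = {"sent1": ["he", "yes"], "sent2": ["so", "why"]}

# What the script builds (per the traceback): a 2-tuple of whole columns.
columns_tuple = (batch["sent1"], batch["sent2"])

# Per-example pairs, which batch_encode_plus can unpack element by element.
example_pairs = list(zip(batch["sent1"], batch["sent2"]))
print(example_pairs)  # [('he', 'so'), ('yes', 'why')]

# Sketch of the pair-wise call:
# tokenizer.batch_encode_plus(example_pairs, truncation=True,
#                             padding="max_length", max_length=128)
```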
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: colab
- Python version: version 3, colab default
- PyTorch version (GPU?): colab default
- Tensorflow version (GPU?): colab default
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
I'm not sure, most likely the bug seems to be due to the example script itself, but could be the dataset, or tokenizer.
## Information
Model I am using: Bert, specifically scibert
The problem arises when using:
* [x ] the official example scripts: (give details below)
The tasks I am working on is:
* [x ] my own task or dataset: I am working with the chemprot dataset, for token classification. I followed the instructions to have the data in a csv file, with two columns (one for label, another for text), and headers.
## To reproduce
Here is a colab notebook of the issue.
https://colab.research.google.com/drive/1r3XCKYA8RBtfYmU2jqHVJT-uTt1ii04S?usp=sharing
## Expected behavior
Should train without error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7912/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7911/comments | https://api.github.com/repos/huggingface/transformers/issues/7911/events | https://github.com/huggingface/transformers/pull/7911 | 724,842,275 | MDExOlB1bGxSZXF1ZXN0NTA2MTY1MTUw | 7,911 | [Docstring] fix t5 training docstring | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes T5 docstring according to recent tokenizer changes.
Fixes #7904
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7911/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7911",
"html_url": "https://github.com/huggingface/transformers/pull/7911",
"diff_url": "https://github.com/huggingface/transformers/pull/7911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7911.patch",
"merged_at": 1603136988000
} |
https://api.github.com/repos/huggingface/transformers/issues/7910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7910/comments | https://api.github.com/repos/huggingface/transformers/issues/7910/events | https://github.com/huggingface/transformers/issues/7910 | 724,771,520 | MDU6SXNzdWU3MjQ3NzE1MjA= | 7,910 | [T5] Ignore sentinel indices for unsupervised denoising / masking objective? | {
"login": "ahoho",
"id": 13487685,
"node_id": "MDQ6VXNlcjEzNDg3Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13487685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahoho",
"html_url": "https://github.com/ahoho",
"followers_url": "https://api.github.com/users/ahoho/followers",
"following_url": "https://api.github.com/users/ahoho/following{/other_user}",
"gists_url": "https://api.github.com/users/ahoho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahoho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahoho/subscriptions",
"organizations_url": "https://api.github.com/users/ahoho/orgs",
"repos_url": "https://api.github.com/users/ahoho/repos",
"events_url": "https://api.github.com/users/ahoho/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahoho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @ahoho - good question! \r\n\r\nI'm pretty confident that you should mask all sentinel tokens (with -100) and only compute the loss the \"real\" labels being \"cute dog\", \"the\" and \"</s>\". \r\n\r\nAlso they are definitely not automatically ignored as is done for the pad_token_id in `examples/seq2seq`\r\n\r\nI could not find a more detailed explanation in the paper - so maybe @craffel could take a quick look as well and confirm (hope it's fine to tag you here Colin)",
"No need to treat the sentinel tokens specially (masking out their loss or otherwise). The model is trained to output both the sentinel tokens and the filled-in blanks."
] | 1,603 | 1,643 | 1,603 | NONE | null | The [docs](https://huggingface.co/transformers/model_doc/t5.html#training) state that the masked language modeling objective is simply
```
input_ids = tokenizer.encode('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt')
labels = tokenizer.encode('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt')
model(input_ids=input_ids, labels=labels)
```
I was wondering if I need to manually set the `additional_special_tokens_ids` (corresponding to the `<extra_id_#>` sentinels) in the `labels` to `-100` during training so that they are ignored by the loss, as I believe would be the case for the `[MASK]` tokens in BERT? It seems that at least the `pad_token_id` is ignored in [`examples/seq2seq`](https://github.com/huggingface/transformers/blob/a09fe140c1c059baf05c4f97e5b4e83c719608db/examples/seq2seq/finetune.py#L153), but it's not clear if this ought to be true for the sentinels as well. My suspicion is _no_, but since there's no canonical MLM code for T5, I figured it was worth checking.
(I asked this in the forums and in a somewhat related issue, but was recommended to post here & tag @patrickvonplaten / @thomwolf) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7910/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7909/comments | https://api.github.com/repos/huggingface/transformers/issues/7909/events | https://github.com/huggingface/transformers/issues/7909 | 724,749,987 | MDU6SXNzdWU3MjQ3NDk5ODc= | 7,909 | pegasus/cnn_dm 12-2 distillation performing poorly | {
"login": "karthikgali",
"id": 12197213,
"node_id": "MDQ6VXNlcjEyMTk3MjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/12197213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karthikgali",
"html_url": "https://github.com/karthikgali",
"followers_url": "https://api.github.com/users/karthikgali/followers",
"following_url": "https://api.github.com/users/karthikgali/following{/other_user}",
"gists_url": "https://api.github.com/users/karthikgali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karthikgali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karthikgali/subscriptions",
"organizations_url": "https://api.github.com/users/karthikgali/orgs",
"repos_url": "https://api.github.com/users/karthikgali/repos",
"events_url": "https://api.github.com/users/karthikgali/events{/privacy}",
"received_events_url": "https://api.github.com/users/karthikgali/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2368374212,
"node_id": "MDU6TGFiZWwyMzY4Mzc0MjEy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/pegasus",
"name": "pegasus",
"color": "1f76a8",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I would also try without `--fp16`",
"Sure @sshleifer I will try without `--fp16` and update the results here. Thanks for looking into this.",
"Hi @sshleifer \r\n\r\nI ran the below command for distillation (without --fp16 as you suggested):\r\n`python finetune.py --learning_rate=3e-5 --do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 6 --freeze_embeds --data_dir ./cnn_dm/ --max_target_length 142 --val_max_target_length=142 --train_batch_size=1 --eval_batch_size=1 --gradient_accumulation_steps=256 --model_name_or_path sshleifer/student_pegasus_cnn_12_2 --tokenizer_name google/pegasus-cnn_dailymail --warmup_steps 500 --output_dir distilpegasus-cnn-12-2 --gpus 1 --num_workers=0 --adafactor --freeze_encoder --task summarization --dropout 0.1 --attention_dropout 0.1 --label_smoothing 0.1 `\r\n\r\nHowever, the rouge scores are not improving even after 1 epoch\r\n`{\r\n \"val\": [\r\n {\r\n \"val_avg_loss\": 940.3131713867188,\r\n \"val_avg_rouge1\": 0.0,\r\n \"val_avg_rouge2\": 0.0,\r\n \"val_avg_rougeL\": 0.0,\r\n \"val_avg_rougeLsum\": 0.0,\r\n \"val_avg_gen_time\": 1.9830520153045654,\r\n \"val_avg_gen_len\": 128.0,\r\n \"step_count\": 1\r\n },\r\n {\r\n \"val_avg_loss\": 457.8860168457031,\r\n \"val_avg_rouge1\": 0.8307167999999999,\r\n \"val_avg_rouge2\": 0.0106524,\r\n \"val_avg_rougeL\": 0.8102172,\r\n \"val_avg_rougeLsum\": 0.8177266,\r\n \"val_avg_gen_time\": 1.9989106116294861,\r\n \"val_avg_gen_len\": 128.0,\r\n \"step_count\": 2\r\n },\r\n {\r\n \"val_avg_loss\": 297.9767761230469,\r\n \"val_avg_rouge1\": 2.7392655999999995,\r\n \"val_avg_rouge2\": 0.08615479999999999,\r\n \"val_avg_rougeL\": 2.4773216,\r\n \"val_avg_rougeLsum\": 2.6349664,\r\n \"val_avg_gen_time\": 1.7901806454658509,\r\n \"val_avg_gen_len\": 93.732,\r\n \"step_count\": 3\r\n },\r\n {\r\n \"val_avg_loss\": 272.0320129394531,\r\n \"val_avg_rouge1\": 4.0338778,\r\n \"val_avg_rouge2\": 0.2913826,\r\n \"val_avg_rougeL\": 3.4839722,\r\n \"val_avg_rougeLsum\": 3.7919970000000003,\r\n \"val_avg_gen_time\": 1.4304678964614868,\r\n \"val_avg_gen_len\": 47.67,\r\n \"step_count\": 4\r\n },\r\n {\r\n \"val_avg_loss\": 259.57611083984375,\r\n \"val_avg_rouge1\": 7.9237036000000005,\r\n \"val_avg_rouge2\": 0.7740864000000001,\r\n \"val_avg_rougeL\": 6.5176862,\r\n \"val_avg_rougeLsum\": 7.265688,\r\n \"val_avg_gen_time\": 1.3813148093223573,\r\n \"val_avg_gen_len\": 37.046,\r\n \"step_count\": 5\r\n }\r\n ]\r\n}\r\n`\r\n\r\nAfter 1 epoch, rogue2 score is 0.77. Could you please help if I am doing something wrong here?\r\n\r\nThanks in advance for your help.\r\n\r\nRegards,\r\nKarthik\r\n\r\n",
"+ Note that scores are improving, just very slowly.\r\n+ I have not had good luck with `sshleifer/student_pegasus_cnn_12_2`, I'd try to make your own student with a full encoder and a 4+ layer decoder starting. Using, for example:\r\n\r\n```bash\r\npython make_student.py sshleifer/pegasus-cnn-ft-v2 -save_path student_peg_cnn_16_4 -e 16 -d 4\r\n```\r\n\r\nHere is the [wandb log](https://wandb.ai/sshleifer/pegasus_ft/runs/32ov7btf?workspace=user-) for a run that used `student_peg_cnn_16_4` \r\n\r\n\r\n\r\n\r\nI started at `--max_target_length 56` and then finetuned more with `--max_target_length 142`. That log is the first run. The second run is [here](https://wandb.ai/sshleifer/pegasus_ft/runs/2z1t4r0t?workspace=user-)\r\n\r\n\r\nFWIW, XSUM trains much faster!",
"Thanks @sshleifer for your inputs.\r\n\r\nI am using this model (https://huggingface.co/sshleifer/distill-pegasus-cnn-16-4) which has 16 encoders and 4 decoders. I am trying to reduce the inference runtime of the model - for this reason, I am trying distillation with lesser encoders and decoders.\r\n\r\nCould you please suggest if I should try something different to reduce the inference runtime? \r\n\r\nRegards,\r\nKarthik\r\n",
"Try generating with the 16/4 model and `num_beams=2`.\r\n",
"Thanks @sshleifer for your suggestion. This improved the runtime. Please let me know if you have more such ideas.\r\n",
"Besides that, all that's easy is to make your input documents shorter, or make your generations shorter (with min_length, max_length).\r\n"
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-1028-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@sshleifer
## Information
I am trying to distil the Pegasus model to reduce its runtime and memory requirements. I am following the **No Teacher Distillation** approach. However, the model generates poor-quality text.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): CNN
* [ ] my own task or dataset: (give details below)
## To reproduce
I have trained the model using the commands below:
**Download data:**
wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz
tar -xzvf cnn_dm_v2.tgz # empty lines removed
mv cnn_cln cnn_dm
**Command to train:**
python finetune.py --learning_rate=3e-5 --do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 6 --freeze_encoder --freeze_embeds --data_dir ./cnn_dm/ --max_target_length 142 --val_max_target_length=142 --train_batch_size=1 --eval_batch_size=1 --gradient_accumulation_steps=256 --model_name_or_path sshleifer/student_pegasus_cnn_12_2 --tokenizer_name google/pegasus-cnn_dailymail --warmup_steps 500 --output_dir distilpegasus-cnn-12-2 --gpus 1 --adafactor --num_workers=0 --fp16_opt_level=O1 --fp16
**Inference code:**
```
from transformers import PegasusForConditionalGeneration, PegasusTokenizer, PegasusConfig
import torch
PEGASUS_MODEL = '/home/ubuntu/finetune/transformers/examples/seq2seq/distilpegasus-cnn-12-2/best_tfmr'
PEGASUS_TOKENIZER = 'google/pegasus-cnn_dailymail'
class PegasusSummarizer:
    def __init__(self):
        self.torch_device = 'cpu'
        self.tokenizer = PegasusTokenizer.from_pretrained(PEGASUS_TOKENIZER)
        self.model = PegasusForConditionalGeneration.from_pretrained(PEGASUS_MODEL).to(self.torch_device)

    def summarize(self, text):
        src_text = text
        batch = self.tokenizer.prepare_seq2seq_batch([src_text], truncation=True, padding='longest').to(self.torch_device)
        translated = self.model.generate(**batch)
        tgt_text = self.tokenizer.batch_decode(translated, skip_special_tokens=True)
        return tgt_text
summarizer = PegasusSummarizer()
print(summarizer.summarize('''(CNN)For the first time in eight years, a TV legend returned to doing what he does best. Contestants told to "come on down!" on the April 1 edition of "The Price Is Right" encountered not host Drew Carey but another familiar face in charge of the proceedings. Instead, there was Bob Barker, who hosted the TV game show for 35 years before stepping down in 2007. Looking spry at 91, Barker handled the first price-guessing game of the show, the classic "Lucky Seven," before turning hosting duties over to Carey, who finished up. Despite being away from the show for most of the past eight years, Barker didn't seem to miss a beat.'''))
```
**Output:** ['"It\'s time for the first time in a five-year anniversary of the show.']
**Output of google/pegasus-cnn_dailymail model**:['Barker hosted "The Price Is Right" for 35 years.<n>He stepped down in 2007.']
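For reference, a compact standalone sketch of the same inference path with the main generation knobs spelled out — the input string and every generation value below are illustrative, not the settings used for the numbers above; beam count and length limits are the main levers on generation time and output length:
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-cnn_dailymail")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-cnn_dailymail")

batch = tokenizer.prepare_seq2seq_batch(
    ["Some long article text ..."], truncation=True, padding="longest", return_tensors="pt"
)
# Fewer beams and a tighter length cap trade some quality for speed.
summary_ids = model.generate(**batch, num_beams=2, min_length=10, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```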
test_results.txt output:
src_pad_frac = tensor(0., device='cuda:0')
src_pad_tok = tensor(0, device='cuda:0')
step_count = 26
test_avg_gen_len = 48.63716275021758
test_avg_gen_time = 1.3503953615824382
test_avg_loss = 3.6937525272369385
test_avg_rouge1 = 19.983542428198433
test_avg_rouge2 = 4.130034786771105
test_avg_rougeL = 14.352700217580503
test_avg_rougeLsum = 18.460456248912102
test_loss = tensor(3.6938, device='cuda:0')
test_rouge2 = tensor(4.1300, device='cuda:0')
tpb = tensor(511, device='cuda:0')
val_avg_gen_len = 50.144
val_avg_gen_time = 1.513235685825348
val_avg_loss = 3.77506422996521
val_avg_rouge1 = 16.9548154
val_avg_rouge2 = 3.1666046
val_avg_rougeL = 12.980990400000001
val_avg_rougeLsum = 15.404284
val_loss = tensor(3.7751, device='cuda:0')
val_rouge2 = tensor(3.1666, device='cuda:0')
## Expected behavior
I expect much cleaner output and a higher ROUGE score. Any help in this regard would be greatly appreciated.
I am trying to retrain the model by removing **--freeze_encoder**.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7909/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7908/comments | https://api.github.com/repos/huggingface/transformers/issues/7908/events | https://github.com/huggingface/transformers/issues/7908 | 724,725,973 | MDU6SXNzdWU3MjQ3MjU5NzM= | 7,908 | [Model] M2M-100 Multilingual machine translation | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This model is very big. Is there a good way to prune it?",
"Moving to #8054 which is a duplicate (that I created)!",
"> This model is very big. Is there a good way to prune it?\r\n\r\n@Bachstelze Did you find any ways to distill or prune such a large model?",
"@robotsp \r\nThere is a smaller version: https://huggingface.co/alirezamsh/small100\r\n[SMaLL-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages](https://aclanthology.org/2022.emnlp-main.571.pdf)"
] | 1,603 | 1,676 | 1,604 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
Facebook AI is introducing M2M-100, the first multilingual machine translation (MMT) model that translates between any pair of 100 languages without relying on English data.
## Open source status
* [x] the model implementation is available: (give details) https://github.com/pytorch/fairseq/tree/master/examples/m2m_100?fbclid=IwAR2Oqew-PAwZpTmHMrq_yiXN2dwdzzbTMZ-4HfbNKfdoZ_M5TpQiPY3dYFo
* [x] the model weights are available: (give details) https://dl.fbaipublicfiles.com/m2m_100/12b_last_checkpoint.pt
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7908/reactions",
"total_count": 16,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7908/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7907/comments | https://api.github.com/repos/huggingface/transformers/issues/7907/events | https://github.com/huggingface/transformers/issues/7907 | 724,706,830 | MDU6SXNzdWU3MjQ3MDY4MzA= | 7,907 | Reproducing Bart Xsum from Bart Large | {
"login": "swethmandava",
"id": 17828952,
"node_id": "MDQ6VXNlcjE3ODI4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/17828952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swethmandava",
"html_url": "https://github.com/swethmandava",
"followers_url": "https://api.github.com/users/swethmandava/followers",
"following_url": "https://api.github.com/users/swethmandava/following{/other_user}",
"gists_url": "https://api.github.com/users/swethmandava/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swethmandava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swethmandava/subscriptions",
"organizations_url": "https://api.github.com/users/swethmandava/orgs",
"repos_url": "https://api.github.com/users/swethmandava/repos",
"events_url": "https://api.github.com/users/swethmandava/events{/privacy}",
"received_events_url": "https://api.github.com/users/swethmandava/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't know the answer to this question. Your numbers are close enough that all I can suggest is to either try fairseq's [command](https://github.com/pytorch/fairseq/blob/master/examples/bart/README.summarization.md#4-fine-tuning-on-cnn-dm-summarization-task) or look at differences between our command and fairseq.\r\n",
"> Summarization: @sshleifer Bart: @sshleifer\r\n> \r\n> ## Information\r\n> I'm trying to finetune bart large for Xsum and unable to reproduce the results from the paper.\r\n> \r\n> When I try eval with facebook/bart-large-xsum, I get R1=45.3595, RLSum=37.1717 so I assume my eval script is working ok. For finetuning bart large, I use the same config as bart-large-xsum with vocab size=50265 to enable starting from bart-large. However, I am unable to reach the same scores. The best I have is R1=45.4188, RLSum=36.6986 with LR=1.2e-4, gbs=128 and --max_target_length=60 --max_source_length=1024 --val_check_interval 0.1 --val_max_target_length=60 --warmup_steps 50 --max_steps 5000.\r\n> \r\n> How can I reproduce the results?\r\n\r\nHi @swethmandava . I'm trying to reproduce the result by Transformres. Would you mind sharing your fine-tuning script?"
] | 1,603 | 1,651 | 1,606 | CONTRIBUTOR | null | Summarization: @sshleifer
Bart: @sshleifer
## Information
I'm trying to fine-tune BART-large on XSum and am unable to reproduce the results from the paper.
When I try eval with facebook/bart-large-xsum, I get R1=45.3595, RLSum=37.1717 so I assume my eval script is working ok. For finetuning bart large, I use the same config as bart-large-xsum with vocab size=50265 to enable starting from bart-large. However, I am unable to reach the same scores. The best I have is R1=45.4188, RLSum=36.6986 with LR=1.2e-4, gbs=128 and --max_target_length=60 --max_source_length=1024 --val_check_interval 0.1 --val_max_target_length=60 --warmup_steps 50 --max_steps 5000.
How can I reproduce the results?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7907/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7906/comments | https://api.github.com/repos/huggingface/transformers/issues/7906/events | https://github.com/huggingface/transformers/pull/7906 | 724,692,102 | MDExOlB1bGxSZXF1ZXN0NTA2MDM5MDM3 | 7,906 | labels and decoder_input_ids to Glossary | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Completes the glossary with entries for `labels` and `decoder_input_ids`.
Closes https://github.com/huggingface/transformers/issues/7865
Pinging @sshleifer and @patrickvonplaten for advice regarding the `decoder_input_ids`, @sgugger for docs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7906/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7906",
"html_url": "https://github.com/huggingface/transformers/pull/7906",
"diff_url": "https://github.com/huggingface/transformers/pull/7906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7906.patch",
"merged_at": 1603201848000
} |
https://api.github.com/repos/huggingface/transformers/issues/7905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7905/comments | https://api.github.com/repos/huggingface/transformers/issues/7905/events | https://github.com/huggingface/transformers/issues/7905 | 724,678,014 | MDU6SXNzdWU3MjQ2NzgwMTQ= | 7,905 | [RAG] How to extract generated strings from `RetrievAugLMMarginOutput` | {
"login": "lalitpagaria",
"id": 19303690,
"node_id": "MDQ6VXNlcjE5MzAzNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalitpagaria",
"html_url": "https://github.com/lalitpagaria",
"followers_url": "https://api.github.com/users/lalitpagaria/followers",
"following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}",
"gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions",
"organizations_url": "https://api.github.com/users/lalitpagaria/orgs",
"repos_url": "https://api.github.com/users/lalitpagaria/repos",
"events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalitpagaria/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten can you please help",
"Hey @lalitpagaria,\r\n\r\nUsing embedding, retrieval and generation separately for RagSequence is not yet available sadly.\r\nYou should take a look into the `generate()` function of `RagSequenceForGeneration` for more detail on how to run it separately yourself.",
"Thanks @patrickvonplaten . I think we (haystack) will wait for implementation in transformers and use only RagToken for now.\r\nPlease let me know should I keep this open in case you plan to add functionality in the future? or close this.\r\n\r\ncc: @tholor ",
"Closing"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # ❓ Questions & Help
How to extract generated strings from `RetrievAugLMMarginOutput`?
## Details
When using `RagSequenceForGeneration` and the `retriever` separately, we can't use `model.generate` (see #7829), and calling `model.__call__` directly returns a `RetrievAugLMMarginOutput`. I am not able to find a way to extract `generated_ids` from it.
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7905/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7904/comments | https://api.github.com/repos/huggingface/transformers/issues/7904/events | https://github.com/huggingface/transformers/issues/7904 | 724,667,347 | MDU6SXNzdWU3MjQ2NjczNDc= | 7,904 | T5 Docs training example has shifted labels | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey Sam, it looks like the labels in the example you quoted are not shifted - can you be more specific about you think the labels are shifted?",
"yes, I think the `labels` should be unshifted here (i.e `labels` should be same as `input_ids`) since `shift_right` takes care of preparing shifted `decoder_input_ids`.",
"@craffel I assumed the labels were shifted because:\r\n+ Original: `The cute dog walks in the park`\r\n+ Input_ids: `The <extra_id_0> walks in <extra_id_1> park`\r\n+ Labels: `<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>` \r\n\r\n`input_ids` starts with unmasked \"The\", whereas labels starts with a sentinel token. ",
"I'm still not following - are you think the sentinel token `<extra_id_0>` is the same as the start-of-sequence token? They are different tokens.",
"@sshleifer - I don't really understand the problem here either. In the example the `labels` are provided as:\r\n\r\n```python\r\n<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>\r\n```\r\nwhich means that `decoder_input_ids` will be automatically created as:\r\n```python\r\n<s> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>\r\n```\r\n\r\n=> This looks correct to me\r\n",
"+1 to Patrick's take",
"Aah, yes, for t5 we just predict the masked out spans, unlike BART. So this looks correct. ",
"In the docs, the `</s>` is omitted from `input_ids`, but will be silently added due to #5866. Is this also the correct behavior?",
"@ahoho => good point - I will update the docs to reflect this behavior",
"@patrickvonplaten, thanks! Does this mean the docs were incorrect before? I guess my question is, for the denoising training, is it correct to append the `</s>` token to the `input_ids` (not `labels`) or isn't it?",
"`</s>` should be appended IMO -> It's just that this is done automatically since #5866 as you mentioned above :-) "
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/t5.rst#L42
Here is that link quoted:
#### Unsupervised denoising training
In teacher-forcing style, the target sequence is then appended by the EOS token and corresponds to the `labels`.
In this setup spans of the input sequence are masked by so-called sentinel tokens (*a.k.a* unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the *real* masked tokens.
Each sentinel token represents a unique mask token for this sentence and should start with `<extra_id_0>`, `<extra_id_1>`, ... up to `<extra_id_99>`. As a default, 100 sentinel tokens are available in `transformers.T5Tokenizer`.
For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be processed as follows:
```python
input_ids = tokenizer.encode('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt')
labels = tokenizer.encode('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt')
# the forward function automatically creates the correct decoder_input_ids
model(input_ids=input_ids, labels=labels)
```
1) Shouldn't the labels be unshifted, given that `decoder_input_ids = shift_right(labels)` @patrickvonplaten @patil-suraj ?
2) @craffel does this look correct to you?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7904/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7903/comments | https://api.github.com/repos/huggingface/transformers/issues/7903/events | https://github.com/huggingface/transformers/pull/7903 | 724,625,838 | MDExOlB1bGxSZXF1ZXN0NTA1OTgwNjE4 | 7,903 | Modelling Encoder-Decoder | Error :- `decoder_config` used before intialisation | {
"login": "ayubSubhaniya",
"id": 20911334,
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayubSubhaniya",
"html_url": "https://github.com/ayubSubhaniya",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions",
"organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs",
"repos_url": "https://api.github.com/users/ayubSubhaniya/repos",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten @sgugger, please review",
"Great catch @ayubSubhaniya !"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Getting error when sending `decoder_config` as a parameter while initializing an encoder-decoder model from pretrained.
# What does this PR do?
fixes "UnboundLocalError: local variable 'decoder_config' referenced before assignment"
## Who can review?
@patrickvonplaten @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7903/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7903",
"html_url": "https://github.com/huggingface/transformers/pull/7903",
"diff_url": "https://github.com/huggingface/transformers/pull/7903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7903.patch",
"merged_at": 1603129729000
} |
https://api.github.com/repos/huggingface/transformers/issues/7902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7902/comments | https://api.github.com/repos/huggingface/transformers/issues/7902/events | https://github.com/huggingface/transformers/pull/7902 | 724,596,084 | MDExOlB1bGxSZXF1ZXN0NTA1OTU1NjA1 | 7,902 | change TokenClassificationTask class methods to static methods | {
"login": "donchev7",
"id": 11960967,
"node_id": "MDQ6VXNlcjExOTYwOTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/11960967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donchev7",
"html_url": "https://github.com/donchev7",
"followers_url": "https://api.github.com/users/donchev7/followers",
"following_url": "https://api.github.com/users/donchev7/following{/other_user}",
"gists_url": "https://api.github.com/users/donchev7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donchev7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donchev7/subscriptions",
"organizations_url": "https://api.github.com/users/donchev7/orgs",
"repos_url": "https://api.github.com/users/donchev7/repos",
"events_url": "https://api.github.com/users/donchev7/events{/privacy}",
"received_events_url": "https://api.github.com/users/donchev7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"But why doesn't this PR trigger a CI test? @LysandreJik ",
"I have no idea, this is the second time it happens. I sent an empty commit on your branch to trigger the build @donchev7.",
"Pinging @stefan-it for review."
] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | Since we do not require self in the class methods of TokenClassificationTask we should probably switch to static methods. Also, since the class TokenClassificationTask does not contain a constructor it is currently unusable as is. By switching to static methods this fixes the issue of having to document the intent of the broken class.
Also, the `get_labels` and `read_examples_from_file` methods are meant to be implemented by subclasses. Static method definitions are inherited unchanged, which means a subclass can still override them, just like other class methods.
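Put together, a minimal sketch of the resulting pattern (the class and method names follow the token-classification example, but the signatures here are simplified for illustration):

```python
class TokenClassificationTask:
    # No instance state is needed, so the task hooks are plain static methods.
    @staticmethod
    def read_examples_from_file(data_dir, mode):
        raise NotImplementedError

    @staticmethod
    def get_labels(path):
        raise NotImplementedError


class NER(TokenClassificationTask):
    # Static methods are inherited like any other attribute, so subclasses can still override them.
    @staticmethod
    def get_labels(path):
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]
```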
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7902/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7902",
"html_url": "https://github.com/huggingface/transformers/pull/7902",
"diff_url": "https://github.com/huggingface/transformers/pull/7902.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7902.patch",
"merged_at": 1604587110000
} |
https://api.github.com/repos/huggingface/transformers/issues/7901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7901/comments | https://api.github.com/repos/huggingface/transformers/issues/7901/events | https://github.com/huggingface/transformers/issues/7901 | 724,539,472 | MDU6SXNzdWU3MjQ1Mzk0NzI= | 7,901 | GPT2Tokenizer strips spaces surrounding special tokens | {
"login": "jantrienes",
"id": 12009072,
"node_id": "MDQ6VXNlcjEyMDA5MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/12009072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jantrienes",
"html_url": "https://github.com/jantrienes",
"followers_url": "https://api.github.com/users/jantrienes/followers",
"following_url": "https://api.github.com/users/jantrienes/following{/other_user}",
"gists_url": "https://api.github.com/users/jantrienes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jantrienes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jantrienes/subscriptions",
"organizations_url": "https://api.github.com/users/jantrienes/orgs",
"repos_url": "https://api.github.com/users/jantrienes/repos",
"events_url": "https://api.github.com/users/jantrienes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jantrienes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The issue is still present.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This issue is still present in `transformers==4.5.1`.",
"Hey @jantrienes, sorry for getting back so late to you. The issue here is on our side - the added tokens work differently between the slow and fast tokenizers and we'll have to patch that. In the meantime, you can use `AddedToken`s instead of the strings to define the strategy w.r.t the stripping of whitespace. This should behave normally:\r\n\r\n```py\r\nimport os\r\nos.makedirs('models/', exist_ok=True)\r\n\r\nfrom transformers import GPT2Tokenizer\r\nfrom tokenizers import ByteLevelBPETokenizer, AddedToken\r\n\r\n\r\nopen('train.txt', 'w').write('Training data including a <special> token.')\r\n\r\nspecial_tokens = [AddedToken('<special>')]\r\n\r\nbpe_tokenizer = ByteLevelBPETokenizer()\r\nbpe_tokenizer.train(\r\n files=['train.txt'],\r\n special_tokens=special_tokens\r\n)\r\nbpe_tokenizer.save_model('models/')\r\n\r\ngpt2_tokenizer = GPT2Tokenizer.from_pretrained(\r\n 'models/',\r\n)\r\ngpt2_tokenizer.add_special_tokens({\"additional_special_tokens\": special_tokens})\r\n\r\ntext = 'A <special> token.'\r\n\r\nprint(bpe_tokenizer.encode(text).tokens)\r\nprint(gpt2_tokenizer.tokenize(text))\r\nassert bpe_tokenizer.encode(text).tokens == ['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']\r\nassert gpt2_tokenizer.tokenize(text) == ['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.'\r\n```\r\n\r\nNote how the special tokens are defined using `AddedToken` instead of a string. Unfortunately, these cannot be passed during initialization as you've done, but I'm fixing this in https://github.com/huggingface/transformers/pull/11325.\r\n\r\nYou can control the `AddedToken`'s behavior relative to whitespace using the `rstrip` and `lstrip` keyword arguments.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This issue is present for \"non-special\" tokens in transformer 4.12.3. The code explicitly strips spaces from the surrounding tokens. It does NOT look for the `AddedToken` class to get the behavior so there is no way to change this.\r\n\r\nSee `tokenization_utils.py::tokenizer()` line 517:\r\n```\r\n else:\r\n # We strip left and right by default\r\n if right:\r\n tokens[i + 1] = right.lstrip()\r\n if left:\r\n tokens[i - 1] = left.rstrip()\r\n```\r\nThis leads to incorrect behavior unless the added token is in the middle of a longer word.\r\n\r\nI should point out that the situation is more complicated than just changing the behavior above. Even if you comment out those lines (at least with the T5Tokenizer) there are still no spaces around the added tokens because, just below those lines, the logic to concatenate all the tokens is also stripping the spaces as well (see `tokenized_text.extend(self._tokenize(token))`).\r\n\r\nI'm not sure what the right solution is but this is in the base class so it's happening for a number of tokenizers including T5 and bart. It is also happening for T5Fast.",
"@SaulLu, would you like to take a look at this?",
"Glad to look at this issue in more detail! I'll dive in tomorrow :monocle_face: ",
"\r\n> I'm not sure what the right solution is but this is in the base class so it's happening for a number of tokenizers including T5 and bart. It is also happening for T5Fast.\r\n\r\nYeah, I'm noticing spaces around my added tokens when creating a custom Bart tokenizer. Did this get resolved for you? Is there a way to work around it? "
] | 1,603 | 1,654 | 1,621 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz, this seems to be an issue related to tokenization. So I hope you are the right person to ping here.
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Minimal working example:
```py
import os
os.makedirs('models/', exist_ok=True)
from transformers import GPT2Tokenizer
from tokenizers import ByteLevelBPETokenizer
open('train.txt', 'w').write('Training data including a <special> token.')
special_tokens = ['<special>']
bpe_tokenizer = ByteLevelBPETokenizer()
bpe_tokenizer.train(
files=['train.txt'],
special_tokens=special_tokens
)
bpe_tokenizer.save_model('models/')
gpt2_tokenizer = GPT2Tokenizer.from_pretrained(
'models/',
additional_special_tokens=special_tokens,
)
```
When encoding the text below, the two tokenizers yield different outputs:
```py
>>> text = 'A <special> token.'
>>> bpe_tokenizer.encode(text).tokens
['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']
>>> gpt2_tokenizer.tokenize(text)
['A', '<special>', 't', 'o', 'k', 'e', 'n', '.'] # <----- Note the missing space (`Ġ`) around `<special>`
```
## Expected behavior
I would expect that both tokenizers give the same output when encoding the sentence. Furthermore, because `GPT2Tokenizer` seems to remove the spaces surrounding the special token, the decode(encode()) does not return the original string.
```py
assert bpe_tokenizer.encode(text).tokens == ['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']
assert gpt2_tokenizer.tokenize(text) == ['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']
```
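A related knob worth noting (a sketch only — it assumes the `models/` directory created above, and the exact behavior can differ between versions): defining the special token as a `tokenizers.AddedToken` makes the whitespace handling explicit via `lstrip`/`rstrip`:

```python
from tokenizers import AddedToken
from transformers import GPT2Tokenizer

gpt2_tokenizer = GPT2Tokenizer.from_pretrained('models/')
# lstrip/rstrip control whether whitespace around the added token is stripped away.
gpt2_tokenizer.add_special_tokens(
    {"additional_special_tokens": [AddedToken('<special>', lstrip=False, rstrip=False)]}
)
print(gpt2_tokenizer.tokenize('A <special> token.'))
```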
It is possible that I misunderstand the `GPT2Tokenizer` API. Please advise if I should pass `special_tokens` in a different way. Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7901/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7900/comments | https://api.github.com/repos/huggingface/transformers/issues/7900/events | https://github.com/huggingface/transformers/issues/7900 | 724,473,717 | MDU6SXNzdWU3MjQ0NzM3MTc= | 7,900 | example for passage re-ranking using bert-multilingual-passage-reranking-msmarco | {
"login": "vyaslkv",
"id": 33617789,
"node_id": "MDQ6VXNlcjMzNjE3Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/33617789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyaslkv",
"html_url": "https://github.com/vyaslkv",
"followers_url": "https://api.github.com/users/vyaslkv/followers",
"following_url": "https://api.github.com/users/vyaslkv/following{/other_user}",
"gists_url": "https://api.github.com/users/vyaslkv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vyaslkv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyaslkv/subscriptions",
"organizations_url": "https://api.github.com/users/vyaslkv/orgs",
"repos_url": "https://api.github.com/users/vyaslkv/repos",
"events_url": "https://api.github.com/users/vyaslkv/events{/privacy}",
"received_events_url": "https://api.github.com/users/vyaslkv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,608 | 1,608 | NONE | null | can you give me a working example this
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
```
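A minimal sketch of how a checkpoint like this is typically used for re-ranking — it assumes the model is a standard two-label sequence-pair classifier (query paired with passage) and that index 1 is the "relevant" class, which should be verified against the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "amberoad/bert-multilingual-passage-reranking-msmarco"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "how do solar panels work"
passages = [
    "Solar panels convert sunlight into electricity using photovoltaic cells.",
    "The history of the printing press dates back to the 15th century.",
]

# Each (query, passage) pair is encoded together; the model scores how well the passage answers the query.
inputs = tokenizer([query] * len(passages), passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]

# Assumption: the second logit is the "relevant" class; rank passages by that probability.
scores = torch.softmax(logits, dim=-1)[:, 1].tolist()
for passage, score in sorted(zip(passages, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {passage}")
```

Higher scores should then correspond to passages that better answer the query.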
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7900/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7900/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7899/comments | https://api.github.com/repos/huggingface/transformers/issues/7899/events | https://github.com/huggingface/transformers/pull/7899 | 724,459,086 | MDExOlB1bGxSZXF1ZXN0NTA1ODQxODAw | 7,899 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7899/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7899",
"html_url": "https://github.com/huggingface/transformers/pull/7899",
"diff_url": "https://github.com/huggingface/transformers/pull/7899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7899.patch",
"merged_at": 1603283036000
} |
https://api.github.com/repos/huggingface/transformers/issues/7898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7898/comments | https://api.github.com/repos/huggingface/transformers/issues/7898/events | https://github.com/huggingface/transformers/issues/7898 | 724,428,508 | MDU6SXNzdWU3MjQ0Mjg1MDg= | 7,898 | [EncoderDecoder] google/roberta2roberta_L-24_wikisplit | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Confirmed by author that BOS and EOS have to be added -> fixed in https://github.com/huggingface/transformers/commit/0724c0f3a2d302246d0bd0b7d2f721fa902dee1b."
] | 1,603 | 1,603 | 1,603 | MEMBER | null | It seems like `google/roberta2roberta_L-24_wikisplit` should be pre-processed differently than originally thought:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_wikisplit")
long_sentence = """Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it and he decides to open Bob 's Burgers for customers who were planning on going to Lobsterfest."""
input_ids = tokenizer(long_sentence, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Due Due hurricane, Lobsterfest has been canceled, making Bob very happy about it. He decides to open B
# ob's Burgers for customers who were planning on going to Lobsterfest.com.
```
yields a weird word duplication bug, whereas:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_wikisplit")
long_sentence = """Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it and he decides to open Bob 's Burgers for customers who were planning on going to Lobsterfest."""
input_ids = tokenizer(tokenizer.bos_token + long_sentence + tokenizer.eos_token, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it. He decides to open Bob's Burgers for customers who were planning on going to Lobsterfest.
```
yields good results.
Notice the difference of
`input_ids = tokenizer(tokenizer.bos_token + long_sentence + tokenizer.eos_token, return_tensors="pt").input_ids` between code sample 1 and 2.
@patrickvonplaten
Wait for author's (@shashiongithub) answer before changing code example.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7898/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7898/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7897/comments | https://api.github.com/repos/huggingface/transformers/issues/7897/events | https://github.com/huggingface/transformers/issues/7897 | 724,375,221 | MDU6SXNzdWU3MjQzNzUyMjE= | 7,897 | GPT2Tokenizer.add_tokens() didnt change tokenizer.vocab_size | {
"login": "Makigumoe",
"id": 29876156,
"node_id": "MDQ6VXNlcjI5ODc2MTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/29876156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Makigumoe",
"html_url": "https://github.com/Makigumoe",
"followers_url": "https://api.github.com/users/Makigumoe/followers",
"following_url": "https://api.github.com/users/Makigumoe/following{/other_user}",
"gists_url": "https://api.github.com/users/Makigumoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Makigumoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Makigumoe/subscriptions",
"organizations_url": "https://api.github.com/users/Makigumoe/orgs",
"repos_url": "https://api.github.com/users/Makigumoe/repos",
"events_url": "https://api.github.com/users/Makigumoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Makigumoe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,608 | 1,608 | NONE | null | I followed the instructions in `add_tokens()`. Code:

```python
special_tokens = ['<pad>', '<go_r>', '<unk>', '<go_b>', '<go_a>', '<go_u>', ]
num_added_toks = tokenizer.add_tokens(special_tokens)
print('We have added', num_added_toks, 'tokens')
```
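As the next sentence notes, the added tokens do round-trip through encode/decode. For completeness, here is a minimal sketch of how the effective vocabulary size is usually checked (hedged: it assumes the standard `gpt2` checkpoint, and the resize call is only relevant when the tokens are used with a model):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

num_added_toks = tokenizer.add_tokens(['<pad>', '<go_r>', '<unk>', '<go_b>', '<go_a>', '<go_u>'])
print(tokenizer.vocab_size)  # 50257 -- the base vocabulary only; added tokens are tracked separately
print(len(tokenizer))        # 50257 + num_added_toks -- the effective vocabulary size

# If the tokens are fed to a model, its embedding matrix is usually resized to match:
model.resize_token_embeddings(len(tokenizer))
```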
The tokenizer can successfully encode and decode the newly added tokens, but `GPT2Tokenizer.vocab_size` is still 50257. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7897/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7896/comments | https://api.github.com/repos/huggingface/transformers/issues/7896/events | https://github.com/huggingface/transformers/issues/7896 | 724,244,101 | MDU6SXNzdWU3MjQyNDQxMDE= | 7,896 | Bert2bert EncoderDecoderModel from Huggingface is generating a zero tensor for any input | {
"login": "Aakash12980",
"id": 33715594,
"node_id": "MDQ6VXNlcjMzNzE1NTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/33715594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aakash12980",
"html_url": "https://github.com/Aakash12980",
"followers_url": "https://api.github.com/users/Aakash12980/followers",
"following_url": "https://api.github.com/users/Aakash12980/following{/other_user}",
"gists_url": "https://api.github.com/users/Aakash12980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aakash12980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aakash12980/subscriptions",
"organizations_url": "https://api.github.com/users/Aakash12980/orgs",
"repos_url": "https://api.github.com/users/Aakash12980/repos",
"events_url": "https://api.github.com/users/Aakash12980/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aakash12980/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @Aakash12980,\r\n\r\nCould you please provide a complete code examples with which I can reproduce the results? Did you train the model yourself or did you use a pretrained model? (Which one in case you did?)",
"> Hey @Aakash12980,\r\n> \r\n> Could you please provide a complete code examples with which I can reproduce the results? Did you train the model yourself or did you use a pretrained model? (Which one in case you did?)\r\n\r\n@patrickvonplaten I have used pretrained bert2bert model and I am fine-tuning it for sentence simplification. \r\n\r\nThe full code in run.py is below\r\n\r\n```\r\nlogging.basicConfig(filename=\"./drive/My Drive/Mini Project/log_file.log\", level=logging.INFO, \r\n format=\"%(asctime)s:%(levelname)s: %(message)s\")\r\nCONTEXT_SETTINGS = dict(help_option_names = ['-h', '--help'])\r\n\r\nTRAIN_BATCH_SIZE = 4 \r\nN_EPOCH = 5\r\nLOG_EVERY = 11000\r\n\r\nconfig_encoder = BertConfig()\r\nconfig_decoder = BertConfig()\r\nconfig_decoder.is_decoder = True\r\nconfig_decoder.add_cross_attention = True\r\nconfig = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-cased', 'bert-base-cased', config=config)\r\n\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\nprint(f\"Using {device} as device\")\r\n\r\n\r\ndef collate_fn(batch):\r\n data_list, label_list = [], []\r\n for _data, _label in batch:\r\n data_list.append(_data)\r\n label_list.append(_label)\r\n return data_list, label_list\r\n\r\ndef save_model_checkpt(state, is_best, check_pt_path, best_model_path):\r\n f_path = check_pt_path\r\n torch.save(state, f_path)\r\n\r\n if is_best:\r\n best_fpath = best_model_path\r\n shutil.copyfile(f_path, best_fpath)\r\n\r\ndef load_checkpt(checkpt_path, optimizer=None):\r\n if device == \"cpu\":\r\n checkpoint = torch.load(checkpt_path, map_location=torch.device(\"cpu\"))\r\n model.load_state_dict(checkpoint[\"model_state_dict\"])\r\n if optimizer is not None:\r\n optimizer.load_state_dict(checkpoint[\"optimizer_state_dict\"], map_location=torch.device(\"cpu\"))\r\n \r\n else:\r\n model.load_state_dict(checkpoint[\"model_state_dict\"])\r\n if optimizer is not None:\r\n optimizer.load_state_dict(checkpoint[\"optimizer_state_dict\"])\r\n\r\n eval_loss = checkpoint[\"eval_loss\"]\r\n epoch = checkpoint[\"epoch\"]\r\n\r\n return optimizer, eval_loss, epoch\r\n\r\n\r\n\r\ndef evaluate(batch_iter, e_loss):\r\n was_training = model.training\r\n model.eval()\r\n eval_loss = e_loss\r\n\r\n with torch.no_grad():\r\n for step, batch in enumerate(batch_iter):\r\n src_tensors, src_attn_tensors, tgt_tensors, tgt_attn_tensors = generate_tokens(batch)\r\n loss, _ = model(input_ids = src_tensors.to(device), \r\n decoder_input_ids = tgt_tensors.to(device),\r\n attention_mask = src_attn_tensors.to(device),\r\n decoder_attention_mask = tgt_attn_tensors.to(device),\r\n labels=tgt_tensors.to(device))[:2]\r\n \r\n eval_loss += (1/(step+1)) * (loss.item() - eval_loss)\r\n\r\n if was_training:\r\n model.train()\r\n\r\n return eval_loss \r\n\r\[email protected](context_settings=CONTEXT_SETTINGS)\r\[email protected]_option(version = '1.0.0')\r\ndef task():\r\n ''' This is the documentation of the main file. 
This is the reference for executing this file.'''\r\n pass\r\n\r\n\r\[email protected]()\r\[email protected]('--src_train', default=\"./drive/My Drive/Mini Project/dataset/src_train.txt\", help=\"train source file path\")\r\[email protected]('--tgt_train', default=\"./drive/My Drive/Mini Project/dataset/tgt_train.txt\", help=\"train target file path\")\r\[email protected]('--src_valid', default=\"./drive/My Drive/Mini Project/dataset/src_valid.txt\", help=\"validation source file path\")\r\[email protected]('--tgt_valid', default=\"./drive/My Drive/Mini Project/dataset/tgt_valid.txt\", help=\"validation target file path\")\r\[email protected]('--best_model', default=\"./drive/My Drive/Mini Project/best_model/model.pt\", help=\"best model file path\")\r\[email protected]('--checkpoint_path', default=\"./drive/My Drive/Mini Project/checkpoint/model_ckpt.pt\", help=\" model check point files path\")\r\[email protected]('--seed', default=123, help=\"manual seed value (default=123)\")\r\ndef train(**kwargs):\r\n print(\"Training data module executing...\")\r\n logging.info(f\"Train module invoked.\")\r\n seed = kwargs[\"seed\"]\r\n torch.manual_seed(seed)\r\n if device == \"cuda\":\r\n torch.cuda.manual_seed(seed)\r\n \r\n np.random.seed(seed)\r\n print(\"Loading dataset...\")\r\n \r\n src_train = open_file(kwargs['src_train'])\r\n tgt_train = open_file(kwargs['tgt_train'])\r\n src_valid = open_file(kwargs['src_valid'])\r\n tgt_valid = open_file(kwargs['tgt_valid'])\r\n train_len = len(src_train)\r\n valid_len = len(src_valid)\r\n print(\"Dataset Loaded.\")\r\n\r\n train_dataset = WikiDataset(src_train, tgt_train)\r\n valid_dataset = WikiDataset(src_valid, tgt_valid)\r\n del src_valid, src_train, tgt_train, tgt_valid\r\n\r\n print(\"Creating Dataloader...\")\r\n train_dl = DataLoader(train_dataset, batch_size=TRAIN_BATCH_SIZE, collate_fn=collate_fn, shuffle=True)\r\n valid_dl = DataLoader(valid_dataset, batch_size=TRAIN_BATCH_SIZE, collate_fn=collate_fn, shuffle=True)\r\n print(\"Dataloader created.\")\r\n\r\n model.to(device)\r\n param_optimizer = list(model.named_parameters())\r\n no_decay = ['bias', 'gamma', 'beta']\r\n optimizer_grouped_parameters = [\r\n {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],\r\n 'weight_decay_rate': 0.01},\r\n {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],\r\n 'weight_decay_rate': 0.0}\r\n ]\r\n optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=3e-3)\r\n eval_loss = float('inf')\r\n start_epoch = 0\r\n if os.path.exists(kwargs[\"checkpoint_path\"]):\r\n optimizer, eval_loss, start_epoch = load_checkpt(kwargs[\"checkpoint_path\"], optimizer)\r\n print(f\"Loading model from checkpoint with start epoch: {start_epoch} and loss: {eval_loss}\")\r\n logging.info(f\"Model loaded from saved checkpoint with start epoch: {start_epoch} and loss: {eval_loss}\")\r\n\r\n train_model(start_epoch, eval_loss, (train_dl, valid_dl), optimizer, kwargs[\"checkpoint_path\"], kwargs[\"best_model\"], (train_len, valid_len))\r\n print(\"Model Training Complete!\")\r\n \r\n \r\n\r\[email protected]()\r\[email protected]('--src_test', default=\"./drive/My Drive/Mini Project/dataset/src_test.txt\", help=\"test source file path\")\r\[email protected]('--tgt_test', default=\"./drive/My Drive/Mini Project/dataset/tgt_test.txt\", help=\"test target file path\")\r\[email protected]('--best_model', default=\"./drive/My Drive/Mini Project/best_model/model.pt\", help=\"best model file path\")\r\ndef test(**kwargs):\r\n 
print(\"Testing Model module executing...\")\r\n logging.info(f\"Test module invoked.\")\r\n\r\n src_test = open_file(kwargs['src_test'])\r\n tgt_test = open_file(kwargs['tgt_test'])\r\n len_data = len(src_test)\r\n\r\n test_dataset = WikiDataset(src_test, tgt_test)\r\n\r\n test_dl = DataLoader(test_dataset, batch_size=TRAIN_BATCH_SIZE, collate_fn=collate_fn, shuffle=True)\r\n\r\n _,_,_ = load_checkpt(kwargs[\"best_model\"])\r\n print(\"Model loaded...\")\r\n model.to(device)\r\n model.eval()\r\n\r\n test_start_time = time.time()\r\n epoch_test_loss = evaluate(test_dl, 0)\r\n epoch_test_loss = epoch_test_loss/len_data\r\n print(f'avg. test loss: {epoch_test_loss:.5f} | time elapsed: {time.time() - test_start_time}')\r\n logging.info(f'avg. test loss: {epoch_test_loss:.5f} | time elapsed: {time.time() - test_start_time}')\r\n print(\"Test Complete!\")\r\n \r\n\r\n#/drive/My Drive/Mini Project\r\[email protected]()\r\[email protected]('--src_file', default=\"./drive/My Drive/Mini Project/dataset/src_file.txt\", help=\"test source file path\")\r\[email protected]('--best_model', default=\"./drive/My Drive/Mini Project/checkpoint/model_ckpt.pt\", help=\"best model file path\")\r\[email protected]('--output', default=\"./drive/My Drive/Mini Project/outputs/decoded.txt\", help=\"file path to save predictions\")\r\ndef decode(**kwargs):\r\n print(\"Decoding sentences module executing...\")\r\n logging.info(f\"Decode module invoked.\")\r\n src_test = open_file(kwargs['src_file'])\r\n print(\"Saved model loading...\")\r\n _,_,_ = load_checkpt(kwargs[\"best_model\"])\r\n print(f\"Model loaded.\")\r\n model.to(device)\r\n model.eval()\r\n inp_tokens = create_sent_tokens(src_test)\r\n predicted_list = []\r\n print(\"Decoding Sentences...\")\r\n for tensor in inp_tokens:\r\n with torch.no_grad():\r\n predicted = model.generate(tensor.to(device), decoder_start_token_id=model.config.decoder.pad_token_id)\r\n print(f\"input: {tensor}\")\r\n print(f'output: {predicted.squeeze()}')\r\n predicted_list.append(predicted.squeeze())\r\n \r\n output = get_sent_from_tokens(predicted_list)\r\n with open(kwargs[\"output\"], \"w\") as f:\r\n for sent in output:\r\n f.write(sent + \"\\n\")\r\n print(\"Output file saved successfully.\")\r\n\r\ndef train_model(start_epoch, eval_loss, loaders, optimizer, check_pt_path, best_model_path, len_data):\r\n best_eval_loss = eval_loss\r\n print(\"Model training started...\")\r\n for epoch in range(start_epoch, N_EPOCH):\r\n print(f\"Epoch {epoch} running...\")\r\n epoch_start_time = time.time()\r\n epoch_train_loss = 0\r\n epoch_eval_loss = 0\r\n\r\n model.train()\r\n for step, batch in enumerate(loaders[0]):\r\n\r\n src_tensors, src_attn_tensors, tgt_tensors, tgt_attn_tensors = generate_tokens(batch)\r\n\r\n optimizer.zero_grad()\r\n model.zero_grad()\r\n loss = model(input_ids = src_tensors.to(device), \r\n decoder_input_ids = tgt_tensors.to(device),\r\n attention_mask = src_attn_tensors.to(device),\r\n decoder_attention_mask = tgt_attn_tensors.to(device),\r\n labels = tgt_tensors.to(device))[0]\r\n \r\n loss.backward()\r\n optimizer.step()\r\n epoch_train_loss += (1/(step+1))*(loss.item() - epoch_train_loss)\r\n\r\n if (step+1) % LOG_EVERY == 0:\r\n print(f'Epoch: {epoch} | iter: {step+1} | avg. loss: {epoch_train_loss/TRAIN_BATCH_SIZE} | time elapsed: {time.time() - epoch_start_time}')\r\n logging.info(f'Epoch: {epoch} | iter: {step+1} | avg. 
loss: {epoch_train_loss/TRAIN_BATCH_SIZE} | time elapsed: {time.time() - epoch_start_time}')\r\n eval_start_time = time.time()\r\n epoch_eval_loss = evaluate(loaders[1], epoch_eval_loss)\r\n epoch_eval_loss = epoch_eval_loss/TRAIN_BATCH_SIZE\r\n print(f'Completed Epoch: {epoch} | avg. eval loss: {epoch_eval_loss:.5f} | time elapsed: {time.time() - eval_start_time}')\r\n logging.info(f'Completed Epoch: {epoch} | avg. eval loss: {epoch_eval_loss:.5f} | time elapsed: {time.time() - eval_start_time}')\r\n \r\n check_pt = {\r\n 'epoch': epoch+1,\r\n 'model_state_dict': model.state_dict(),\r\n 'optimizer_state_dict': optimizer.state_dict(),\r\n 'eval_loss': epoch_eval_loss\r\n }\r\n check_pt_time = time.time()\r\n print(\"Saving Checkpoint.......\")\r\n if epoch_eval_loss < best_eval_loss:\r\n print(\"New best model found\")\r\n logging.info(f\"New best model found\")\r\n best_eval_loss = epoch_eval_loss\r\n save_model_checkpt(check_pt, True, check_pt_path, best_model_path)\r\n else:\r\n save_model_checkpt(check_pt, False, check_pt_path, best_model_path) \r\n print(f\"Checkpoint saved successfully with time: {time.time() - check_pt_time}\")\r\n logging.info(f\"Checkpoint saved successfully with time: {time.time() - check_pt_time}\")\r\n \r\n gc.collect()\r\n torch.cuda.empty_cache() \r\n\r\n\r\nif __name__ == \"__main__\":\r\n task()\r\n```\r\n\r\nThe GitHub link for my full project is here [https://github.com/Aakash12980/Sentence-Simplification-using-Transformer](url). ",
"Hey @Aakash12980 - I think you forgot to replace the pad_token_ids with -100 in your code. You can checkout the map function here: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script to see how this should be done.",
"> Hey @Aakash12980 - I think you forgot to replace the pad_token_ids with -100 in your code. You can checkout the map function here: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script to see how this should be done.\r\n\r\n@patrickvonplaten I have included -100 as well in the labels and again I am getting the same weird outputs from my model.generate() method. \r\n```\r\nSetting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence\r\noutput: tensor([101, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119], device='cuda:0')\r\nSetting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence\r\noutput: tensor([101, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,\r\n 119, 119], device='cuda:0')\r\n```\r\n\r\nMy updated code is this link: [https://github.com/Aakash12980/Sentence-Simplification-using-Transformer](url)",
"Hey @Aakash12980, hmm - I'm not sure what to do here then either. I've done quite a lot of Bert2Bert fine-tune runs and they all worked for me. Also I will publish a more in-detail notebook in ~2 weeks on how to do Bert2Bert training. \r\n\r\nIt's quite time-consuming and difficult for me to dive deeper into your code, so the best I can do for you at the moment is this training code snippet: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script",
"@patrickvonplaten Thank you so much for your time. ",
"@patrickvonplaten Could you please explain to me why you have assigned -100 to PAD tokens?\r\n```\r\n# mask loss for padding\r\n batch[\"labels\"] = [\r\n [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch[\"labels\"]\r\n ]\r\n```\r\nI don't quite understand what mask loss for padding means. Also, Do I need to pass labels when I am not using Trainer() method?\r\n```\r\nloss, logits = self.model(input_ids = src_tensors.to(device), \r\n decoder_input_ids = tgt_tensors.to(device),\r\n attention_mask = src_attn_tensors.to(device),\r\n decoder_attention_mask = tgt_attn_tensors.to(device),\r\n labels = labels.to(device))[:2]\r\n```\r\nWhy do we pass labels since we have already provided decoder_iniput_ids?",
"if label == -100, it means that on such tokens no loss is calculated and therefore does not affect the gradient. We don't want the model to learn to predict PAD tokens, so we make sure that no loss is calculated by putting a -100 on them. PyTorch's CE loss function uses -100 as the ignore token number.",
"Thank you @patrickvonplaten I have trained my model but my model is generating tokens exactly similar to the input tokens. I tried to find out what have I done wrong but I couldn't. I don't know why it is generating the same tokens as inputs. Do you have any idea?\r\n```\r\nModel loaded.\r\nDecoding Sentences...\r\ninput tokens: tensor([[ 101, 140, 23339, 1706, 12788, 140, 22715, 8469, 1106, 2194,\r\n 1103, 5080, 1104, 172, 23339, 17458, 1107, 170, 172, 22715,\r\n 24759, 3536, 119, 102]])\r\nSetting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence\r\noutput tokens: tensor([ 101, 140, 23339, 1706, 12788, 140, 22715, 8469, 1106, 2194,\r\n 1103, 5080, 1104, 172, 23339, 17458, 1107, 170, 172, 22715,\r\n 24759, 3536, 119, 102], device='cuda:0')\r\ninput tokens: tensor([[ 101, 18959, 18072, 1108, 1549, 1103, 13852, 6607, 1104, 156,\r\n 14046, 3263, 1670, 1118, 1624, 1600, 1107, 18563, 1571, 117,\r\n 1106, 9489, 1105, 2669, 1103, 2226, 112, 188, 16358, 20408,\r\n 119, 102]])\r\nSetting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence\r\noutput tokens: tensor([ 101, 18959, 18072, 1108, 1549, 1103, 13852, 6607, 1104, 156,\r\n 14046, 3263, 1670, 1118, 1624, 1600, 1107, 18563, 1571, 117,\r\n 1106, 9489, 1105, 2669, 1103, 2226, 112, 188, 16358, 20408,\r\n 119, 102], device='cuda:0')\r\ninput tokens: tensor([[ 101, 153, 14272, 1633, 1110, 170, 5188, 1107, 1103, 1318,\r\n 26042, 2853, 1107, 1564, 118, 2466, 1699, 119, 102]])\r\nSetting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence\r\noutput tokens: tensor([ 101, 153, 14272, 1633, 1110, 170, 5188, 1107, 1103, 1318,\r\n 26042, 2853, 1107, 1564, 118, 2466, 1699, 119, 102],\r\n device='cuda:0')\r\n```\r\nIt is driving me crazy. I ran the training module for 4 epochs and this was the best model I got so far.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | I am using the Bert2Bert EncoderDecoderModel from Huggingface for sentence simplification, but my model is generating a zero tensor of the same length regardless of the input. Could someone help me figure out what is wrong in my project? The GitHub link of my project is https://github.com/Aakash12980/Sentence-Simplification-using-Transformer. Please help me through this.
My decode module is like this:
```python
@task.command()
@click.option('--src_test', default="./drive/My Drive/Mini Project/dataset/src_test.txt", help="test source file path")
@click.option('--tgt_test', default="./drive/My Drive/Mini Project/dataset/tgt_test.txt", help="test target file path")
@click.option('--best_model', default="./drive/My Drive/Mini Project/best_model/model.pt", help="best model file path")
def test(**kwargs):
    print("Testing Model module executing...")
    logging.info(f"Test module invoked.")

    src_test = open_file(kwargs['src_test'])
    tgt_test = open_file(kwargs['tgt_test'])
    len_data = len(src_test)

    test_dataset = WikiDataset(src_test, tgt_test)
    test_dl = DataLoader(test_dataset, batch_size=TRAIN_BATCH_SIZE, collate_fn=collate_fn, shuffle=True)

    _, _, _ = load_checkpt(kwargs["best_model"])
    print("Model loaded...")
    model.to(device)
    model.eval()

    test_start_time = time.time()
    epoch_test_loss = evaluate(test_dl, 0)
    epoch_test_loss = epoch_test_loss / len_data
    print(f'avg. test loss: {epoch_test_loss:.5f} | time elapsed: {time.time() - test_start_time}')
    logging.info(f'avg. test loss: {epoch_test_loss:.5f} | time elapsed: {time.time() - test_start_time}')
    print("Test Complete!")
```
and the output it generates is this:
```
input: tensor([[ 101, 1188, 1108, 1272, 11310, 1125, 1136, 1151, 2548, 1106,
2732, 20386, 2692, 117, 2693, 1117, 16975, 117, 1105, 1103,
8183, 1571, 1105, 8183, 1545, 3784, 118, 8581, 1125, 3335,
1136, 1678, 2629, 119, 102]])
output: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
input: tensor([[ 101, 1109, 4665, 1108, 6497, 1173, 1112, 1126, 169, 169,
6548, 21718, 24226, 25285, 112, 112, 117, 1112, 1103, 1586,
1108, 169, 169, 16314, 1105, 4489, 1200, 1190, 1103, 6188,
117, 4411, 1126, 5340, 1895, 4252, 20473, 4626, 2116, 1104,
9494, 112, 112, 119, 102]])
output: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
input: tensor([[ 101, 13030, 1643, 2883, 1144, 3567, 8593, 9648, 1958, 117,
1259, 1210, 4748, 1453, 4278, 117, 1160, 1635, 8793, 112,
1635, 4278, 1105, 170, 6068, 1635, 1509, 119, 102]])
output: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
```
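For reference, the comments field above discusses masking padding in the labels so the loss ignores it; below is a minimal sketch of that pattern (hedged: variable names mirror the training loop in this report, and `tokenizer` is assumed to be the matching `BertTokenizer`):

```python
# Replace every pad token id in the target with -100 so the cross-entropy loss
# ignores padded positions (PyTorch's default ignore_index is -100).
labels = tgt_tensors.clone()
labels[labels == tokenizer.pad_token_id] = -100

loss = model(
    input_ids=src_tensors.to(device),
    attention_mask=src_attn_tensors.to(device),
    decoder_input_ids=tgt_tensors.to(device),
    decoder_attention_mask=tgt_attn_tensors.to(device),
    labels=labels.to(device),
)[0]
```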
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7896/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7895/comments | https://api.github.com/repos/huggingface/transformers/issues/7895/events | https://github.com/huggingface/transformers/pull/7895 | 724,184,781 | MDExOlB1bGxSZXF1ZXN0NTA1NjE2Nzc3 | 7,895 | [testing] slow tests should be marked as slow | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> For common tests to be marked slow, the slowest iteration of that common test must be > 15s.\r\n\r\nThis one probably needs to be more specific, if for a normal test we decide a threshold of 5s, and you get a common test runs like: 14s, 13, 13, 12s - this is still too slow. perhaps an average of all commont tests for this test should be calculated?\r\n\r\ne.g. this one:\r\n\r\n```\r\n$ grep test_model_outputs_equivalence stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'\r\n230\r\n$ grep test_model_outputs_equivalence stats.txt | wc -l\r\n46\r\n$ perl -le 'print 230/46'\r\n5\r\n```\r\nHow interesting that it hits 5sec exactly, but it's on my machine so need to re-eval on CI.\r\n\r\nHere is a one liner to do all 3 lines at once: (`$.` contains the same data as `wc -l` - line counter in perl)\r\n```\r\n$ grep test_model_outputs_equivalence stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x/$.}'\r\n5\r\n```\r\n\r\nThe full picture for this common test:\r\n```\r\n20.11s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_outputs_equivalence\r\n16.19s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_model_outputs_equivalence\r\n13.49s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_outputs_equivalence\r\n9.94s call tests/test_modeling_bert.py::BertModelTest::test_model_outputs_equivalence\r\n9.56s call tests/test_modeling_albert.py::AlbertModelTest::test_model_outputs_equivalence\r\n8.81s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence\r\n8.29s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_outputs_equivalence\r\n7.98s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_model_outputs_equivalence\r\n7.87s call tests/test_modeling_xlnet.py::XLNetModelTest::test_model_outputs_equivalence\r\n6.85s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_model_outputs_equivalence\r\n6.81s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_model_outputs_equivalence\r\n6.30s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_model_outputs_equivalence\r\n6.25s call tests/test_modeling_roberta.py::RobertaModelTest::test_model_outputs_equivalence\r\n5.90s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_model_outputs_equivalence\r\n5.81s call tests/test_modeling_electra.py::ElectraModelTest::test_model_outputs_equivalence\r\n5.79s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_model_outputs_equivalence\r\n5.69s call tests/test_modeling_xlm.py::XLMModelTest::test_model_outputs_equivalence\r\n5.35s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_model_outputs_equivalence\r\n4.64s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_model_outputs_equivalence\r\n4.34s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_model_outputs_equivalence\r\n3.79s call tests/test_modeling_dpr.py::DPRModelTest::test_model_outputs_equivalence\r\n3.71s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_model_outputs_equivalence\r\n3.61s call tests/test_modeling_bart.py::BARTModelTest::test_model_outputs_equivalence\r\n3.58s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_model_outputs_equivalence\r\n3.57s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_model_outputs_equivalence\r\n3.53s call tests/test_modeling_ctrl.py::CTRLModelTest::test_model_outputs_equivalence\r\n3.40s call 
tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence\r\n3.31s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_model_outputs_equivalence\r\n3.19s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_model_outputs_equivalence\r\n3.12s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_model_outputs_equivalence\r\n2.98s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_outputs_equivalence\r\n2.93s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_model_outputs_equivalence\r\n2.80s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_model_outputs_equivalence\r\n2.59s call tests/test_modeling_longformer.py::LongformerModelTest::test_model_outputs_equivalence\r\n2.37s call tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_model_outputs_equivalence\r\n2.17s call tests/test_modeling_funnel.py::FunnelModelTest::test_model_outputs_equivalence\r\n2.13s call tests/test_modeling_fsmt.py::FSMTModelTest::test_model_outputs_equivalence\r\n2.02s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_model_outputs_equivalence\r\n1.94s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_model_outputs_equivalence\r\n1.70s call tests/test_modeling_t5.py::T5ModelTest::test_model_outputs_equivalence\r\n1.60s call tests/test_modeling_deberta.py::DebertaModelTest::test_model_outputs_equivalence\r\n1.44s call tests/test_modeling_lxmert.py::LxmertModelTest::test_model_outputs_equivalence\r\n1.22s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_model_outputs_equivalence\r\n1.11s call tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_model_outputs_equivalence\r\n0.86s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_model_outputs_equivalence\r\n0.33s call tests/test_modeling_blenderbot.py::BlenderbotTesterMixin::test_model_outputs_equivalence\r\n```\r\n```",
"Please have a look at this recent [report](https://github.com/huggingface/transformers/issues/7885#issuecomment-712287004) and let me know if anything else should be marked as slow.\r\n",
"@LysandreJik, I think the 3 of us have had a good go at it. At your convenience please review the resulting prose and if there is something to modify please proceed to change it directly and then merge it. Much appreciated!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | As discussed in https://github.com/huggingface/transformers/issues/7885 this is an effort to make the test suite manageable execution time-wise, as the number of tests is growing and it takes much longer to complete the tests on CI.
* [x] document and set a standard for when a test needs to be marked as `@slow`
* [x] Marks slow the following tests (see the decorator sketch after this list):
- tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained (23s)
- tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation (17s)
- tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline (35s)
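For reference, marking one of these tests as slow is just a decorator on the test method; a minimal sketch follows (hedged: the import path assumes the `transformers.testing_utils` helpers available at the time of this PR, and the test body is illustrative):

```python
import unittest

from transformers.testing_utils import slow


class AutoTokenizerTest(unittest.TestCase):
    @slow  # skipped by default; only runs when RUN_SLOW=1 is set in the environment
    def test_tokenizer_from_pretrained(self):
        # ... exercises several remote checkpoints, hence too slow for the default CI run
        pass
```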
Fixes: https://github.com/huggingface/transformers/issues/7885
@LysandreJik, @sgugger, @sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7895/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7895",
"html_url": "https://github.com/huggingface/transformers/pull/7895",
"diff_url": "https://github.com/huggingface/transformers/pull/7895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7895.patch",
"merged_at": 1603362845000
} |
https://api.github.com/repos/huggingface/transformers/issues/7894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7894/comments | https://api.github.com/repos/huggingface/transformers/issues/7894/events | https://github.com/huggingface/transformers/issues/7894 | 724,184,530 | MDU6SXNzdWU3MjQxODQ1MzA= | 7,894 | AdamW | {
"login": "guoxuxu",
"id": 29363464,
"node_id": "MDQ6VXNlcjI5MzYzNDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29363464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoxuxu",
"html_url": "https://github.com/guoxuxu",
"followers_url": "https://api.github.com/users/guoxuxu/followers",
"following_url": "https://api.github.com/users/guoxuxu/following{/other_user}",
"gists_url": "https://api.github.com/users/guoxuxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoxuxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoxuxu/subscriptions",
"organizations_url": "https://api.github.com/users/guoxuxu/orgs",
"repos_url": "https://api.github.com/users/guoxuxu/repos",
"events_url": "https://api.github.com/users/guoxuxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoxuxu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, yes, this is a warning, nothing you should be afraid of."
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
Using AdamW from transformers 2.5.1 or 2.6.0, I got the following warning:
```
/home/user/anaconda3/lib/python3.7/site-packages/transformers/optimization.py:155: UserWarning: This overload of add_ is deprecated:
        add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
        add_(Tensor other, *, Number alpha) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/utils/python_arg_parser.cpp:766.)
  exp_avg.mul_(beta1).add_(1.0 - beta1, grad)
```
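For reference, this is PyTorch deprecating the positional `alpha` form of `Tensor.add_`; a hedged sketch of the equivalent keyword-argument call (illustrative, not necessarily the exact change shipped in later `transformers` releases):

```python
# Deprecated positional form (what transformers 2.5.1/2.6.0 optimization.py uses):
exp_avg.mul_(beta1).add_(1.0 - beta1, grad)

# Equivalent call with the newer signature, which avoids the UserWarning:
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
```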
- `transformers` version:
- Platform: linux
- Python version: 3.7.4
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7894/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7893/comments | https://api.github.com/repos/huggingface/transformers/issues/7893/events | https://github.com/huggingface/transformers/issues/7893 | 724,181,941 | MDU6SXNzdWU3MjQxODE5NDE= | 7,893 | pip install transformers by default install 2.5.1 | {
"login": "guoxuxu",
"id": 29363464,
"node_id": "MDQ6VXNlcjI5MzYzNDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29363464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoxuxu",
"html_url": "https://github.com/guoxuxu",
"followers_url": "https://api.github.com/users/guoxuxu/followers",
"following_url": "https://api.github.com/users/guoxuxu/following{/other_user}",
"gists_url": "https://api.github.com/users/guoxuxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoxuxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoxuxu/subscriptions",
"organizations_url": "https://api.github.com/users/guoxuxu/orgs",
"repos_url": "https://api.github.com/users/guoxuxu/repos",
"events_url": "https://api.github.com/users/guoxuxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoxuxu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Running `pip install transformers` installs the latest version for me. Which pip version are you running on?\r\n\r\nYou can install both cpu or gpu for tensorflow, if you want to run on a cpu or a gpu.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,608 | 1,608 | NONE | null | ## Environment info
https://huggingface.co/transformers/installation.html
Running `pip install transformers` by default installs 2.5.1.
Could you also specify whether the TensorFlow CPU or GPU version should be installed?
- `transformers` version:
- Platform:
- Python version: 3.7.4
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7893/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7892/comments | https://api.github.com/repos/huggingface/transformers/issues/7892/events | https://github.com/huggingface/transformers/issues/7892 | 724,172,254 | MDU6SXNzdWU3MjQxNzIyNTQ= | 7,892 | Issue with XLM-R for multiple-choice questions | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Thanks for reporting, will investigate.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @danyaljj and @LysandreJik I have the same issue. Are there any suggestions for running xlm-r (base/large) with multiple-choice qa?"
] | 1,603 | 1,610 | 1,608 | CONTRIBUTOR | null | Hi there,
I am not able to get reasonable numbers with XLM models ("xlm-roberta-base", "xlm-roberta-large") when I test them on [multiple choice questions](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice).
I suspect that it's related to issue #7774.
## Environment info
- `transformers` version: 3.3.1
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 3.6.0 (yes)
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik @sgugger @VictorSanh
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Here is the script you can try, based on [these instructions](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice):
```
export SWAG_DIR=/path/to/swag_data_dir
python ./examples/multiple-choice/run_multiple_choice.py \
--task_name swag \
--model_name_or_path xlm-roberta-base \
--do_train \
--do_eval \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir models_bert/swag_base \
--per_gpu_eval_batch_size=16 \
--per_device_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output
```
It should give you under 30% -- near random.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7892/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7891/comments | https://api.github.com/repos/huggingface/transformers/issues/7891/events | https://github.com/huggingface/transformers/pull/7891 | 724,132,905 | MDExOlB1bGxSZXF1ZXN0NTA1NTcxNDE5 | 7,891 | [RAG] Propagating of n_docs as parameter to all RagModel's related functions | {
"login": "lalitpagaria",
"id": 19303690,
"node_id": "MDQ6VXNlcjE5MzAzNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalitpagaria",
"html_url": "https://github.com/lalitpagaria",
"followers_url": "https://api.github.com/users/lalitpagaria/followers",
"following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}",
"gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions",
"organizations_url": "https://api.github.com/users/lalitpagaria/orgs",
"repos_url": "https://api.github.com/users/lalitpagaria/repos",
"events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalitpagaria/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq would be great if you can review as well",
"@patrickvonplaten Thanks for the review.\r\n\r\nwhile working on this PR I found that in `RagTokenForGeneration` we are computing `batch_size` as follows -\r\n```\r\nbatch_size = context_input_ids.shape[0] // n_docs\r\n```\r\nSo still issue can come when `((context_input_ids.shape[0] % n_docs) != 0)`, but I can't think of solution to address this.",
"> @patrickvonplaten Thanks for the review.\r\n> \r\n> while working on this PR I found that in `RagTokenForGeneration` we are computing `batch_size` as follows -\r\n> \r\n> ```\r\n> batch_size = context_input_ids.shape[0] // n_docs\r\n> ```\r\n> \r\n> So still issue can come when `((context_input_ids.shape[0] % n_docs) != 0)`, but I can't think of solution to address this.\r\n\r\n`context_input_ids` is always supposed to have a size of `n_docs` times the number of input questions ",
"> > @patrickvonplaten Thanks for the review.\r\n> > while working on this PR I found that in `RagTokenForGeneration` we are computing `batch_size` as follows -\r\n> > ```\r\n> > batch_size = context_input_ids.shape[0] // n_docs\r\n> > ```\r\n> > \r\n> > \r\n> > So still issue can come when `((context_input_ids.shape[0] % n_docs) != 0)`, but I can't think of solution to address this.\r\n> \r\n> `context_input_ids` is always supposed to have a size of `n_docs` times the number of input questions\r\n\r\nIt would be better if we mention it explicitly by assert. WDYT? \r\nIn one of my test case I used `n_docs=3` for retriever and `n_docs=2` for generator and it failed",
"> It would be better if we mention it explicitly by assert. WDYT?\r\n> In one of my test case I used `n_docs=3` for retriever and `n_docs=2` for generator and it failed\r\n\r\nYes indeed. Also if `((context_input_ids.shape[0] % n_docs) != 0)` then we should raise an error otherwise some retrieved documents will be ignored for generation.",
"Yes @lalitpagaria - it would be nice if you can add an asserte statement verifying that `n_docs` is correctly set. `n_docs` should be the same for both retriever and generator.",
"@patrickvonplaten @lhoestq Added assert at two places please verify, along with supporting unit test. Pardon my naming convention for test function, and please suggest proper name :) \r\n\r\n> n_docs should be the same for both retriever and generator.\r\n\r\nThis can't be check if `generator` does not know about `retriever` hence using this `((context_input_ids.shape[0] % n_docs) != 0)`",
"@patrickvonplaten and @lhoestq Thanks for the review. I liked the test coverage of this project. Initially I struggled but letter all worked nicely. You can merge when you want.",
"Slow tests pass => ready to merge",
"Good job @lalitpagaria !"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null |
# What does this PR do?
Fixes #7874
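For context, the review comments above discuss guarding against a retriever/generator `n_docs` mismatch; a minimal sketch of such a check (hedged: illustrative only, not necessarily the exact assertion added in this PR):

```python
def infer_batch_size(context_input_ids, n_docs):
    # context_input_ids is expected to hold n_docs retrieved contexts per input question,
    # so its first dimension must be an exact multiple of n_docs.
    assert context_input_ids.shape[0] % n_docs == 0, (
        "The first dimension of `context_input_ids` should be a multiple of `n_docs`; "
        "make sure the same `n_docs` value is used for the retriever and the generator."
    )
    return context_input_ids.shape[0] // n_docs
```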
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7891/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7891",
"html_url": "https://github.com/huggingface/transformers/pull/7891",
"diff_url": "https://github.com/huggingface/transformers/pull/7891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7891.patch",
"merged_at": 1603113352000
} |
https://api.github.com/repos/huggingface/transformers/issues/7890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7890/comments | https://api.github.com/repos/huggingface/transformers/issues/7890/events | https://github.com/huggingface/transformers/pull/7890 | 724,097,360 | MDExOlB1bGxSZXF1ZXN0NTA1NTQxNzYy | 7,890 | [wip] improved github actions workflow | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sshleifer, I don't think I can do anything here w/o repo permissions. I have no access to the runners.\r\n\r\nhttps://github.com/stas00/transformers/actions/runs/319165633\r\n\r\n> No runner matching the specified labels was found: self-hosted, multi-gpu",
"github is so borked! It now won't let me delete the new workflow - and keeps running it automatically and emailing me on each commit I make elsewhere in this project - what horrors!",
"ask on https://github.community/, they are pretty responsive."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This is a WIP work on https://github.com/huggingface/transformers/issues/7887 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7890/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7890",
"html_url": "https://github.com/huggingface/transformers/pull/7890",
"diff_url": "https://github.com/huggingface/transformers/pull/7890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7890.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7889/comments | https://api.github.com/repos/huggingface/transformers/issues/7889/events | https://github.com/huggingface/transformers/issues/7889 | 724,088,895 | MDU6SXNzdWU3MjQwODg4OTU= | 7,889 | from_pretrained incompatible with the models being downloaded | {
"login": "sebastianbujwid",
"id": 11908481,
"node_id": "MDQ6VXNlcjExOTA4NDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/11908481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebastianbujwid",
"html_url": "https://github.com/sebastianbujwid",
"followers_url": "https://api.github.com/users/sebastianbujwid/followers",
"following_url": "https://api.github.com/users/sebastianbujwid/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastianbujwid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebastianbujwid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastianbujwid/subscriptions",
"organizations_url": "https://api.github.com/users/sebastianbujwid/orgs",
"repos_url": "https://api.github.com/users/sebastianbujwid/repos",
"events_url": "https://api.github.com/users/sebastianbujwid/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebastianbujwid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I let @LysandreJik answer about the model but I have a tangential question I love to have some information about @sebastianbujwid (we may be able to help):\r\nWhy is it not possible for you to update transformers past the 2.3.0 version?",
"Hello! Indeed, there has been an issue with the TF ALBERT model that has been resolved in the following commit: https://github.com/huggingface/transformers/commit/1abd53b1aa2f15953bbbbbfefda885d1d9c9d94b#diff-143909fef69337378ffd5ccb1af28c3855ba498a11296fb62886b854d269342d\r\n\r\nThis commit is, unfortunately, only available on version v2.5.1. Is it possible for you to update your version to v2.5.1? If not, let me know specifically which models you would like and I'll give you URLs you can use to download these specific model weights, but please understand that *before v2.5.1*, the TF version of the ALBERT models will not correctly if using them with heads (e.g. SequenceClassification, TokenClassification, QuestionAnswering, etc.)",
"Thank you so much for the quick response!\r\n\r\nI did more digging into the issue and realized that I actually still have the old model weights in my `.cache`, they are just not used since the model checks the URL and downloads the new weights anyway (expects the cached weights with the newest e-tag).\r\n\r\nAlthough I haven't verified it yet, I'm quite confident I should be able to either use the cached weights I have or maybe upgrade to v2.5.1 (assuming the API hasn't changed much, for practical reasons I'd strongly prefer not to change the code since would have run additional test and re-do at least some of my main experiments).\r\nOtherwise, I might reach out again for the specific URLs. Thanks :)\r\n\r\nMaybe the issue with incompatibility is not fixable for older versions anymore but for the future maybe it would be good to do some versioning of the weight files and check for the right version when downloading/reading from the cache, in case that's not done yet? It's a shame if some code suddenly stops working without being changed.\r\nRegarding the logging, after a quick check now I see that the recent code from the main branch uses `logging.warning` instead which should at least make the issue easier to notice (sadly, it took me a few days to realize that all my problems were due to the model not loading the pre-trained weights - apparently ALBERT with random weights still works much better than random on certain problems).",
"You're absolutely right that these incompatibility issues are due to not having a way to version models. You'll be happy to hear that we're currently working on this internally :).\r\n\r\nThank you for opening such a detailed issue!"
] | 1,603 | 1,615 | 1,603 | NONE | null | ## Environment info
- `transformers` version: 2.3.0
- Platform: Linux
- Python version: 3.7.7
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): ALBERT (`TFAlbertModel`, `albert-xxlarge-v2`)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```
import logging
logging.basicConfig(level=logging.DEBUG)
from transformers import TFAlbertModel
model = TFAlbertModel.from_pretrained('albert-xxlarge-v2')
```
**Problem:**
The model does not load the weights, presumably due to some incompatibility. The output:
```
INFO:transformers.modeling_tf_utils:Layers of TFAlbertModel not initialized from pretrained model: ['pooler', 'encoder', 'embeddings']
INFO:transformers.modeling_tf_utils:Layers from pretrained model not used in TFAlbertModel: ['albert', 'predictions']
```
I suppose the model from https://s3.amazonaws.com/models.huggingface.co/bert/albert-xxlarge-v2-tf_model.h5 has been changed and is incompatible now. I cannot find the older version of the model anywhere.
## To reproduce
Steps to reproduce the behavior:
(See script above)
1. (Assume) Clean model cache
2. Enable full logging
3. Run `TFAlbertModel.from_pretrained('albert-xxlarge-v2')`
## Expected behavior
It should load the weights correctly (or at least give an error/warning), not simply fail silently!
---
## Question
Is there a way to access older Albert models (for `transformers==2.3.0`)?
Those from https://s3.amazonaws.com/models.huggingface.co/bert/albert-xxlarge-v2-tf_model.h5 seem to be incompatible.
I really cannot modify the code or upgrade the `transformers` version now, but it would totally save me if someone had the older version of the model available. Would it be possible to share it (at least all the ALBERT models)?
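For reference, a minimal sketch of a stopgap (assuming the older `config.json` and `tf_model.h5` are still sitting in the local cache, as mentioned in the comments): copy them into a directory and point `from_pretrained` at that directory instead of the model id. The path below is a placeholder.
```python
# Sketch: load the older ALBERT weights from a local directory instead of the
# remote files (which now serve newer, incompatible weights). The directory is
# a placeholder and must contain config.json and tf_model.h5.
from transformers import TFAlbertModel

local_dir = "/path/to/albert-xxlarge-v2-old"
model = TFAlbertModel.from_pretrained(local_dir)
```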
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7889/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7888/comments | https://api.github.com/repos/huggingface/transformers/issues/7888/events | https://github.com/huggingface/transformers/pull/7888 | 724,083,940 | MDExOlB1bGxSZXF1ZXN0NTA1NTI5Njky | 7,888 | [tests] fix slow bart cnn test, faster marian tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging as this just fixes tests."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | + bart slow cnn test needs to not consider special tokens
+ Marian integration tests run `setUpClass`, which downloads a tokenizer, even if they are skipped. By moving that logic into a lazily evaluated property that skipped tests never touch, we can save 40s of useless tokenizer downloads (a sketch of the idea is shown below).
cc @stas00
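A minimal sketch of the idea (not the actual diff in this PR): make the tokenizer a lazily evaluated, cached property instead of downloading it in `setUpClass`, so tests that are skipped never trigger the download. `functools.cached_property` (Python 3.8+) and the checkpoint name are illustrative assumptions.
```python
import unittest
from functools import cached_property  # illustrative; any lazy/cached property works

from transformers import AutoTokenizer


class MarianStyleIntegrationTest(unittest.TestCase):
    model_name = "Helsinki-NLP/opus-mt-en-de"  # placeholder checkpoint

    @cached_property
    def tokenizer(self):
        # Only evaluated when a test body actually accesses self.tokenizer,
        # so tests skipped before running never download anything.
        return AutoTokenizer.from_pretrained(self.model_name)

    def test_tokenizer_loads(self):
        self.assertGreater(self.tokenizer.vocab_size, 0)
```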
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7888/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7888/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7888",
"html_url": "https://github.com/huggingface/transformers/pull/7888",
"diff_url": "https://github.com/huggingface/transformers/pull/7888.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7888.patch",
"merged_at": 1603066689000
} |
https://api.github.com/repos/huggingface/transformers/issues/7887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7887/comments | https://api.github.com/repos/huggingface/transformers/issues/7887/events | https://github.com/huggingface/transformers/issues/7887 | 724,081,873 | MDU6SXNzdWU3MjQwODE4NzM= | 7,887 | Github actions: more readable/informative output | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [
"I will surely give it a try, @sshleifer ",
"So the idea of running my own gh action didn't quite work https://github.com/huggingface/transformers/pull/7890 as I don't have the right permissions to do so.\r\n\r\nBut that doesn't mean this issue can't be worked on.\r\n\r\n# Making information easier/quicker to find \r\n\r\nBasically we have the logs of the tests run and nothing else. Currently finding something in those logs is not trivial, so this part can definitely be improved. We could `tee` the logs into a file and then do a postprocessing on those (will need to figure out how to run a subsequent task despite the test suite task failure, but I'm sure in the worst case some bash magic could work around it). So we can make various clickable tasks that will show just specific outputs. I can also check whether there are some pytest extensions that might help here. Or we might write our own extension that will do just that - log different interesting things into different files.\r\n\r\nYou can tell me which info chunks you want and I can work on generating those from the log file. Currently you mentioned:\r\n- show slow tests (already in the log file, just need to make them a separate one-click accessible item, I think pytest has an option to log that into a separate report)\r\n- show just the errors\r\n\r\nanything else?\r\n\r\nThis part would be just as useful to normal CIs.\r\n\r\n# Communicating with specific user\r\n\r\na. the first problem is which user? a scheduled job may have dozens of merges in it - how do you find out which of them made the commit that caused a failure? \r\n\r\n Unless you make some kind of contact-table and look up the contact based on which test file failed. I was under the impression you wanted to contact not the specific maintainer, but the original PR creator.\r\n\r\nb. any push communications require some kind of auth, which would be very tricky with public repo. it could probably use some webhooks to ping some external server that could do something with that info. e.g. it could have the right auth and post on failure to a slack channel which devs can watch - easier than watching git hub actions (pull model).\r\n",
"@sgugger suggested that CircleCI has an Artifacts tab, where we could add various bit of info.",
"I want exactly the artifacts file of the circleci job (just uses `|tee output.txt`) available in github actions, but it sounds like that is hard.\r\n\r\nRe, communicating with specific user > can be just lysandre for now. I already added the SLACK_WEBHOOK_URL to secrets, but this should definitely not be your task given privs.\r\n",
"To automate the finding of the specific user we will need to `git bisect` for the range of commits since the last check with a rerun of just the failing tests , which will give us the first hash, and thus the PR that caused it. \r\n\r\nThis efficient approach may not work 100% of the time, if the failing test happens due to some other test (test dependency) affecting it, so if this speedy reduction doesn't work, then the full test suite will need to be re-run in conjunction with `git bisect`.",
"WIP https://github.com/huggingface/transformers/pull/7995"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | ### TLDR
The goal of this Issue is to make the output of https://github.com/huggingface/transformers/runs/1269744369?check_suite_focus=true more readable, without changing the fact that the job failed.
#### Current Workflow:
I randomly remember to check github actions every 3 days, scroll a long way to see what broke, figure out who to assign (sometimes easy/sometimes hard), and then make issues or slack that person. The goal of this issue is to make that process easier by attacking the first two steps, but ideas for automating parts of the last two steps are welcome.
#### Goals:
- user can click on github actions and see what broke quickly.
- (Optional) user can see 50 slowest tests
- (HARD/Optional) user can subscribe to slack message/email that pings them when action breaks (Related: https://github.community/t/subscribing-to-actions-notifications/135328)
#### How to attack this PR
I would make a new github actions workflow file that runs only on your branch and produces the output/artifact files you desire. Then, once it is working, apply similar logic to `.github/workflows/self-scheduled.yml`. Once that PR is merged, send a follow-up PR applying the same logic to `.github/workflows/self-push.yml`.
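One possible shape for the post-processing step (a sketch; it assumes the test run's stdout has been saved to a log file, and the path/threshold are illustrative): pull the slowest tests out of a `pytest --durations=0` log so they can be surfaced as a small, readable artifact.
```python
# Sketch: extract the N slowest tests from a saved `pytest --durations=0` log.
# Lines in that report look like: "76.92s call tests/test_x.py::TestX::test_y"
import re
import sys


def slowest_tests(log_path, top_n=50):
    pattern = re.compile(r"^\s*(\d+\.\d+)s\s+call\s+(\S+)")
    entries = []
    with open(log_path) as log:
        for line in log:
            match = pattern.match(line)
            if match:
                entries.append((float(match.group(1)), match.group(2)))
    return sorted(entries, reverse=True)[:top_n]


if __name__ == "__main__":
    for duration, test_id in slowest_tests(sys.argv[1]):
        print(f"{duration:8.2f}s {test_id}")
```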
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7887/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7886/comments | https://api.github.com/repos/huggingface/transformers/issues/7886/events | https://github.com/huggingface/transformers/issues/7886 | 724,079,546 | MDU6SXNzdWU3MjQwNzk1NDY= | 7,886 | Question answering example errors with BrokenPipeError: [Errno 32] Broken pipe | {
"login": "Code4SAFrankie",
"id": 6196390,
"node_id": "MDQ6VXNlcjYxOTYzOTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6196390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Code4SAFrankie",
"html_url": "https://github.com/Code4SAFrankie",
"followers_url": "https://api.github.com/users/Code4SAFrankie/followers",
"following_url": "https://api.github.com/users/Code4SAFrankie/following{/other_user}",
"gists_url": "https://api.github.com/users/Code4SAFrankie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Code4SAFrankie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Code4SAFrankie/subscriptions",
"organizations_url": "https://api.github.com/users/Code4SAFrankie/orgs",
"repos_url": "https://api.github.com/users/Code4SAFrankie/repos",
"events_url": "https://api.github.com/users/Code4SAFrankie/events{/privacy}",
"received_events_url": "https://api.github.com/users/Code4SAFrankie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Same problem",
"This was patched on `master`, cannot reproduce anymore. Can you reproduce on `master`?",
"> This was patched on `master`, cannot reproduce anymore. Can you reproduce on `master`?\r\n\r\nI installed the newest version (3.4.0) by pip, but this problem still arises.",
"Facing the same issue with 3.5.1",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Fixed on version 4.0.1",
"transformers 4.0.1??"
] | 1,603 | 1,696 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.1
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: No
### Who can help
No option for question answering in your list
## Information
The question-answering pipeline example was used.
The problem arises when using:
the official example scripts
The task I am working on is:
question-answering
## To reproduce
Steps to reproduce the behavior:
1. Run
```
from transformers import pipeline
# From https://huggingface.co/transformers/usage.html
nlp = pipeline("question-answering")
context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the `run_squad.py`.
"""
print(nlp(question="What is extractive question answering?", context=context))
print(nlp(question="What is a good example of a question answering dataset?", context=context))
```
2. Get error:
```
I1018 21:00:26.369285 15020 filelock.py:274] Lock 2310999501064 acquired on C:\Users\User/.cache\torch\transformers\c2341a51039a311cb3c7dc71b3d21970e6a127876f067f379f8bcd77ef870389.6a09face0659d64f93c9919f323e2ad4543ca9af5d2417b1bfb1a36f2f6b94a4.lock
Downloading: 100%|███████████████████████████████████████████████████████████████████████| 473/473 [00:00<00:00, 118kB/s]
I1018 21:00:27.404981 15020 filelock.py:318] Lock 2310999501064 released on C:\Users\User/.cache\torch\transformers\c2341a51039a311cb3c7dc71b3d21970e6a127876f067f379f8bcd77ef870389.6a09face0659d64f93c9919f323e2ad4543ca9af5d2417b1bfb1a36f2f6b94a4.lock
I1018 21:00:28.506618 15020 filelock.py:274] Lock 2310999482320 acquired on C:\Users\User/.cache\torch\transformers\cee054f6aafe5e2cf816d2228704e326446785f940f5451a5b26033516a4ac3d.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1.lock
Downloading: 100%|█████████████████████████████████████████████████████████████████████| 213k/213k [00:00<00:00, 299kB/s]
I1018 21:00:30.234217 15020 filelock.py:318] Lock 2310999482320 released on C:\Users\User/.cache\torch\transformers\cee054f6aafe5e2cf816d2228704e326446785f940f5451a5b26033516a4ac3d.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1.lock
I1018 21:00:31.294145 15020 filelock.py:274] Lock 2310999651048 acquired on C:\Users\User/.cache\torch\transformers\414816afc2ab8922d082f893dbf90bcb9a43f09838039249c6c8ca3e8b77921f.455d944f3d1572ab55ed579849f751cf37f303e3388980a42d94f7cd57a4e331.lock
Downloading: 100%|██████████████████████████████████████████████████████████████████████| 230/230 [00:00<00:00, 51.5kB/s]
I1018 21:00:32.335124 15020 filelock.py:318] Lock 2310999651048 released on C:\Users\User/.cache\torch\transformers\414816afc2ab8922d082f893dbf90bcb9a43f09838039249c6c8ca3e8b77921f.455d944f3d1572ab55ed579849f751cf37f303e3388980a42d94f7cd57a4e331.lock
I1018 21:00:33.984491 15020 filelock.py:274] Lock 2310999583152 acquired on C:\Users\User/.cache\torch\transformers\3efcb155a9475fe6b9318b8a8d5278bce1972d30291f97f2a8faeb50d02acabc.087b9fac49619019e540876a2d8ecb497884246b5aa8c9e8b7a0292cfbbe7c52.lock
Downloading: 100%|████████████████████████████████████████████████████████████████████| 261M/261M [00:42<00:00, 6.14MB/s]
I1018 21:01:16.665997 15020 filelock.py:318] Lock 2310999583152 released on C:\Users\User/.cache\torch\transformers\3efcb155a9475fe6b9318b8a8d5278bce1972d30291f97f2a8faeb50d02acabc.087b9fac49619019e540876a2d8ecb497884246b5aa8c9e8b7a0292cfbbe7c52.lock
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 105, in spawn_main
Traceback (most recent call last):
File "e:\Work\Python\transformers_question_answering.py", line 11, in <module>
exitcode = _main(fd)print(nlp(question="What is extractive question answering?", context=context))
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 114, in _main
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\pipelines.py", line 1691, in __call__
prepare(preparation_data)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 225, in prepare
for example in examples
_fixup_main_from_path(data['init_main_from_path']) File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\pipelines.py", line 1691, in <listcomp>
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "E:\WPy-3710\python-3.7.1.amd64\lib\runpy.py", line 263, in run_path
for example in examples
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\data\processors\squad.py", line 345, in squad_convert_examples_to_features
pkg_name=pkg_name, script_name=fname)
File "E:\WPy-3710\python-3.7.1.amd64\lib\runpy.py", line 96, in _run_module_code
with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p:
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\context.py", line 119, in Pool
mod_name, mod_spec, pkg_name, script_name)
File "E:\WPy-3710\python-3.7.1.amd64\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "e:\Work\Python\transformers_question_answering.py", line 11, in <module>
print(nlp(question="What is extractive question answering?", context=context))
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\pipelines.py", line 1691, in __call__
for example in examples
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\pipelines.py", line 1691, in <listcomp>
for example in examples
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\data\processors\squad.py", line 345, in squad_convert_examples_to_features
with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p:
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())context=self.get_context())
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 177, in __init__
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 177, in __init__
self._repopulate_pool()self._repopulate_pool()
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 238, in _repopulate_pool
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 238, in _repopulate_pool
self._wrap_exception)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 257, in _repopulate_pool_static
self._wrap_exception)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 257, in _repopulate_pool_static
w.start()
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\process.py", line 112, in start
w.start()
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)self._popen = self._Popen(self)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\context.py", line 322, in _Popen
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
return Popen(process_obj)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)reduction.dump(process_obj, to_child)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\reduction.py", line 60, in dump
_check_not_importing_main()
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
```
## Expected behavior
Output:
```
{'score': 0.622232091629833, 'start': 34, 'end': 96, 'answer': 'the task of extracting an answer from a text given a question.'}
{'score': 0.5115299158662765, 'start': 147, 'end': 161, 'answer': 'SQuAD dataset,'}
```
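For reference, the workaround the traceback itself points to (a sketch, not verified on Windows here): `multiprocessing` uses the `spawn` start method on Windows, so the pipeline call that creates worker processes (`squad_convert_examples_to_features`) must only run under the `__main__` guard.
```python
# Sketch: guard the entry point so spawned worker processes can safely
# re-import this module (the idiom suggested by the error message above).
from transformers import pipeline


def main():
    nlp = pipeline("question-answering")
    context = "Extractive Question Answering is the task of extracting an answer from a text given a question."
    print(nlp(question="What is extractive question answering?", context=context))


if __name__ == "__main__":
    main()
```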
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7886/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7886/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7885/comments | https://api.github.com/repos/huggingface/transformers/issues/7885/events | https://github.com/huggingface/transformers/issues/7885 | 724,074,961 | MDU6SXNzdWU3MjQwNzQ5NjE= | 7,885 | [testing] the test suite is many times slower than 2 weeks ago | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please note that this is runtime on my machine and not CIs - so make sure you're evaluating the report relative to itself and not CI. Once the PR is merged we will start getting CI reports.\r\n\r\nTotal run time 4248s.\r\n\r\nSo it looks like on the torch-side `test_tokenization_fast.py` accounts for the main culprit adding up to 1300 secs. ~1/3rd of all test run.\r\n\r\nAnd the bulk of slowdown is tf tests.\r\n\r\n|time| tests|\r\n|-------|---------------------------------|\r\n|1300 | test_tokenization_fast.py|\r\n|1089 | the rest of torch tests|\r\n|1859 | tf tests|\r\n|-------------|-----|\r\n|4248 | Total|\r\n\r\nAnother thing I noticed `tests/test_modeling_marian.py` spends almost 40 secs in setup (11 x 3.3secs) - that's very slow:\r\n\r\n```\r\n3.95s setup tests/test_modeling_marian.py::TestMarian_FR_EN::test_batch_generation_fr_en\r\n3.57s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_auto_config\r\n3.40s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_batch_generation_en_ROMANCE_multi\r\n3.35s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_batch_generation_en_de\r\n3.31s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_tokenizer_handles_empty\r\n3.23s setup tests/test_modeling_marian.py::TestMarian_en_zh::test_batch_generation_eng_zho\r\n3.23s setup tests/test_modeling_marian.py::TestMarian_EN_FR::test_batch_generation_en_fr\r\n3.20s setup tests/test_modeling_marian.py::TestMarian_RU_FR::test_batch_generation_ru_fr\r\n3.13s setup tests/test_modeling_marian.py::TestMarian_MT_EN::test_batch_generation_mt_en\r\n3.12s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_forward\r\n3.06s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline\r\n```",
"I fixed marian in https://github.com/huggingface/transformers/pull/7888. Do you want to try to fix `test_tokenization_fast` or let somebody else?",
"> Do you want to try to fix test_tokenization_fast\r\n\r\nI will give it a go.\r\n\r\n**edit**: Except it was just removed by the merge that just happened. So I have to start from scratch.",
"https://github.com/huggingface/transformers/pull/7659 may have fixed the slow tokenization tests. Checking the most recent run it's back to ~2min for the torch-only job.\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/13951/workflows/244749ce-d1ee-488f-a59d-d891fbc38ed6/jobs/100800\r\nI will check a few more and close it if that was the culprit.\r\n\r\n",
"this test for some reason has `@slow` commented out - it takes 20+ seconds - can we put it back on?\r\nThis is not a test that tests functionality that is going to change much, so should be safe to turn it off for normal CIs. \r\nhttps://github.com/huggingface/transformers/blob/master/tests/test_tokenization_auto.py#L42\r\n\r\n```\r\npytest --durations=0 tests/test_tokenization_auto.py &\r\n22.15s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained\r\n4.42s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_non_existent\r\n2.75s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_model_type\r\n2.57s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_tokenizer_class\r\n2.36s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained_identifier\r\n2.06s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_from_pretrained_use_fast_toggle\r\n2.05s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_with_correct_config\r\n```",
"This one should probably also be `@slow` - all the other tests around it are `@slow`:\r\n```\r\n15.51s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation\r\n```",
"Wrote a one liner to calculate the sub-totals for whatever pattern in the output of `pytest --durations=0` stats, as in:\r\n```\r\n22.15s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained\r\n4.42s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_non_existent\r\n2.75s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_model_type\r\n2.57s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_tokenizer_class\r\n\r\n```\r\nTotal runtime:\r\n```\r\n$ cat stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'\r\n3308\r\n```\r\nTotal tf runtime:\r\n```\r\ngrep _tf_ stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'\r\n1609\r\n```",
"It this common test a good candidate for `@slow`?\r\n```\r\ngrep test_model_outputs_equivalence stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'\r\n230\r\n```\r\nAt least a few of them are quite slow:\r\n```\r\n20.11s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_outputs_equivalence\r\n16.19s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_model_outputs_equivalence\r\n13.49s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_outputs_equivalence\r\n9.94s call tests/test_modeling_bert.py::BertModelTest::test_model_outputs_equivalence\r\n9.56s call tests/test_modeling_albert.py::AlbertModelTest::test_model_outputs_equivalence\r\n8.81s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence\r\n8.29s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_outputs_equivalence\r\n7.98s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_model_outputs_equivalence\r\n7.87s call tests/test_modeling_xlnet.py::XLNetModelTest::test_model_outputs_equivalence\r\n6.85s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_model_outputs_equivalence\r\n6.81s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_model_outputs_equivalence\r\n6.30s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_model_outputs_equivalence\r\n6.25s call tests/test_modeling_roberta.py::RobertaModelTest::test_model_outputs_equivalence\r\n5.90s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_model_outputs_equivalence\r\n5.81s call tests/test_modeling_electra.py::ElectraModelTest::test_model_outputs_equivalence\r\n5.79s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_model_outputs_equivalence\r\n5.69s call tests/test_modeling_xlm.py::XLMModelTest::test_model_outputs_equivalence\r\n5.35s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_model_outputs_equivalence\r\n4.64s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_model_outputs_equivalence\r\n4.34s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_model_outputs_equivalence\r\n3.79s call tests/test_modeling_dpr.py::DPRModelTest::test_model_outputs_equivalence\r\n3.71s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_model_outputs_equivalence\r\n3.61s call tests/test_modeling_bart.py::BARTModelTest::test_model_outputs_equivalence\r\n3.58s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_model_outputs_equivalence\r\n3.57s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_model_outputs_equivalence\r\n3.53s call tests/test_modeling_ctrl.py::CTRLModelTest::test_model_outputs_equivalence\r\n3.40s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence\r\n3.31s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_model_outputs_equivalence\r\n3.19s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_model_outputs_equivalence\r\n3.12s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_model_outputs_equivalence\r\n2.98s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_outputs_equivalence\r\n2.93s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_model_outputs_equivalence\r\n2.80s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_model_outputs_equivalence\r\n2.59s call tests/test_modeling_longformer.py::LongformerModelTest::test_model_outputs_equivalence\r\n2.37s call 
tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_model_outputs_equivalence\r\n2.17s call tests/test_modeling_funnel.py::FunnelModelTest::test_model_outputs_equivalence\r\n2.13s call tests/test_modeling_fsmt.py::FSMTModelTest::test_model_outputs_equivalence\r\n2.02s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_model_outputs_equivalence\r\n1.94s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_model_outputs_equivalence\r\n1.70s call tests/test_modeling_t5.py::T5ModelTest::test_model_outputs_equivalence\r\n1.60s call tests/test_modeling_deberta.py::DebertaModelTest::test_model_outputs_equivalence\r\n1.44s call tests/test_modeling_lxmert.py::LxmertModelTest::test_model_outputs_equivalence\r\n1.22s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_model_outputs_equivalence\r\n1.11s call tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_model_outputs_equivalence\r\n0.86s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_model_outputs_equivalence\r\n0.33s call tests/test_modeling_blenderbot.py::BlenderbotTesterMixin::test_model_outputs_equivalence\r\n```",
"Here is another possible candidate for `@slow`:\r\n\r\n```\r\ngrep test_torchscript stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'\r\n289\r\n```\r\n\r\n```\r\n18.89s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions\r\n18.65s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript\r\n11.37s call tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_hidden_state\r\n11.02s call tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_attentions\r\n9.69s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript\r\n8.90s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state\r\n8.35s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript\r\n7.82s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript\r\n7.78s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions\r\n7.71s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state\r\n7.71s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_attentions\r\n7.68s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_attentions\r\n7.65s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript\r\n7.37s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript\r\n7.12s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript_output_attentions\r\n7.01s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_hidden_state\r\n6.61s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript_output_hidden_state\r\n6.51s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_attentions\r\n5.48s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_torchscript_output_hidden_state\r\n5.06s call tests/test_modeling_xlm.py::XLMModelTest::test_torchscript\r\n4.78s call tests/test_modeling_xlnet.py::XLNetModelTest::test_torchscript_output_attentions\r\n4.71s call tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions\r\n4.71s call tests/test_modeling_xlnet.py::XLNetModelTest::test_torchscript_output_hidden_state\r\n4.66s call tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state\r\n4.59s call tests/test_modeling_xlnet.py::XLNetModelTest::test_torchscript\r\n4.44s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions\r\n4.32s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions\r\n3.98s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_hidden_state\r\n3.73s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state\r\n3.61s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions\r\n3.55s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state\r\n3.55s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript\r\n3.54s call tests/test_modeling_bert.py::BertModelTest::test_torchscript\r\n3.50s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_hidden_state\r\n3.46s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript\r\n3.36s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state\r\n3.33s call 
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript\r\n3.31s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_attentions\r\n3.27s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_attentions\r\n3.25s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_hidden_state\r\n3.07s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions\r\n2.58s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state\r\n2.56s call tests/test_modeling_t5.py::T5ModelTest::test_torchscript_output_hidden_state\r\n2.50s call tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions\r\n2.30s call tests/test_modeling_t5.py::T5ModelTest::test_torchscript_output_attentions\r\n2.28s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions\r\n2.19s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript_output_hidden_state\r\n2.13s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_torchscript_output_attentions\r\n2.02s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_hidden_state\r\n1.87s call tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state\r\n1.82s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript_output_attentions\r\n1.78s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript\r\n1.69s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript\r\n1.51s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript\r\n1.34s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_hidden_state\r\n0.89s call tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript\r\n0.79s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript\r\n0.05s call tests/test_modeling_lxmert.py::LxmertModelTest::test_torchscript_output_attentions\r\n0.05s call tests/test_modeling_lxmert.py::LxmertModelTest::test_torchscript_output_hidden_state\r\n0.01s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_torchscript_output_attentions\r\n0.01s call tests/test_modeling_ctrl.py::CTRLModelTest::test_torchscript_output_attentions\r\n0.01s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_torchscript_output_hidden_state\r\n```",
"I am fine with marking any of these @slow , but don't care as much now that the issue is resolved.\r\n\r\nI do think we should have a repo-wide rule about what should be slow, and you have to write a comment if you want to override it.\r\n\r\n#### Proposed Rule for testing.rst\r\n\r\n+ All tests that take longer than 5s should be marked slow, that way we can save a lot of back and forth.\r\n+ For common tests to be marked slow, the slowest iteration of that common test must be > 15s.\r\n\r\n5s/15s was arbitrary, don't care much what the value is. WDYT @LysandreJik @sgugger ?\r\n\r\n",
"That's a fabulous suggestion!\r\n\r\nA few caveats for `testing.rst`:\r\n* < 5s should include model download overhead - my numbers above exclude this, so once we get the data from CI it'll be the true measurement.\r\n* 5s as measured on CI, since otherwise each hardware is different\r\n",
"While we are at it - there is a ton of very slow tf tests - I suppose the same rule applies there, right?",
"As much as I love moving quickly, we need to wait for others to agree to the rule before we apply it.\r\nMy proposed rule does not differentiate between tf and torch.",
"My communication wasn't clear - I meant to do that after we agreed on the threshold and regardless this PR https://github.com/huggingface/transformers/pull/7884 needs to be merged first to perform the correct measurements. \r\n\r\nI just saw that there was a **lot** of tf tests that were very slow and which were not marked as such, so I thought perhaps there was a special reason for them not to be `@slow`.",
"I'm not sure setting up a 5/15 or any specific time requirement on tests to classify them as slow would be best. Some tests, like the `test_model_outputs_equivalence` are important, and running them on contributors' PR when their changes affect the modeling internals is too.\r\n\r\nI think the following proposition would be more suited: \r\n\r\nif the test is focused on one of the library's internal components (e.g., modeling files, tokenization files, pipelines), then we should run that test in the non-slow test suite. If it's focused on an other aspect of the library, such as the documentation, the examples, then we should run these tests in the slow test suite. And then, to refine this approach we should have exceptions:\r\n\r\n- All tests that need a specific set of weights (e.g., model or tokenizer integration tests, pipeline integration tests) should be set to slow.\r\n- All tests that need to do a training (e.g, trainer integration tests) should be set to slow.\r\n- We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly long, and set them to slow. Some examples are some auto modeling tests, which save and load large files to disk, which are set to slow.\r\n- Others?\r\n\r\nTo that end, we should aim for all the non-slow tests to cover entirely the different internals, while making sure that the tests keep a fast execution time. Having some very small models in the tests (e.g, 2 layers, 10 vocab size, etc.) helps in that regard, as does having dummy sets of weights like the `sshleifer/tiny-xxx-random` weights. On that front, there seems to be something fishy going on with the MobileBERT model, as it's supposed to be an efficient model but takes a while to be tested. There's probably something to do for this model.\r\n\r\nWilling to iterate on this wording, or specify/change some aspects if you think of something better.\r\n\r\nFollowing this approach:\r\n\r\nFor the [tokenization_auto tests](https://github.com/huggingface/transformers/issues/7885#issuecomment-711443513), we can definitely uncomment the `@slow`.\r\n\r\nFor the [MonoColumnInputTestCase](https://github.com/huggingface/transformers/issues/7885#issuecomment-711444043), we can also set it as a slow test.\r\n\r\n",
"I would change all things that need to do a training to all thing that need to do a real training.\r\nI spent a lot of time making a mock training fast for the tests of the Trainer and I don't want that marked as slow ;-)",
"OK, sounds like @stas00 can mark a few at slow.\r\nLongformer test also uses 5 layers for some reason, not sure if that matters.",
"@LysandreJik, just a clarification - so you propose not to have a fixed speed threshold in any of the \"clauses\". i.e. for non-essential tests as defined by you they should be marked as slow regardless of their speed, correct? I suppose this is smart since even very fast tests still add up to a lot since there could be many of them.",
"OK, so here is the full non-slow run's report on CI:\r\nhttps://pastebin.com/8pkaZKjH (quoted from [this report](https://circleci.com/api/v1.1/project/github/huggingface/transformers/101622/output/108/0?file=true&allocation-id=5f8db77a5d41f9372b3f92cf-0-build%2F6825DA7E))\r\n\r\nThe top slow ones are (cut off at 10sec):\r\n```\r\n131.69s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_train_pipeline_custom_model\r\n101.16s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_graph_mode\r\n79.24s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_compile_tf_model\r\n40.68s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_keras_save_load\r\n38.58s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_compile_tf_model\r\n35.05s call tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline\r\n32.99s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_train_pipeline_custom_model\r\n27.40s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_graph_mode\r\n26.53s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_compile_tf_model\r\n26.17s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_graph_mode\r\n25.97s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_model_outputs_equivalence\r\n20.57s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_train_pipeline_custom_model\r\n18.71s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_pt_tf_model_equivalence\r\n18.59s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions\r\n17.73s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation\r\n17.72s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_train_pipeline_custom_model\r\n17.27s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript\r\n17.15s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_compile_tf_model\r\n16.72s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state\r\n16.49s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_train_pipeline_custom_model\r\n16.20s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs\r\n15.91s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_pt_tf_model_equivalence\r\n15.64s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_compile_tf_model\r\n15.52s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_compile_tf_model\r\n15.36s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_pretokenized_inputs\r\n15.32s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs\r\n15.28s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_compile_tf_model\r\n15.28s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_outputs_equivalence\r\n15.24s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_pt_tf_model_equivalence\r\n15.14s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_pair_input\r\n14.94s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_train_pipeline_custom_model\r\n14.62s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_compile_tf_model\r\n14.34s call 
tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_graph_mode\r\n14.32s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_model_outputs_equivalence\r\n14.12s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_add_special_tokens\r\n13.75s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_compile_tf_model\r\n13.72s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model\r\n13.67s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_graph_mode\r\n13.33s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_single_input\r\n13.12s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_attention_outputs\r\n11.84s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_save_load\r\n11.78s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_attention_outputs\r\n11.69s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_compile_tf_model\r\n11.56s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_attention_outputs\r\n11.54s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_train_pipeline_custom_model\r\n11.41s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_train_pipeline_custom_model\r\n11.40s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_train_pipeline_custom_model\r\n11.35s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_train_pipeline_custom_model\r\n11.30s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_train_pipeline_custom_model\r\n10.82s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_graph_mode\r\n10.77s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_save_load\r\n10.74s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_train_pipeline_custom_model\r\n10.71s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_graph_mode\r\n10.60s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_compile_tf_model\r\n10.57s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_save_load\r\n10.55s call tests/test_modeling_blenderbot.py::Blenderbot90MIntegrationTests::test_90_generation_from_short_input\r\n10.39s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_graph_mode\r\n10.24s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_graph_mode\r\n10.08s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs\r\n```\r\n\r\n@sshleifer, here is a highlight for you:\r\n```\r\n35.05s call tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline\r\n```\r\n\r\nOther slow torch tests by group:\r\n```\r\n18.59s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions\r\n17.27s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript\r\n16.72s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state\r\n\r\n17.73s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation\r\n\r\n15.36s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_pretokenized_inputs\r\n15.14s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_pair_input\r\n14.12s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_add_special_tokens\r\n13.33s call 
tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_single_input\r\n\r\n10.55s call tests/test_modeling_blenderbot.py::Blenderbot90MIntegrationTests::test_90_generation_from_short_input\r\n```",
"> @LysandreJik, just a clarification - so you propose not to have a fixed speed threshold in any of the \"clauses\". i.e. for non-essential tests as defined by you they should be marked as slow regardless of their speed, correct? I suppose this is smart since even very fast tests still add up to a lot since there could be many of them.\r\n\r\nYes, I think that would be best! I don't think there are many non-essential tests that are not slow though. We'd still like to get full coverage of the library's internals using only non-@slow tests, so getting these tests below a certain time threshold would still be important so that every PR could get quick feedback on the CI's status.",
"Thank you for that clarification, @LysandreJik\r\n\r\nPlease have a look at how your suggestions have been integrated into the testing doc:\r\nhttps://github.com/huggingface/transformers/pull/7895/files"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | We are going to have CI-side runtime reports once https://github.com/huggingface/transformers/pull/7884 is merged, but we can already start looking at what caused a 4-5x slowdown in the test suite about 10 days ago. I'm not sure of the exact moment, but I checked a few reports and it appears that the change happened around Oct 8th, +/- a few days.
e.g. before:
https://app.circleci.com/pipelines/github/huggingface/transformers/13323/workflows/5984ea0e-e280-4a41-bc4a-b4a3d72fc411/jobs/95699
after:
https://app.circleci.com/pipelines/github/huggingface/transformers/13521/workflows/d235c864-66fa-4408-a787-2efab850a781/jobs/97329
@sshleifer suggested diagnosing this by adding the pytest `--durations=N` flag, though if the cause is a missing `@slow` marker it won't show up on my machine, because I already have all the models pre-downloaded, so the timings below reflect the slow execution only:
Here is the report from my machine, running all tests normally:
```
$ pytest -n 3 --durations=0 tests
[...]
76.92s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_train_pipeline_custom_model
54.38s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_graph_mode
49.85s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_compile_tf_model
48.98s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_save_pretrained
44.11s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_compile_tf_model
38.42s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_graph_mode
35.94s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_tokenization_python_rust_equals
35.86s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_create_token_type_ids
35.81s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_embeded_special_tokens
35.58s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_max_length_equal
35.54s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_padding
35.36s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_is_fast
35.14s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_pretokenized_inputs
35.10s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_special_tokens_map_equal
35.07s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_num_special_tokens_to_add_equal
35.02s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_build_inputs_with_special_tokens
34.94s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_prepare_for_model
31.60s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_compile_tf_model
31.03s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_train_pipeline_custom_model
29.11s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_compile_tf_model
29.10s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_train_pipeline_custom_model
27.62s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_pt_tf_model_equivalence
26.36s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_compile_tf_model
25.12s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_save_pretrained
24.85s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_graph_mode
24.66s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_compile_tf_model
24.04s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_tokenization_python_rust_equals
23.15s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained
23.10s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_graph_mode
23.08s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_padding
22.99s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_compile_tf_model
22.78s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_train_pipeline_custom_model
22.69s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_keras_save_load
22.67s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_train_pipeline_custom_model
22.43s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_pretokenized_inputs
22.38s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_create_token_type_ids
22.35s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_prepare_for_model
22.28s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
22.25s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_max_length_equal
22.19s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_embeded_special_tokens
22.06s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_special_tokens_map_equal
21.95s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_is_fast
21.92s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_build_inputs_with_special_tokens
21.85s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_num_special_tokens_to_add_equal
21.61s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_graph_mode
21.49s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_add_special_tokens
21.32s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_add_tokens
21.21s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_alignement_methods
21.09s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_batch_encode_dynamic_overflowing
21.06s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_fast_only_inputs
20.95s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
20.86s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_offsets_mapping
20.06s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_outputs_equivalence
20.01s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_train_pipeline_custom_model
19.62s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_attention_outputs
19.39s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
18.78s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_save_pretrained
18.63s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_train_pipeline_custom_model
18.36s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_compile_tf_model
18.08s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_tokenization_python_rust_equals
17.85s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_save_load
17.54s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_train_pipeline_custom_model
17.39s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_train_pipeline_custom_model
17.28s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_embeded_special_tokens
17.25s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_special_tokens_map_equal
16.88s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_train_pipeline_custom_model
16.84s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_graph_mode
16.74s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript
16.73s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_padding
16.63s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_max_length_equal
16.56s call tests/test_modeling_fsmt.py::FSMTModelTest::test_lm_head_model_random_beam_search_generate
16.55s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
16.53s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_build_inputs_with_special_tokens
16.53s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_is_fast
16.49s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_create_token_type_ids
16.45s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_num_special_tokens_to_add_equal
16.43s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_graph_mode
16.42s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_prepare_for_model
16.09s call tests/test_tokenization_albert.py::AlbertTokenizationTest::test_pretokenized_inputs
16.02s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_model_outputs_equivalence
15.80s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_train_pipeline_custom_model
15.50s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation
15.30s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_train_pipeline_custom_model
15.00s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_graph_mode
14.96s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_compile_tf_model
14.07s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_train_pipeline_custom_model
14.03s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_pt_tf_model_equivalence
13.77s call tests/test_modeling_rag.py::RagDPRT5Test::test_model_generate
13.29s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_hidden_states_output
13.10s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_outputs_equivalence
12.69s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_model_outputs_equivalence
12.43s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_save_pretrained
11.76s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_compile_tf_model
11.73s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_graph_mode
11.66s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_pt_tf_model_equivalence
11.63s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_special_tokens_map_equal
11.60s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_batch_encode_dynamic_overflowing
11.51s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_add_special_tokens
11.50s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_compile_tf_model
11.36s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_prepare_for_model
11.34s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_add_tokens
11.23s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_tokenization_python_rust_equals
11.19s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_fast_only_inputs
11.17s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_offsets_mapping
11.09s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_xla
11.05s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_alignement_methods
11.04s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_is_fast
10.95s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_offsets_with_special_characters
10.81s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_embeded_special_tokens
10.71s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_max_length_equal
10.59s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_build_inputs_with_special_tokens
10.59s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_num_special_tokens_to_add_equal
10.56s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_pt_tf_model_equivalence
10.42s call tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_hidden_state
10.39s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_create_token_type_ids
10.36s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_compile_tf_model
10.34s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_attention_outputs
10.31s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_model_outputs_equivalence
10.25s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
10.15s call tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_attentions
9.78s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_pt_tf_model_equivalence
9.76s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs
9.76s call tests/test_modeling_bert.py::BertModelTest::test_model_outputs_equivalence
9.72s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
9.50s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
9.37s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_pt_tf_model_equivalence
9.31s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_batch_encode_dynamic_overflowing
9.31s call tests/test_modeling_albert.py::AlbertModelTest::test_model_outputs_equivalence
9.30s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_save_load
9.10s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_model_outputs_equivalence
9.01s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_keras_save_load
8.88s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_add_tokens
8.81s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_offsets_mapping
8.80s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_keras_save_load
8.73s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_add_special_tokens
8.66s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_fast_only_inputs
8.60s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_pt_tf_model_equivalence
8.59s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_alignement_methods
8.57s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_model_outputs_equivalence
8.50s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_compile_tf_model
8.34s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_summarization
8.33s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_lm_head_model_random_beam_search_generate
8.17s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript
8.01s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_model_outputs_equivalence
7.96s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_model_outputs_equivalence
7.88s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_attention_outputs
7.87s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_graph_mode
7.84s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_attention_outputs
7.84s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
7.58s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_attentions
7.57s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_train_pipeline_custom_model
7.51s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_save_load
7.47s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_attention_outputs
7.46s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_keyword_and_dict_args
7.36s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_save_load
7.36s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_determinism
7.32s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_graph_mode
7.23s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_attention_outputs
7.16s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_text_generation
7.05s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_attentions
7.04s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_resize_token_embeddings
7.02s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_lm_head_model_random_beam_search_generate
6.90s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_train_pipeline_custom_model
6.88s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_hidden_states_output
6.82s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_save_load
6.73s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_lm_head_model_random_beam_search_generate
6.70s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence
6.60s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_save_load
6.57s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_keras_save_load
6.48s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_graph_mode
6.47s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_attention_outputs
6.44s call tests/test_modeling_encoder_decoder.py::GPT2EncoderDecoderModelTest::test_encoder_decoder_model_generate
6.44s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_inputs_embeds
6.35s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_compile_tf_model
6.25s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_compile_tf_model
6.25s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_lm_head_model_random_beam_search_generate
6.11s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_loss_computation
6.05s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_attentions
5.98s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_save_load
5.93s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_train_pipeline_custom_model
5.77s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_compile_tf_model
5.72s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
5.69s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_hidden_state
5.65s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_fast_only_inputs
5.64s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_multigpu_data_parallel_forward
5.60s call tests/test_modeling_blenderbot.py::Blenderbot90MIntegrationTests::test_90_generation_from_short_input
5.58s call tests/test_modeling_electra.py::ElectraModelTest::test_model_outputs_equivalence
5.57s call tests/test_modeling_rag.py::RagDPRBartTest::test_model_with_encoder_outputs
5.56s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
5.54s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_attention_outputs
5.54s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_padding
5.51s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_attentions
5.46s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_alignement_methods
5.46s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_offsets_mapping
5.44s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_add_special_tokens
5.40s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_add_tokens
5.33s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_keras_save_load
5.30s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
5.27s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_graph_mode
5.22s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_hidden_states_output
5.20s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_graph_mode
5.12s call tests/test_pipelines.py::NerPipelineTests::test_tf_only_ner
5.11s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
5.10s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
5.09s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_hidden_state
5.07s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_graph_mode
5.05s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_pair_input
5.04s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_graph_mode
5.01s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_torchscript_output_hidden_state
4.99s call tests/test_modeling_fsmt.py::FSMTHeadTests::test_generate_fp16
4.94s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_model_outputs_equivalence
4.93s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_hidden_states_output
4.90s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_hidden_state
4.85s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript
4.84s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_train_pipeline_custom_model
4.82s call tests/test_pipelines.py::QAPipelineTests::test_tf_question_answering
4.81s call tests/test_modeling_encoder_decoder.py::BertEncoderDecoderModelTest::test_save_and_load_from_encoder_decoder_pretrained
4.78s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript_output_hidden_state
4.76s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_single_input
4.73s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_pretokenized_inputs
4.72s call tests/test_pipelines.py::QAPipelineTests::test_torch_question_answering
4.68s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_pt_tf_model_equivalence
4.63s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_add_special_tokens
4.60s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_save_load
4.59s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_model_outputs_equivalence
4.59s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_non_existent
4.57s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs
4.56s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_hidden_states_output
4.51s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_text2text
4.48s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_keras_save_load
4.48s call tests/test_tokenization_marian.py::MarianTokenizationTest::test_tokenizer_equivalence_en_de
4.47s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_keras_save_load
4.44s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_attention_outputs
4.43s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_hidden_states_output
4.41s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_keyword_and_dict_args
4.39s call tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline
4.34s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_save_load
4.33s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_hidden_states_output
4.28s call tests/test_tokenization_fsmt.py::FSMTTokenizationTest::test_pickle_tokenizer
4.28s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_text_generation
4.27s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
4.16s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_resize_token_embeddings
4.12s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model
4.10s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_keras_save_load
4.09s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_train_pipeline_custom_model
4.09s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_save_load
4.08s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_head_pruning_integration
4.07s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_keras_save_load
4.06s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript_output_attentions
4.04s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_fill_mask
3.96s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_determinism
3.96s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_pt_tf_model_equivalence
3.95s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_keras_save_load
3.95s setup tests/test_modeling_marian.py::TestMarian_FR_EN::test_batch_generation_fr_en
3.93s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_model_outputs_equivalence
3.88s call tests/test_pipelines.py::NerPipelineTests::test_ner_grouped
3.87s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_determinism
3.86s call tests/test_pipelines.py::NerPipelineTests::test_torch_ner
3.86s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_feature_extraction
3.85s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_sentiment_analysis
3.82s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
3.75s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_loss_computation
3.75s call tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
3.73s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_lm_head_model_random_beam_search_generate
3.72s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_keras_save_load
3.70s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask_with_targets
3.69s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_hidden_states_output
3.64s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
3.62s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_fill_mask_with_targets
3.60s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_feature_extraction
3.60s call tests/test_pipelines.py::NerPipelineTests::test_tf_ner
3.60s call tests/test_pipelines.py::ZeroShotClassificationPipelineTests::test_torch_zero_shot_classification
3.59s call tests/test_modeling_bert.py::BertModelTest::test_torchscript
3.58s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_sentiment_analysis
3.57s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_auto_config
3.53s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_resize_token_embeddings
3.50s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask
3.50s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_keras_save_load
3.50s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript
3.46s call tests/test_modeling_bart.py::BARTModelTest::test_tiny_model
3.46s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_inputs_embeds
3.44s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_hidden_state
3.42s call tests/test_modeling_xlnet.py::XLNetModelTest::test_model_outputs_equivalence
3.42s call tests/test_pipelines.py::ZeroShotClassificationPipelineTests::test_tf_zero_shot_classification
3.42s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_pt_tf_model_equivalence
3.40s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_batch_generation_en_ROMANCE_multi
3.40s call tests/test_tokenization_albert.py::AlbertTokenizationTest::test_maximum_encoding_length_pair_input
3.40s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence
3.39s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_keyword_and_dict_args
3.36s call tests/test_modeling_dpr.py::DPRModelTest::test_model_outputs_equivalence
3.35s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_batch_generation_en_de
3.33s call tests/test_modeling_bart.py::BARTModelTest::test_model_outputs_equivalence
3.32s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_determinism
3.32s call tests/test_pipelines.py::NerPipelineTests::test_tf_ner_grouped
3.31s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_tokenizer_handles_empty
3.30s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_resize_token_embeddings
3.29s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_pt_tf_model_equivalence
3.28s call tests/test_modeling_bert.py::BertModelTest::test_head_pruning_integration
3.28s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_model_outputs_equivalence
3.27s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_attentions
3.26s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_model_outputs_equivalence
3.23s setup tests/test_modeling_marian.py::TestMarian_en_zh::test_batch_generation_eng_zho
3.23s setup tests/test_modeling_marian.py::TestMarian_EN_FR::test_batch_generation_en_fr
3.22s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
3.20s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_attention_outputs
3.20s setup tests/test_modeling_marian.py::TestMarian_RU_FR::test_batch_generation_ru_fr
3.19s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_lm_head_model_random_no_beam_search_generate
3.18s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_keras_save_load
3.17s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_pt_tf_model_equivalence
3.15s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_graph_mode
3.13s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_pt_tf_model_equivalence
3.13s setup tests/test_modeling_marian.py::TestMarian_MT_EN::test_batch_generation_mt_en
3.12s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_compile_tf_model
3.12s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_forward
3.11s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_model_outputs_equivalence
3.10s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_keyword_and_dict_args
```
I made a 3-sec cut-off for this listing.
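As a side note, the usual way such an offender gets pulled out of the default run is the `@slow` marker from `transformers.testing_utils`; a minimal sketch (the test class and test name below are made up for illustration, only the decorator itself is assumed to exist):
```python
# Minimal sketch: the test class and test name are hypothetical; only the
# @slow decorator from transformers.testing_utils is assumed. Decorated tests
# are skipped unless RUN_SLOW=1 is set, so they only run in the scheduled CI
# jobs rather than on every PR.
import unittest

from transformers.testing_utils import slow


class SomeModelIntegrationTest(unittest.TestCase):
    @slow
    def test_expensive_generation(self):
        # would download a full pretrained checkpoint and run generation,
        # exactly the kind of work we don't want in the default CI run
        self.assertTrue(True)
```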
@sshleifer, @sgugger, @LysandreJik, @thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7885/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7884/comments | https://api.github.com/repos/huggingface/transformers/issues/7884/events | https://github.com/huggingface/transformers/pull/7884 | 724,070,976 | MDExOlB1bGxSZXF1ZXN0NTA1NTE5ODIx | 7,884 | [CIs] report slow tests add --durations=0 to some pytest jobs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | It appears that over the last 10 days or so the CIs have gone 4-5 times slower. From 2-3 minutes to 10-11 minutes for `run_tests_torch`.
This PR adds a report, generated by `pytest`, that lists the slowest tests so that we can quickly detect such regressions in our test suite's performance.
As suggested by @sshleifer, pytest can produce this diagnostic via its `--durations=` flag. I'm not sure what the best number to set it to is; let's try 50:
I propose adding this to:
* `run_tests_torch_and_tf` job - as it runs all non-slow tests in "real-time"
* all scheduled jobs that run slow tests
**edit**: Several fixes have been merged since this slowdown was found, so we are much better off, and there is an effort to perform a major cleanup here: https://github.com/huggingface/transformers/pull/7895. To support this effort, let's start by reporting the runtime of all tests, i.e. `--durations=0` (`pytest` currently doesn't have an option to report only tests slower than a certain runtime). Once the cleanup has been done, we can dial down to `--durations=50` to only monitor new outliers.
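Not the CI config itself, but for anyone reproducing this locally, a small sketch of the two phases via pytest's Python entry point (the `-n 3` worker count is arbitrary and `pytest-xdist` is assumed to be installed for it):
```python
# Sketch only: this mirrors the command-line invocations discussed above via
# pytest's Python API; it is not the CircleCI configuration.
import pytest

# clean-up phase: report the runtime of every test
pytest.main(["-n", "3", "--durations=0", "tests"])

# later, once the big offenders are dealt with, watch only the top outliers
# pytest.main(["-n", "3", "--durations=50", "tests"])
```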
@sshleifer, @sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7884/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7884",
"html_url": "https://github.com/huggingface/transformers/pull/7884",
"diff_url": "https://github.com/huggingface/transformers/pull/7884.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7884.patch",
"merged_at": 1603110195000
} |
https://api.github.com/repos/huggingface/transformers/issues/7883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7883/comments | https://api.github.com/repos/huggingface/transformers/issues/7883/events | https://github.com/huggingface/transformers/pull/7883 | 723,997,273 | MDExOlB1bGxSZXF1ZXN0NTA1NDY1MDY5 | 7,883 | style: fix typo | {
"login": "rememberYou",
"id": 6253527,
"node_id": "MDQ6VXNlcjYyNTM1Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6253527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rememberYou",
"html_url": "https://github.com/rememberYou",
"followers_url": "https://api.github.com/users/rememberYou/followers",
"following_url": "https://api.github.com/users/rememberYou/following{/other_user}",
"gists_url": "https://api.github.com/users/rememberYou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rememberYou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rememberYou/subscriptions",
"organizations_url": "https://api.github.com/users/rememberYou/orgs",
"repos_url": "https://api.github.com/users/rememberYou/repos",
"events_url": "https://api.github.com/users/rememberYou/events{/privacy}",
"received_events_url": "https://api.github.com/users/rememberYou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7883/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7883",
"html_url": "https://github.com/huggingface/transformers/pull/7883",
"diff_url": "https://github.com/huggingface/transformers/pull/7883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7883.patch",
"merged_at": 1603102493000
} |
https://api.github.com/repos/huggingface/transformers/issues/7882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7882/comments | https://api.github.com/repos/huggingface/transformers/issues/7882/events | https://github.com/huggingface/transformers/pull/7882 | 723,996,658 | MDExOlB1bGxSZXF1ZXN0NTA1NDY0NjEx | 7,882 | style: fix typo in the README | {
"login": "rememberYou",
"id": 6253527,
"node_id": "MDQ6VXNlcjYyNTM1Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6253527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rememberYou",
"html_url": "https://github.com/rememberYou",
"followers_url": "https://api.github.com/users/rememberYou/followers",
"following_url": "https://api.github.com/users/rememberYou/following{/other_user}",
"gists_url": "https://api.github.com/users/rememberYou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rememberYou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rememberYou/subscriptions",
"organizations_url": "https://api.github.com/users/rememberYou/orgs",
"repos_url": "https://api.github.com/users/rememberYou/repos",
"events_url": "https://api.github.com/users/rememberYou/events{/privacy}",
"received_events_url": "https://api.github.com/users/rememberYou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7882/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7882",
"html_url": "https://github.com/huggingface/transformers/pull/7882",
"diff_url": "https://github.com/huggingface/transformers/pull/7882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7882.patch",
"merged_at": 1603111406000
} |
https://api.github.com/repos/huggingface/transformers/issues/7881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7881/comments | https://api.github.com/repos/huggingface/transformers/issues/7881/events | https://github.com/huggingface/transformers/issues/7881 | 723,986,375 | MDU6SXNzdWU3MjM5ODYzNzU= | 7,881 | Error(s) in loading state_dict for BertForTokenClassification | {
"login": "ShivanshuPurohit",
"id": 42869065,
"node_id": "MDQ6VXNlcjQyODY5MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42869065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivanshuPurohit",
"html_url": "https://github.com/ShivanshuPurohit",
"followers_url": "https://api.github.com/users/ShivanshuPurohit/followers",
"following_url": "https://api.github.com/users/ShivanshuPurohit/following{/other_user}",
"gists_url": "https://api.github.com/users/ShivanshuPurohit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShivanshuPurohit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShivanshuPurohit/subscriptions",
"organizations_url": "https://api.github.com/users/ShivanshuPurohit/orgs",
"repos_url": "https://api.github.com/users/ShivanshuPurohit/repos",
"events_url": "https://api.github.com/users/ShivanshuPurohit/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShivanshuPurohit/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello, I would say that there's a mismatch between the number of labels you've provided and the number of labels supported by your `model.pt`.\r\n\r\nHow did you obtain your `model.pt`?",
"I downloaded it directly from the links provided for each bert models with `!wget`",
"Which links? Why did you switch from using the `from_pretrained` utility to the `load_state_dict` torch util?",
"I had to use the same model for two tasks. Since the downloaded model using `.from_pretrained` method go to root/ directory in colab, they aren't directly accessible (I actually opened an issue about that as well [here](https://github.com/huggingface/transformers/issues/7847)). Thus, to use the model for both tasks, I downloaded it manually in a model/ directory using `!wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I had the case when we tried to provide our own tokenizer with vocab.txt file.\r\nThe mistake we did was that we update the config.json file and set vocab_size as the total line number in the new vocab.txt file.\r\n\r\nThis was the wrong step we took. config.json has to be the one exactly for model.pt we want to use. If the model.pt is from the hub, then no need to change the config.json.\r\n",
"This happened when i evaluating model on CPU while fine-tune model on GPU. It disappeared when i evaluate it on GPU.",
"> This happened when i evaluating model on CPU while fine-tune model on GPU. It disappeared when i evaluate it on GPU.\r\n\r\nHey @aakejiang, I am encountering the same issue, did you manage to fix it ? Thanks. "
] | 1,603 | 1,693 | 1,609 | NONE | null | ### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
examples/token-classification: @stefan-it
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* my own modified scripts: (give details below):
I am trying to run the following script
```
from transformers import BertTokenizer, BertConfig
from transformers import BertForTokenClassification, AdamW
import torch
import argparse
import numpy as np
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
parser = argparse.ArgumentParser(description='BERT Keyword Extractor')
parser.add_argument('--sentence', type=str, default=' ',
help='sentence to get keywords')
parser.add_argument('--path', type=str, default='model.pt',
help='path to load model')
args = parser.parse_args()
tag2idx = {'B': 0, 'I': 1, 'O': 2}
tags_vals = ['B', 'I', 'O']
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(tag2idx))
def keywordextract(sentence, path):
text = sentence
tkns = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tkns)
segments_ids = [0] * len(tkns)
tokens_tensor = torch.tensor([indexed_tokens]).to(device)
segments_tensors = torch.tensor([segments_ids]).to(device)
model.load_state_dict(torch.load(path))
model.eval()
prediction = []
logit = model(tokens_tensor, token_type_ids=None,
attention_mask=segments_tensors)
logit = logit.detach().cpu().numpy()
prediction.extend([list(p) for p in np.argmax(logit, axis=2)])
for k, j in enumerate(prediction[0]):
if j==1 or j==0:
print(tokenizer.convert_ids_to_tokens(tokens_tensor[0].to('cpu').numpy())[k], j)
keywordextract(args.sentence, args.path)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* my own task or dataset: (give details below)
SemEval2017 task 10
## To reproduce
Steps to reproduce the behavior:
1. run this script with any string ("Huggingface is a blessing for transfer learning")
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The error is
```
Traceback (most recent call last):
File "keyword-extractor.py", line 40, in <module>
keywordextract(args.sentence, args.path)
File "keyword-extractor.py", line 28, in keywordextract
model.load_state_dict(torch.load(path))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([3, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([3]).
```
If I instead load a model downloaded directly from the Hugging Face archive, the error changes to
```
Traceback (most recent call last):
File "keyword-extractor.py", line 40, in <module>
keywordextract(args.sentence, args.path)
File "keyword-extractor.py", line 28, in keywordextract
model.load_state_dict(torch.load(path))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
Missing key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer.2.attention.output.LayerNorm.weight", "bert.encoder.layer.2.attention.output.LayerNorm.bias", "bert.encoder.layer.2.output.LayerNorm.weight", "bert.encoder.layer.2.output.LayerNorm.bias", "bert.encoder.layer.3.attention.output.LayerNorm.weight", "bert.encoder.layer.3.attention.output.LayerNorm.bias", "bert.encoder.layer.3.output.LayerNorm.weight", "bert.encoder.layer.3.output.LayerNorm.bias", "bert.encoder.layer.4.attention.output.LayerNorm.weight", "bert.encoder.layer.4.attention.output.LayerNorm.bias", "bert.encoder.layer.4.output.LayerNorm.weight", "bert.encoder.layer.4.output.LayerNorm.bias", "bert.encoder.layer.5.attention.output.LayerNorm.weight", "bert.encoder.layer.5.attention.output.LayerNorm.bias", "bert.encoder.layer.5.output.LayerNorm.weight", "bert.encoder.layer.5.output.LayerNorm.bias", "bert.encoder.layer.6.attention.output.LayerNorm.weight", "bert.encoder.layer.6.attention.output.LayerNorm.bias", "bert.encoder.layer.6.output.LayerNorm.weight", "bert.encoder.layer.6.output.LayerNorm.bias", "bert.encoder.layer.7.attention.output.LayerNorm.weight", "bert.encoder.layer.7.attention.output.LayerNorm.bias", "bert.encoder.layer.7.output.LayerNorm.weight", "bert.encoder.layer.7.output.LayerNorm.bias", "bert.encoder.layer.8.attention.output.LayerNorm.weight", "bert.encoder.layer.8.attention.output.LayerNorm.bias", "bert.encoder.layer.8.output.LayerNorm.weight", "bert.encoder.layer.8.output.LayerNorm.bias", "bert.encoder.layer.9.attention.output.LayerNorm.weight", "bert.encoder.layer.9.attention.output.LayerNorm.bias", "bert.encoder.layer.9.output.LayerNorm.weight", "bert.encoder.layer.9.output.LayerNorm.bias", "bert.encoder.layer.10.attention.output.LayerNorm.weight", "bert.encoder.layer.10.attention.output.LayerNorm.bias", "bert.encoder.layer.10.output.LayerNorm.weight", "bert.encoder.layer.10.output.LayerNorm.bias", "bert.encoder.layer.11.attention.output.LayerNorm.weight", "bert.encoder.layer.11.attention.output.LayerNorm.bias", "bert.encoder.layer.11.output.LayerNorm.weight", "bert.encoder.layer.11.output.LayerNorm.bias", "classifier.weight", "classifier.bias".
Unexpected key(s) in state_dict: "cls.predictions.bias", "cls.predictions.transform.dense.weight", "cls.predictions.transform.dense.bias", "cls.predictions.transform.LayerNorm.gamma", "cls.predictions.transform.LayerNorm.beta", "cls.predictions.decoder.weight", "cls.seq_relationship.weight", "cls.seq_relationship.bias", "bert.pooler.dense.weight", "bert.pooler.dense.bias", "bert.embeddings.LayerNorm.gamma", "bert.embeddings.LayerNorm.beta", "bert.encoder.layer.0.attention.output.LayerNorm.gamma", "bert.encoder.layer.0.attention.output.LayerNorm.beta", "bert.encoder.layer.0.output.LayerNorm.gamma", "bert.encoder.layer.0.output.LayerNorm.beta", "bert.encoder.layer.1.attention.output.LayerNorm.gamma", "bert.encoder.layer.1.attention.output.LayerNorm.beta", "bert.encoder.layer.1.output.LayerNorm.gamma", "bert.encoder.layer.1.output.LayerNorm.beta", "bert.encoder.layer.2.attention.output.LayerNorm.gamma", "bert.encoder.layer.2.attention.output.LayerNorm.beta", "bert.encoder.layer.2.output.LayerNorm.gamma", "bert.encoder.layer.2.output.LayerNorm.beta", "bert.encoder.layer.3.attention.output.LayerNorm.gamma", "bert.encoder.layer.3.attention.output.LayerNorm.beta", "bert.encoder.layer.3.output.LayerNorm.gamma", "bert.encoder.layer.3.output.LayerNorm.beta", "bert.encoder.layer.4.attention.output.LayerNorm.gamma", "bert.encoder.layer.4.attention.output.LayerNorm.beta", "bert.encoder.layer.4.output.LayerNorm.gamma", "bert.encoder.layer.4.output.LayerNorm.beta", "bert.encoder.layer.5.attention.output.LayerNorm.gamma", "bert.encoder.layer.5.attention.output.LayerNorm.beta", "bert.encoder.layer.5.output.LayerNorm.gamma", "bert.encoder.layer.5.output.LayerNorm.beta", "bert.encoder.layer.6.attention.output.LayerNorm.gamma", "bert.encoder.layer.6.attention.output.LayerNorm.beta", "bert.encoder.layer.6.output.LayerNorm.gamma", "bert.encoder.layer.6.output.LayerNorm.beta", "bert.encoder.layer.7.attention.output.LayerNorm.gamma", "bert.encoder.layer.7.attention.output.LayerNorm.beta", "bert.encoder.layer.7.output.LayerNorm.gamma", "bert.encoder.layer.7.output.LayerNorm.beta", "bert.encoder.layer.8.attention.output.LayerNorm.gamma", "bert.encoder.layer.8.attention.output.LayerNorm.beta", "bert.encoder.layer.8.output.LayerNorm.gamma", "bert.encoder.layer.8.output.LayerNorm.beta", "bert.encoder.layer.9.attention.output.LayerNorm.gamma", "bert.encoder.layer.9.attention.output.LayerNorm.beta", "bert.encoder.layer.9.output.LayerNorm.gamma", "bert.encoder.layer.9.output.LayerNorm.beta", "bert.encoder.layer.10.attention.output.LayerNorm.gamma", "bert.encoder.layer.10.attention.output.LayerNorm.beta", "bert.encoder.layer.10.output.LayerNorm.gamma", "bert.encoder.layer.10.output.LayerNorm.beta", "bert.encoder.layer.11.attention.output.LayerNorm.gamma", "bert.encoder.layer.11.attention.output.LayerNorm.beta", "bert.encoder.layer.11.output.LayerNorm.gamma", "bert.encoder.layer.11.output.LayerNorm.beta".
```
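For completeness, the first error is the label-count mismatch flagged in the comments above (the saved classifier has 2 outputs while this script builds a 3-label head), and the second is what typically happens when a raw pretrained BERT archive (old `gamma`/`beta` LayerNorm names, `cls.predictions.*` head) is passed to `load_state_dict` instead of going through `from_pretrained`. A rough sketch of the usual ways around the first error, with placeholder paths and label counts rather than the poster's actual setup:
```python
# Rough sketch, not the poster's actual fix: "model.pt" and the label counts
# are placeholders. Only BertForTokenClassification.from_pretrained and
# torch's load_state_dict are relied on here.
import torch
from transformers import BertForTokenClassification

state_dict = torch.load("model.pt", map_location="cpu")

# Option 1: build the model with the same number of labels the checkpoint was
# fine-tuned with (2 here), so the classifier shapes line up. Depending on the
# versions involved, strict=False may still be needed for buffer keys such as
# bert.embeddings.position_ids.
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.load_state_dict(state_dict, strict=False)

# Option 2: keep the 3-label head, drop the checkpoint's mismatched classifier
# weights, and load the rest non-strictly; the head then stays randomly
# initialized and needs further fine-tuning.
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=3)
backbone_only = {k: v for k, v in state_dict.items() if not k.startswith("classifier.")}
model.load_state_dict(backbone_only, strict=False)
```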
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
To extract the keywords from a string | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7881/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7880/comments | https://api.github.com/repos/huggingface/transformers/issues/7880/events | https://github.com/huggingface/transformers/pull/7880 | 723,964,030 | MDExOlB1bGxSZXF1ZXN0NTA1NDQwMzI1 | 7,880 | Fix bug in _sorted_checkpoints | {
"login": "shaie",
"id": 3469932,
"node_id": "MDQ6VXNlcjM0Njk5MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3469932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaie",
"html_url": "https://github.com/shaie",
"followers_url": "https://api.github.com/users/shaie/followers",
"following_url": "https://api.github.com/users/shaie/following{/other_user}",
"gists_url": "https://api.github.com/users/shaie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaie/subscriptions",
"organizations_url": "https://api.github.com/users/shaie/orgs",
"repos_url": "https://api.github.com/users/shaie/repos",
"events_url": "https://api.github.com/users/shaie/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ooh, that's an ugly typo. Thanks for fixing! Will make sure to add a test of this soon so it doesn't break with me being stupid like that."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | I'm using transformers 3.3.1 and running a training script with `--save_total_limit 3`. I hit the exception below, and after debugging the code I found that it wrongly indexes into the `best_model_checkpoint` string rather than the `sorted_checkpoints` list. When running without the fix I got this exception:
```
Traceback (most recent call last):
File "/<HOME>/.conda/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 921, in _save_training
self._rotate_checkpoints(use_mtime=True)
File "/<HOME>/.conda/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1283, in _rotate_checkpoints
checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)
File "/<HOME>/.conda/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1274, in _sorted_checkpoints
checkpoints_sorted[best_model_index],
TypeError: 'str' object does not support item assignment
```
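For reference, here is a rough sketch of what the corrected swap should look like (variable names are illustrative, taken from the traceback rather than the actual diff) - the point is that the assignment has to go into the `sorted_checkpoints` list, not into the `best_model_checkpoint` string:
```
# illustrative sketch only, not the exact patch
best_model_checkpoint = "output/checkpoint-500"  # str path of the best checkpoint (assumed value)
checkpoints_sorted = ["output/checkpoint-500", "output/checkpoint-1000", "output/checkpoint-1500"]

best_model_index = checkpoints_sorted.index(best_model_checkpoint)
# swap inside the list so the best checkpoint ends up last and is not rotated out
checkpoints_sorted[best_model_index], checkpoints_sorted[-1] = (
    checkpoints_sorted[-1],
    checkpoints_sorted[best_model_index],
)
```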
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7880/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7880/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7880",
"html_url": "https://github.com/huggingface/transformers/pull/7880",
"diff_url": "https://github.com/huggingface/transformers/pull/7880.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7880.patch",
"merged_at": 1603194648000
} |
https://api.github.com/repos/huggingface/transformers/issues/7879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7879/comments | https://api.github.com/repos/huggingface/transformers/issues/7879/events | https://github.com/huggingface/transformers/issues/7879 | 723,957,279 | MDU6SXNzdWU3MjM5NTcyNzk= | 7,879 | question about `add_special_tokens` and embedding layer | {
"login": "hyunwoongko",
"id": 38183241,
"node_id": "MDQ6VXNlcjM4MTgzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/38183241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyunwoongko",
"html_url": "https://github.com/hyunwoongko",
"followers_url": "https://api.github.com/users/hyunwoongko/followers",
"following_url": "https://api.github.com/users/hyunwoongko/following{/other_user}",
"gists_url": "https://api.github.com/users/hyunwoongko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyunwoongko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyunwoongko/subscriptions",
"organizations_url": "https://api.github.com/users/hyunwoongko/orgs",
"repos_url": "https://api.github.com/users/hyunwoongko/repos",
"events_url": "https://api.github.com/users/hyunwoongko/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyunwoongko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"When you add special tokens to the tokenizer, you should resize the model embedding matrix so that it contains the extra embeddings.\r\n\r\nYou should then do an additional training/fine-tuning on data containing those special tokens so that the model may tune this new column in the embedding matrix.",
"thanks !"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # ❓ Questions & Help
When I call `add_special_tokens` on the tokenizer, a new token that is not in the existing vocab is added. At that point the vocab size no longer matches the existing embedding layer, so what happens inside?
1. Abandon the existing embedding layer and train a new one.
2. Train only the new tokens (i.e. expand the existing embedding layer). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7879/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7878/comments | https://api.github.com/repos/huggingface/transformers/issues/7878/events | https://github.com/huggingface/transformers/pull/7878 | 723,901,939 | MDExOlB1bGxSZXF1ZXN0NTA1Mzg4MTc2 | 7,878 | [multiple models] skip saving/loading deterministic state_dict keys | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"OK, so how would you recommend we run a specific common test for a model whose test module isn't running common tests at all? \r\n\r\nPlease have a look at the approach I used for `mbart` https://github.com/huggingface/transformers/pull/7878/files#diff-f3b9af7b4721bdf972a72e35a35e8742ddc986f12218e728cd2b6ad30a156abb\r\n\r\nI basically made a very stripped down `ModelTester` and then used an alias hack to run just a specific common test via:\r\n```\r\nclass SelectiveCommonTest(unittest.TestCase):\r\n test_save_load_keys_to_never_save = ModelTesterMixin.test_save_load_keys_to_never_save\r\n```\r\nIs this any good, or would you recommend a different approach - the idea is not to copy the same test to each test module.\r\n",
"I like how you did it.\r\nAdding @LysandreJik and @sgugger to see if they agree before you do it 5 times.\r\n",
"> I imagine you double-checked the test was indeed executed.\r\n\r\nYes. \r\n\r\nIndeed, my initial test that I wrote for fsmt wasn't executing, since I renamed the keys in the code and forgot the test and the test was no-op if it didn't find the right keys. So some print()s helped to verify it indeed worked in this new incarnation.",
"I think `test_save_load_missing_keys` should be handled separately since it can add a significant overhead if run by all models, since I noticed tf models are **very** slow at save/load tests.. I reverted moving it from fsmt to common tests for now - back to just fsmt - will do a separate PR about it.\r\n",
"Anybody with a good knowledge of XLM? @sshleifer suggested to add xlm to this PR, but I'm not sure whether it's safe to not save `self.position_embeddings`, since as you can see here:\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlm.py#L439\r\nthe deterministic non-trained weights happen only if `config.sinusoidal_embeddings` is True. Otherwise it gets trained like any other embed layer.\r\n\r\nWhich leaves just: `position_ids` which is tiny a set of 512-2048 floats, which is not worth bothering for IMHO.\r\n\r\n",
"wrt t5 - I don't see anything positional or deterministic for that matter, but this is quite a different model - are there any state_dict keys here that would be a good candidate for not saving?",
"I implemented this for: mbart, pegasus, marian - all subclasses of `BartForConditionalGeneration` w/ `static_position_embeddings=True`. `fsmt` already had it.\r\n\r\nAnything else that I may have missed?\r\n\r\nIf not, this PR is good to go."
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | Implementing https://github.com/huggingface/transformers/issues/7296
* [x] make testing save/load of `keys_to_never_save` a common test by moving it from `fsmt` modeling test
* [x] implement skipping for all models with static positional embeddings: mbart, pegasus, marian (fsmt already had this feature in the first place) - a small sketch of the idea is below
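The idea, roughly (illustrative sketch only - the helper and the key names are made up for the example, this is not the code added in this PR):
```
import torch

def filter_state_dict(state_dict, keys_to_never_save):
    # deterministic entries (e.g. static sinusoidal position embeddings) can be dropped before saving
    skip = set(keys_to_never_save)
    return {k: v for k, v in state_dict.items() if k not in skip}

sd = {"embed_tokens.weight": torch.zeros(4, 2), "embed_positions.weight": torch.zeros(8, 2)}
torch.save(filter_state_dict(sd, ["embed_positions.weight"]), "no_static_keys.bin")
```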
Fixes: #7296
@LysandreJik, @sshleifer, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7878/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7878",
"html_url": "https://github.com/huggingface/transformers/pull/7878",
"diff_url": "https://github.com/huggingface/transformers/pull/7878.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7878.patch",
"merged_at": 1603281968000
} |
https://api.github.com/repos/huggingface/transformers/issues/7877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7877/comments | https://api.github.com/repos/huggingface/transformers/issues/7877/events | https://github.com/huggingface/transformers/pull/7877 | 723,898,246 | MDExOlB1bGxSZXF1ZXN0NTA1Mzg1NDM3 | 7,877 | [wip] [pegasus] encode s/\r?\n/<n>/g + test | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have some concerns defaulting this logic to `True`, given that some `sshleifer/` models like `sshleifer/pegasus-cnn-ft-v2/`, `distill-pegasus-cnn-16-4/` were trained without newlines in the dataset. I'm sorry I forgot to mention these until now.\r\n\r\nI would merge this if the logic to do the regex were defaulted to False and we checked that `PegasusFastTokenizer` did the same thing.\r\nI would also be fine abandoning, since you have already solved the Pegasus replication issue with your `build/` scripts, and we haven't had anybody external ask for this yet.",
"Sure, we can just leave it unmerged and if someone reports an issue get back to sorting this out. It was a trivial change, so I have no problem with closing this. It's your call, @sshleifer.",
"will reopen if needed, thx for being flexible @stas00 "
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | As discussed in https://github.com/huggingface/transformers/issues/7743
* [x] On encode: `s/\r?\n/<n>/g` + test
* [ ] On decode: `s/<n>/\n/g` - not sure what to do here, since `run_eval.py` reads each record with readline and can't handle multiline outputs; inserting `\n` would break records that become multiline (a rough sketch of both substitutions is below).
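For illustration only (plain Python `re`, not the tokenizer changes themselves):
```
import re

def encode_newlines(text):
    # map Windows/Unix newlines to the <n> token Pegasus was pretrained with
    return re.sub(r"\r?\n", "<n>", text)

def decode_newlines(text):
    # naive inverse; run_eval.py would still need record-aware handling for real newlines
    return text.replace("<n>", "\n")

assert encode_newlines("line one\r\nline two\nline three") == "line one<n>line two<n>line three"
```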
@sshleifer
Fixes: #7743 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7877/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7877",
"html_url": "https://github.com/huggingface/transformers/pull/7877",
"diff_url": "https://github.com/huggingface/transformers/pull/7877.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7877.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7876/comments | https://api.github.com/repos/huggingface/transformers/issues/7876/events | https://github.com/huggingface/transformers/issues/7876 | 723,891,953 | MDU6SXNzdWU3MjM4OTE5NTM= | 7,876 | xlm-mlm-17-1280 & xlm-mlm-100-1280 include languages? | {
"login": "wmathor",
"id": 32392878,
"node_id": "MDQ6VXNlcjMyMzkyODc4",
"avatar_url": "https://avatars.githubusercontent.com/u/32392878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wmathor",
"html_url": "https://github.com/wmathor",
"followers_url": "https://api.github.com/users/wmathor/followers",
"following_url": "https://api.github.com/users/wmathor/following{/other_user}",
"gists_url": "https://api.github.com/users/wmathor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wmathor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wmathor/subscriptions",
"organizations_url": "https://api.github.com/users/wmathor/orgs",
"repos_url": "https://api.github.com/users/wmathor/repos",
"events_url": "https://api.github.com/users/wmathor/events{/privacy}",
"received_events_url": "https://api.github.com/users/wmathor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you can find this data [here](https://github.com/facebookresearch/XLM#the-17-and-100-languages)."
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
Where is the list showing what languages are supported by each of these two models? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7876/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7876/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7875/comments | https://api.github.com/repos/huggingface/transformers/issues/7875/events | https://github.com/huggingface/transformers/issues/7875 | 723,861,490 | MDU6SXNzdWU3MjM4NjE0OTA= | 7,875 | How to do categorical sequence classification? | {
"login": "FerusAndBeyond",
"id": 18237033,
"node_id": "MDQ6VXNlcjE4MjM3MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/18237033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FerusAndBeyond",
"html_url": "https://github.com/FerusAndBeyond",
"followers_url": "https://api.github.com/users/FerusAndBeyond/followers",
"following_url": "https://api.github.com/users/FerusAndBeyond/following{/other_user}",
"gists_url": "https://api.github.com/users/FerusAndBeyond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FerusAndBeyond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FerusAndBeyond/subscriptions",
"organizations_url": "https://api.github.com/users/FerusAndBeyond/orgs",
"repos_url": "https://api.github.com/users/FerusAndBeyond/repos",
"events_url": "https://api.github.com/users/FerusAndBeyond/events{/privacy}",
"received_events_url": "https://api.github.com/users/FerusAndBeyond/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Solved by adding `num_labels` to the constructor: `BertForSequenceClassification.from_pretrained(\"...\", num_labels=z)`. Then the onehot-encoding wasn't necessary either. Another error would probably make it easier to spot this though."
] | 1,602 | 1,603 | 1,603 | NONE | null | Whenever I have a sequence of max length y with batch size x (inputs of shape (x, y)) and categorical classification over z classes (labels of shape (x, z)), I get an error that the batch sizes don't match (x vs x*z). How do I fix this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7875/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7874/comments | https://api.github.com/repos/huggingface/transformers/issues/7874/events | https://github.com/huggingface/transformers/issues/7874 | 723,852,173 | MDU6SXNzdWU3MjM4NTIxNzM= | 7,874 | [Rag] extend_enc_output fails when number of retrieved documents not equal to RagConfig.n_docs | {
"login": "lalitpagaria",
"id": 19303690,
"node_id": "MDQ6VXNlcjE5MzAzNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalitpagaria",
"html_url": "https://github.com/lalitpagaria",
"followers_url": "https://api.github.com/users/lalitpagaria/followers",
"following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}",
"gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions",
"organizations_url": "https://api.github.com/users/lalitpagaria/orgs",
"repos_url": "https://api.github.com/users/lalitpagaria/repos",
"events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalitpagaria/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lalitpagaria - good point! I think the cleanest solution here would actually be to add `n_docs` as a argument to all RagModel forward's function that defaults to `self.config.n_docs` (exactly as it is the case for `use_cache` or `output_attentions`) => it would be great if you could open a PR for this :-) ",
"@patrickvonplaten I think this approach also will not solve a situation when retriever return less docs than `n_docs`. For example let's say a dataset only have `4` documents but `n_docs` value is `5`. \r\n\r\nHow about solving this as follows -\r\n1) When fetching docs from retriever use `n_docs` value (either as a argument or default `self.config.n_docs`)\r\n2) All other places use value from number of documents returned by retriever ie dimension `0` of `context_input_ids` can be used\r\n\r\nLet me know what do you think.",
"I think this is an edge case - if a dataset only has 4 documents, then the user should not define `n_docs` as 5. I agree that it would be easier to just use the incoming tensor's dimension as `n_docs`, but it would be less readable and can also lead to confusion. `n_docs` should always be the number of docs actually given to Rag. So I'd prefer to not use the retriever's dimension here. We could add an assert that the dimension 0 is always equal `n_docs` which would be helpful for the user, but I think we should still rely on the `n_docs` parameter.",
"Sure I will create PR (using n_docs with assert) with related tests."
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Rag
The problem arises when using:
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: `dummy_dataset`
## To reproduce
Steps to reproduce the behavior:
1. Use the "calling the retriever separately" example, but modify the number of retrieved documents parameter `n_docs` as follows -
```
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt", n_docs=2)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Code snippet -
```
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", use_dummy_dataset=True)
input_dict = tokenizer.prepare_seq2seq_batch("What is capital of France?", return_tensors="pt")
input_ids = input_dict["input_ids"]
# Calling retriever separately
question_hidden_states = model.question_encoder(input_ids)[0]
# Default value of RagConfig.n_docs is 5 hence change n_docs to different than default value
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt", n_docs=2)
doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)).squeeze(1)
outputs = model.generate(context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores)
generated_string = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(generated_string)
```
Stacktrace -
```
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in generate(self, input_ids, attention_mask, context_input_ids, context_attention_mask, doc_scores, max_length, min_length, early_stopping, use_cache, num_beams, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, bad_words_ids, num_return_sequences, decoder_start_token_id, **kwargs)
1353
1354 # correctly extend last_hidden_state and attention mask
-> 1355 context_attention_mask = extend_enc_output(context_attention_mask, num_beams=num_beams)
1356 encoder_outputs["last_hidden_state"] = extend_enc_output(last_hidden_state, num_beams=num_beams)
1357
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in extend_enc_output(tensor, num_beams)
1346 def extend_enc_output(tensor, num_beams=None):
1347 # split into `batch_size`, `num_beams`, `num_docs`
-> 1348 tensor = tensor[None, None, :].reshape((batch_size, 1, self.config.n_docs) + tensor.shape[1:])
1349 # repeat same last hidden states over `num_beams` dimension
1350 tensor = tensor.expand((batch_size, num_beams, self.config.n_docs) + tensor.shape[3:])
RuntimeError: shape '[0, 1, 5, 300]' is invalid for input of size 600
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I think `extend_enc_output` should use the tensor's own dimension instead of `self.config.n_docs`, as follows -
```
tensor = tensor[None, None, :].reshape((batch_size, 1, tensor.shape[0:]) + tensor.shape[1:])
```
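Alternatively, `n_docs` could be forwarded explicitly and checked for consistency, which is where the comment thread on this issue ends up. A hedged, illustrative sketch of that shape (helper name and signature are made up for the example, not library code):
```
import torch

def resolve_n_docs(context_input_ids, config_n_docs, n_docs=None):
    # the argument overrides the config default, mirroring use_cache / output_attentions
    n_docs = n_docs if n_docs is not None else config_n_docs
    assert context_input_ids.shape[0] % n_docs == 0, (
        "The first dimension of context_input_ids should be a multiple of n_docs, "
        f"got {context_input_ids.shape[0]} and n_docs={n_docs}."
    )
    return n_docs

print(resolve_n_docs(torch.zeros(4, 300, dtype=torch.long), config_n_docs=5, n_docs=2))  # 2
```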
Let me know if proposed fix is fine, I can raise PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7874/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7873/comments | https://api.github.com/repos/huggingface/transformers/issues/7873/events | https://github.com/huggingface/transformers/issues/7873 | 723,836,451 | MDU6SXNzdWU3MjM4MzY0NTE= | 7,873 | can't set evaluation_strategy to "epoch" | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This has been done already, see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py#L335). But you need to install from source for that, since the change is recent,",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
### Who can help
@sgugger
## Information
It seems we can't use the `evaluation_strategy=epoch` option. The default value of `evaluate_during_training` is a boolean (`False`). In `transformers/training_args.py`, in the `__post_init__` function, there is a check - `if self.evaluate_during_training is not None:` (line 326). If I'm not mistaken, this will always evaluate to `True`. I think that changing the line to `if self.evaluate_during_training:` will solve the problem.
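A quick standalone illustration of why that check always passes (plain Python, not the trainer code itself):
```
evaluate_during_training = False  # the default value

print(evaluate_during_training is not None)  # True  -> the branch is entered even for the default
print(bool(evaluate_during_training))        # False -> the suggested check skips the branch
```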
Thank you so much for your awesome work :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7873/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7872/comments | https://api.github.com/repos/huggingface/transformers/issues/7872/events | https://github.com/huggingface/transformers/pull/7872 | 723,834,899 | MDExOlB1bGxSZXF1ZXN0NTA1MzM3NDg4 | 7,872 | Fix Rag example docstring | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7829
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7872",
"html_url": "https://github.com/huggingface/transformers/pull/7872",
"diff_url": "https://github.com/huggingface/transformers/pull/7872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7872.patch",
"merged_at": 1602967608000
} |
https://api.github.com/repos/huggingface/transformers/issues/7871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7871/comments | https://api.github.com/repos/huggingface/transformers/issues/7871/events | https://github.com/huggingface/transformers/issues/7871 | 723,826,629 | MDU6SXNzdWU3MjM4MjY2Mjk= | 7,871 | RAG generate function uses input_ids even when context_input_ids are given. | {
"login": "LittlePea13",
"id": 26126169,
"node_id": "MDQ6VXNlcjI2MTI2MTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/26126169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LittlePea13",
"html_url": "https://github.com/LittlePea13",
"followers_url": "https://api.github.com/users/LittlePea13/followers",
"following_url": "https://api.github.com/users/LittlePea13/following{/other_user}",
"gists_url": "https://api.github.com/users/LittlePea13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LittlePea13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LittlePea13/subscriptions",
"organizations_url": "https://api.github.com/users/LittlePea13/orgs",
"repos_url": "https://api.github.com/users/LittlePea13/repos",
"events_url": "https://api.github.com/users/LittlePea13/events{/privacy}",
"received_events_url": "https://api.github.com/users/LittlePea13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @LittlePea13 - yes you are very much correct here. This PR linked to the issue should fix it. Thanks a lot!"
] | 1,602 | 1,602 | 1,602 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-51-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
I think @patrickvonplaten has been checking RAG issues
## Information
Model I am using: RagTokenForGeneration
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
One can use the demo for RAG [currently in PR](https://github.com/huggingface/transformers/pull/7455) but it will happen in any case.
1. Load a RagTokenForGeneration model.
2. Generate the `context_input_ids` (in the demo this is done with a forward pass).
3. Use the `generate` function without giving `input_ids`, which is supposed to be an optional input.
4. The function checks the `batch_size` using `input_ids` and breaks because it is `None`, at this line: https://github.com/huggingface/transformers/blob/9f7b2b243230a0ff7e61f48f852e3a5b6f6d86fa/src/transformers/modeling_rag.py#L1311 (a small sketch of the suggested guard is below)
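Illustrative sketch only (see "Expected behavior" below for the actual suggestion); it derives the batch size from the retrieved tensors when `input_ids` is `None`:
```
import torch

def infer_batch_size(input_ids=None, doc_scores=None):
    if input_ids is not None:
        return input_ids.shape[0]
    # doc_scores has shape (batch_size, n_docs), so its first dimension is the batch size
    return doc_scores.shape[0]

print(infer_batch_size(doc_scores=torch.zeros(1, 5)))  # 1
```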
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
query = "My question"
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
rag_conf = RagConfig.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", question_encoder_tokenizer = tokenizer.question_encoder, generator_tokenizer = tokenizer.generator, index_name="custom", indexed_dataset=dataset)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
device = "cuda:0"
input_ids = tokenizer(query, return_tensors="pt").input_ids.to(device)
with torch.no_grad():
# retrieve support docs
retrieved_outputs = model(input_ids, labels=None, output_retrieved=True)
dl_scores = retrieved_outputs.doc_scores[0].tolist()
dp_scores = retrieved_outputs.doc_scores.softmax(dim=-1)[0].tolist()
doc_dicts = retriever.index.get_doc_dicts(retrieved_outputs.retrieved_doc_ids)[0]
support_docs = [
{"score": ls, "proba": ns, "title": ti, "text": te}
for ls, ns, ti, te in zip(dl_scores, dp_scores, doc_dicts["title"], doc_dicts["text"])
]
# generate answers
generated_ids = model.generate(
context_input_ids=retrieved_outputs.context_input_ids,
context_attention_mask=retrieved_outputs.context_attention_mask,
doc_scores=retrieved_outputs.doc_scores,
num_beams=4,
num_return_sequences=4,
min_length=2,
max_length=64,
length_penalty=1.0,
)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The batch size should be obtained differently. For instance `batch_size = doc_scores.shape[0]`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7871/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7870/comments | https://api.github.com/repos/huggingface/transformers/issues/7870/events | https://github.com/huggingface/transformers/pull/7870 | 723,788,919 | MDExOlB1bGxSZXF1ZXN0NTA1MzAzOTM4 | 7,870 | Add missing comma | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7870/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7870",
"html_url": "https://github.com/huggingface/transformers/pull/7870",
"diff_url": "https://github.com/huggingface/transformers/pull/7870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7870.patch",
"merged_at": 1603283053000
} |
https://api.github.com/repos/huggingface/transformers/issues/7869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7869/comments | https://api.github.com/repos/huggingface/transformers/issues/7869/events | https://github.com/huggingface/transformers/pull/7869 | 723,751,147 | MDExOlB1bGxSZXF1ZXN0NTA1Mjc2NzUy | 7,869 | Raise error when using AMP on non-CUDA device | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I think the postinit is a great place for this sanity check, so the error comes fast to the user, and I don't think our current FP16 implementations (Apex and Amp) work with TPUs. I can check the latter on Monday and report back here.\r\n\r\nIf you'd rather be more forgiving, let me know. In that case we can do a logging.warning(f\"AMP only supported when using CUDA. Turning off AMP because not using a CUDA device ({self.device.type})\") and subsequently automatically turning off --fp16. ",
"Ok, just tested on TPUs and FP16 does not work on them indeed, though native amp fails graciously (testing on TPUs with PyTorch < 1.6 is suicide so I did not try APEX). So if you fix the styling issue, I think this is good to be merged!\r\n",
"It has to be run by the user, I can force-push the styling on your branch if you don't have the setup to run it easily.",
"> It has to be run by the user, I can force-push the styling on your branch if you don't have the setup to run it easily.\r\n\r\nThanks but it should work fine now. Updated local dependencies and all that and reformatted.",
"Thanks for your contribution!"
] | 1,602 | 1,603 | 1,603 | COLLABORATOR | null | The trainer currently implements native AMP in such a way that a GradScaler is always used. AFAIK, and as supported by [this post](https://github.com/pytorch/pytorch/issues/36169#issuecomment-611261781) and [this report on the forums](https://discuss.huggingface.co/t/training-gpt2-on-cpus/1603), this will only work on CUDA devices. Therefore, an error should probably be thrown when a user tries to use `--fp16` alongside a non-CUDA device.
This PR is currently very brief because I am uncertain about a few things:
- I am not sure if the current implementation of `--fp16` in the trainer works with TPUs;
- I am not sure about differences in behaviour between using `apex` and native AMP in this respect;
- I am not entirely sure whether `_post_init` is the right place to do an argument sanity check. Particularly because I am now calling `self.device` which will set the device through `_set_devices`, even though you may (?) want to delay that as long as possible until the arguments are used in the trainer.
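For illustration, a standalone sketch of the kind of check meant above (the exact placement, `__post_init__` or elsewhere, is exactly what is open for discussion; names are simplified and assumed):
```
def check_fp16_device(fp16, device_type):
    # AMP/GradScaler only works on CUDA, so fail fast on any other device type
    if fp16 and device_type != "cuda":
        raise ValueError(
            f"--fp16 (AMP) is only supported on CUDA devices, but the current device is '{device_type}'."
        )

check_fp16_device(fp16=False, device_type="cpu")   # fine
# check_fp16_device(fp16=True, device_type="cpu")  # would raise ValueError
```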
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7869/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7869",
"html_url": "https://github.com/huggingface/transformers/pull/7869",
"diff_url": "https://github.com/huggingface/transformers/pull/7869.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7869.patch",
"merged_at": 1603137571000
} |
https://api.github.com/repos/huggingface/transformers/issues/7868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7868/comments | https://api.github.com/repos/huggingface/transformers/issues/7868/events | https://github.com/huggingface/transformers/pull/7868 | 723,742,957 | MDExOlB1bGxSZXF1ZXN0NTA1MjcwOTc2 | 7,868 | Julibert model card | {
"login": "jordimas",
"id": 309265,
"node_id": "MDQ6VXNlcjMwOTI2NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/309265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jordimas",
"html_url": "https://github.com/jordimas",
"followers_url": "https://api.github.com/users/jordimas/followers",
"following_url": "https://api.github.com/users/jordimas/following{/other_user}",
"gists_url": "https://api.github.com/users/jordimas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jordimas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jordimas/subscriptions",
"organizations_url": "https://api.github.com/users/jordimas/orgs",
"repos_url": "https://api.github.com/users/jordimas/repos",
"events_url": "https://api.github.com/users/jordimas/events{/privacy}",
"received_events_url": "https://api.github.com/users/jordimas/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | Model card for Julibert Catalan model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7868/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7868",
"html_url": "https://github.com/huggingface/transformers/pull/7868",
"diff_url": "https://github.com/huggingface/transformers/pull/7868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7868.patch",
"merged_at": 1603104653000
} |
https://api.github.com/repos/huggingface/transformers/issues/7867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7867/comments | https://api.github.com/repos/huggingface/transformers/issues/7867/events | https://github.com/huggingface/transformers/pull/7867 | 723,719,220 | MDExOlB1bGxSZXF1ZXN0NTA1MjUzNzQ4 | 7,867 | Add AI-SOCO models | {
"login": "AliOsm",
"id": 7662492,
"node_id": "MDQ6VXNlcjc2NjI0OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7662492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AliOsm",
"html_url": "https://github.com/AliOsm",
"followers_url": "https://api.github.com/users/AliOsm/followers",
"following_url": "https://api.github.com/users/AliOsm/following{/other_user}",
"gists_url": "https://api.github.com/users/AliOsm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AliOsm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AliOsm/subscriptions",
"organizations_url": "https://api.github.com/users/AliOsm/orgs",
"repos_url": "https://api.github.com/users/AliOsm/repos",
"events_url": "https://api.github.com/users/AliOsm/events{/privacy}",
"received_events_url": "https://api.github.com/users/AliOsm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Adding AI-SOCO language and classification models.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@julien-c. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7867/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7867",
"html_url": "https://github.com/huggingface/transformers/pull/7867",
"diff_url": "https://github.com/huggingface/transformers/pull/7867.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7867.patch",
"merged_at": 1603286684000
} |
https://api.github.com/repos/huggingface/transformers/issues/7866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7866/comments | https://api.github.com/repos/huggingface/transformers/issues/7866/events | https://github.com/huggingface/transformers/issues/7866 | 723,712,682 | MDU6SXNzdWU3MjM3MTI2ODI= | 7,866 | AttributeError: module 'tensorflow_core.keras.activations' has no attribute 'swish' | {
"login": "Mandule",
"id": 44904099,
"node_id": "MDQ6VXNlcjQ0OTA0MDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/44904099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mandule",
"html_url": "https://github.com/Mandule",
"followers_url": "https://api.github.com/users/Mandule/followers",
"following_url": "https://api.github.com/users/Mandule/following{/other_user}",
"gists_url": "https://api.github.com/users/Mandule/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mandule/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mandule/subscriptions",
"organizations_url": "https://api.github.com/users/Mandule/orgs",
"repos_url": "https://api.github.com/users/Mandule/repos",
"events_url": "https://api.github.com/users/Mandule/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mandule/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> # ❓ Questions & Help\r\n> ## Details\r\n> tensorflow 2.0.0\r\n> after \"pip install transformers\", i import transformers, meet this error.\r\n> \r\n> **A link to original question on the forum/Stack Overflow**:\r\n\r\n兄弟,升级tf到2.3",
"Hi, you should install TensorFlow 2.3 as mentioned by @wmathor "
] | 1,602 | 1,603 | 1,603 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
tensorflow 2.0.0
after "pip install transformers", i import transformers, meet this error.
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7866/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7865/comments | https://api.github.com/repos/huggingface/transformers/issues/7865/events | https://github.com/huggingface/transformers/issues/7865 | 723,680,517 | MDU6SXNzdWU3MjM2ODA1MTc= | 7,865 | labels and decoder_input_ids | {
"login": "AI678",
"id": 63541083,
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI678",
"html_url": "https://github.com/AI678",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"repos_url": "https://api.github.com/users/AI678/repos",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also get confused by the notation and would appreciate some clarifications.\r\nThank you.",
"@AI678 I think I got it. Do not take my words as absolute truth since I am new to the subject.\r\nIf you are talking about a full Transformer architecture (e.g. BART, T5, PEGASUS), the labels are the token ids to which you compare the logits generated by the Decoder in order to compute the cross-entropy loss. This should be the only input necessary during training or fine tuning.\r\nOn the other hand, the decoder_input_ids have the exact same structure of the labels, but are shifted one position to the right in order to add the \\<start of sentence\\> token. The decoder_input_ids are then passed to the decoder (along with the mask, which masks all unseen tokens!!). The first input of the decoder will be the \\<start of sentence\\> token so that it starts generating the sentence.\r\n\r\nHope this help, waiting for somebody to acknowledge my answer.",
"Hi, the glossary has been completed with these terms, you can check it out [here](https://huggingface.co/transformers/glossary.html), let me know if it's still not clear enough."
] | 1,602 | 1,603 | 1,603 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
What is the difference between `labels` and `decoder_input_ids` for `EncoderDecoderModel` in the text summarization task?
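For context, a rough sketch of how I understand the two arguments relate; the checkpoint names are placeholders and the exact forward signature may vary by version, so treat this as an illustration rather than the exact API:

```python
# Rough sketch only: `labels` are the target token ids the decoder logits are
# scored against; `decoder_input_ids` are those same ids shifted one position to
# the right with a start token prepended, so the decoder predicts token t from
# tokens < t.
import torch
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

article = "A long article that should be summarized ..."
summary = "A short summary."

input_ids = tokenizer(article, return_tensors="pt").input_ids
labels = tokenizer(summary, return_tensors="pt").input_ids

start = torch.full((labels.shape[0], 1), tokenizer.cls_token_id, dtype=labels.dtype)
decoder_input_ids = torch.cat([start, labels[:, :-1]], dim=-1)  # shifted right

outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels, return_dict=True)
loss = outputs.loss  # cross-entropy between decoder logits and `labels`
```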
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7865/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/7865/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7864/comments | https://api.github.com/repos/huggingface/transformers/issues/7864/events | https://github.com/huggingface/transformers/pull/7864 | 723,625,184 | MDExOlB1bGxSZXF1ZXN0NTA1MTc4MDI2 | 7,864 | Create model card for pre-trained NLI models. | {
"login": "easonnie",
"id": 11016329,
"node_id": "MDQ6VXNlcjExMDE2MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/easonnie",
"html_url": "https://github.com/easonnie",
"followers_url": "https://api.github.com/users/easonnie/followers",
"following_url": "https://api.github.com/users/easonnie/following{/other_user}",
"gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easonnie/subscriptions",
"organizations_url": "https://api.github.com/users/easonnie/orgs",
"repos_url": "https://api.github.com/users/easonnie/repos",
"events_url": "https://api.github.com/users/easonnie/events{/privacy}",
"received_events_url": "https://api.github.com/users/easonnie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Hi @julien-c, thank you so much for the pointer. I have added the dataset identifier and license.",
"Thanks! Your model is now linked from https://huggingface.co/datasets/anli (and the other datasets)"
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7864/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7864",
"html_url": "https://github.com/huggingface/transformers/pull/7864",
"diff_url": "https://github.com/huggingface/transformers/pull/7864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7864.patch",
"merged_at": 1603523767000
} |
https://api.github.com/repos/huggingface/transformers/issues/7863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7863/comments | https://api.github.com/repos/huggingface/transformers/issues/7863/events | https://github.com/huggingface/transformers/pull/7863 | 723,620,987 | MDExOlB1bGxSZXF1ZXN0NTA1MTc0ODgy | 7,863 | [testing] rename skip targets + docs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I resolved conflicts - it's good to go now."
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | As discussed in https://github.com/huggingface/transformers/issues/6349 this PR brings consistency to skip decorators, so now we will have:
* `require_torch` - this test will run only under torch
* `require_torch_gpu` - as `require_torch` plus requires at least 1 GPU
* `require_torch_multigpu` - as `require_torch` plus requires at least 2 GPUs
* `require_torch_non_multigpu` - as `require_torch` plus requires 0 or 1 GPUs
* `require_torch_tpu` - as `require_torch` plus requires at least 1 TPU
Documentation updated and expanded.
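For illustration, a minimal hypothetical test showing how the renamed decorators are meant to be used (the test bodies are placeholders):

```python
# Hypothetical usage sketch of the renamed decorators; the tests are made up.
import unittest

from transformers.testing_utils import require_torch, require_torch_gpu, require_torch_multigpu


class ExampleTest(unittest.TestCase):
    @require_torch
    def test_runs_with_torch(self):
        pass  # runs whenever torch is installed

    @require_torch_gpu
    def test_needs_one_gpu(self):
        pass  # skipped unless at least 1 GPU is available

    @require_torch_multigpu
    def test_needs_two_gpus(self):
        pass  # skipped unless at least 2 GPUs are available
```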
The main change was done by running:
```
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|require_multigpu|require_torch_multigpu|g' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|require_torch_and_cuda|require_torch_gpu|g' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|require_non_multigpu|require_torch_non_multigpu|g' {} \;
```
Fixes: #6349
@LysandreJik, @sgugger, @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7863/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7863",
"html_url": "https://github.com/huggingface/transformers/pull/7863",
"diff_url": "https://github.com/huggingface/transformers/pull/7863.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7863.patch",
"merged_at": 1603183153000
} |
https://api.github.com/repos/huggingface/transformers/issues/7862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7862/comments | https://api.github.com/repos/huggingface/transformers/issues/7862/events | https://github.com/huggingface/transformers/issues/7862 | 723,618,699 | MDU6SXNzdWU3MjM2MTg2OTk= | 7,862 | unshared Albert | {
"login": "guowenying111",
"id": 47688495,
"node_id": "MDQ6VXNlcjQ3Njg4NDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/47688495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guowenying111",
"html_url": "https://github.com/guowenying111",
"followers_url": "https://api.github.com/users/guowenying111/followers",
"following_url": "https://api.github.com/users/guowenying111/following{/other_user}",
"gists_url": "https://api.github.com/users/guowenying111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guowenying111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guowenying111/subscriptions",
"organizations_url": "https://api.github.com/users/guowenying111/orgs",
"repos_url": "https://api.github.com/users/guowenying111/repos",
"events_url": "https://api.github.com/users/guowenying111/events{/privacy}",
"received_events_url": "https://api.github.com/users/guowenying111/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello! The layers are shared in ALBERT because even though there are `config.num_hidden_layers=n`, there's usually `config.num_hidden_groups = 1`. Change that value to `n` to have `n` different layers.",
"Thanks for your reply, I have another question about that\r\nHow can I load the AlbertModel parameter(in transformer block) to unshared Albert, When I load just \r\n\r\n###_IncompatibleKeys(missing_keys=['embeddings.position_ids'], unexpected_keys=[])###\r\nwhether is it wrong? can you tell me how I can load parameters correctly?\r\nlooking forward to your reply",
"Could you show me the full code used that generated this error? Or can you give an example so I can reproduce it?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,609 | 1,609 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
How can I remove ALBERT's parameter sharing during fine-tuning (i.e. use ALBERT like BERT, with no layers tied)?
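For reference, this is the kind of thing I am after; a rough sketch of my assumption, based on the `num_hidden_groups` config option, and I am not sure it is correct:

```python
# Rough sketch of my assumption (not verified): give each layer its own
# parameter group so nothing is shared, then the model behaves like BERT.
from transformers import AlbertConfig, AlbertModel

config = AlbertConfig.from_pretrained("albert-base-v2")
config.num_hidden_groups = config.num_hidden_layers  # one group per layer -> no sharing

model = AlbertModel(config)  # layer weights are now separate (and randomly initialized)
```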
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7862/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7861/comments | https://api.github.com/repos/huggingface/transformers/issues/7861/events | https://github.com/huggingface/transformers/pull/7861 | 723,616,634 | MDExOlB1bGxSZXF1ZXN0NTA1MTcxNjkw | 7,861 | [testing] remove USE_CUDA | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | As discussed at https://github.com/huggingface/transformers/issues/6349 this PR removes `USE_CUDA`.
I don't think I needed to add `CUDA_VISIBLE_DEVICES=""` to `.circleci` config files since those CIs have no gpus anyway.
@LysandreJik, @sgugger, @sshleifer
fixes: #6349 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7861/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7861",
"html_url": "https://github.com/huggingface/transformers/pull/7861",
"diff_url": "https://github.com/huggingface/transformers/pull/7861.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7861.patch",
"merged_at": 1603105715000
} |
https://api.github.com/repos/huggingface/transformers/issues/7860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7860/comments | https://api.github.com/repos/huggingface/transformers/issues/7860/events | https://github.com/huggingface/transformers/pull/7860 | 723,611,715 | MDExOlB1bGxSZXF1ZXN0NTA1MTY3OTYz | 7,860 | [fsmt test] basic config test with online model + super tiny model | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Stas, thanks, #7659 will fix this (we will now require at least one example checkpoint for each tokenizer and we test it automatically).",
"@thomwolf, I'm not certain why you closed this. This is a tokenizer test that is needed - the issue caught in examples was just a flag that there was a missing test in the normal tests.\r\n\r\nActually, not only it's needed, I will have to expand this test to verify that it **doesn't** get the hardcoded default values, but fetches the correct values from `tokenizer_config.json`. And to change the default values to be different from `TINY_FSMT`, since otherwise it won't be testing the right thing. \r\n\r\nFeel free to add it as part of https://github.com/huggingface/transformers/pull/7659 but please make sure you used different from `TINY_FSMT` hardcoded defaults.\r\n\r\nI hope this makes sense. ",
"Hmm, but you copied `TINY_FSMT` in https://github.com/huggingface/transformers/commit/1885ca7f29ffa5373ea848edad8efab02809c268, how will then the tests check it can fetch that data if this is now hardcoded? I am not following this. Do I need to create `TINY_FSMT2` with different values?\r\n\r\nI haven't read the new code in depth, but my gut feeling is that the defaults may mask a problem.",
"Hi stas, I'll let you read the new code and then we can have a look together.\r\n\r\nThe basic idea is that we now require a full and working checkpoint for the tokenizers to be fully tested in various conditions and the slow vs. fast compared.\r\n\r\nThe question of testing that tokenizers load and use `tokenizer_config.json` is another question and we should indeed address it in a subsequent PR if it's not addressed already indeed.",
"That works. \r\n\r\nBut please re-open this PR, since we need it anyway. I will add more changes to it after your big PR merge to ensure that the loading of the tokenizer is properly tested.",
"This PR is complete now."
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR does:
* [x] Seeing the issue of https://github.com/huggingface/transformers/pull/7659 I realized fsmt didn't have a basic non-slow test that loads the tokenizer via online model files. Luckily a totally unrelated examples test caught this issue in that PR, so I'm adding a very simple quick test in the main test suite so that it runs on the normal CI (see the sketch below).
* [x] While I was building this test I needed a new tiny model, so I refined the script https://github.com/huggingface/transformers/blob/master/scripts/fsmt/fsmt-make-tiny-model.py and made a new model that is 50 times smaller, so we are now at 60KB instead of 3MB.
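A rough sketch of the kind of quick check described above; the model id below is a placeholder for the tiny online checkpoint, not necessarily the exact one:

```python
# Placeholder sketch; "stas/tiny-wmt19-en-de" stands in for the tiny online checkpoint.
from transformers import FSMTConfig, FSMTTokenizer

model_id = "stas/tiny-wmt19-en-de"

config = FSMTConfig.from_pretrained(model_id)
tokenizer = FSMTTokenizer.from_pretrained(model_id)

enc = tokenizer("Machine learning is great", return_tensors="pt")
assert enc.input_ids.shape[-1] > 0
```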
@LysandreJik, @sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7860/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7860",
"html_url": "https://github.com/huggingface/transformers/pull/7860",
"diff_url": "https://github.com/huggingface/transformers/pull/7860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7860.patch",
"merged_at": 1603372495000
} |
https://api.github.com/repos/huggingface/transformers/issues/7859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7859/comments | https://api.github.com/repos/huggingface/transformers/issues/7859/events | https://github.com/huggingface/transformers/pull/7859 | 723,517,249 | MDExOlB1bGxSZXF1ZXN0NTA1MDkwNzM4 | 7,859 | [s2s testing] turn all to unittests, use auto-delete temp dirs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sshleifer, one thing I wasn't sure about - since now the temp dir creation/destruction is automated and it's easy to hardwire those by just passing the explicit path `self.get_auto_remove_tmp_dir(\"/tmp/favoritedir\")` when debugging - I removed the prefices that were used in `tempfile.mkdtemp`. Do you still want to have the option of having a specific prefix? With prefix you still have to hunt for the unique ID which changes on every test rerun. This is a much simpler option, IMHO.\r\n\r\nTo me it sounds it'd be useful during a debug to have one common path prefix for all tmp dirs of the same test and then add suffices instead for the different specific tmp dir of the test. ",
"Whatever you think is best! ",
"I think the prefixes are no longer needed. So it's good to go. "
] | 1,602 | 1,603 | 1,602 | CONTRIBUTOR | null | Currently in `examples/seq2seq/test_seq2seq_examples.py` many tmp dirs remain uncleaned by the end of the test:
This PR does:
* Now that we have `parameterized`, convert all tests into unittests
* Now that we have unittests, we can use auto-delete tmp dirs easily using `get_auto_remove_tmp_dir()`
* convert 4 test files
* moved out the broken multi_gpu test - working on it separately here https://github.com/huggingface/transformers/pull/7281 - will re-add when it's complete.
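A minimal sketch of the resulting pattern, assuming the `TestCasePlus` helper in `transformers.testing_utils` (the test body is a placeholder):

```python
# Sketch only; the test body is a placeholder.
from transformers.testing_utils import TestCasePlus


class ExamplesTests(TestCasePlus):
    def test_run_something(self):
        tmp_dir = self.get_auto_remove_tmp_dir()  # deleted automatically after the test
        # while debugging, a fixed path can be hardwired instead:
        # tmp_dir = self.get_auto_remove_tmp_dir("/tmp/favorite_dir")
        ...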
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7859/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7859",
"html_url": "https://github.com/huggingface/transformers/pull/7859",
"diff_url": "https://github.com/huggingface/transformers/pull/7859.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7859.patch",
"merged_at": 1602959602000
} |
https://api.github.com/repos/huggingface/transformers/issues/7858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7858/comments | https://api.github.com/repos/huggingface/transformers/issues/7858/events | https://github.com/huggingface/transformers/pull/7858 | 723,510,842 | MDExOlB1bGxSZXF1ZXN0NTA1MDg1NDk4 | 7,858 | Trainer with Iterable Dataset | {
"login": "j-rossi-nl",
"id": 48321582,
"node_id": "MDQ6VXNlcjQ4MzIxNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/48321582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j-rossi-nl",
"html_url": "https://github.com/j-rossi-nl",
"followers_url": "https://api.github.com/users/j-rossi-nl/followers",
"following_url": "https://api.github.com/users/j-rossi-nl/following{/other_user}",
"gists_url": "https://api.github.com/users/j-rossi-nl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j-rossi-nl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j-rossi-nl/subscriptions",
"organizations_url": "https://api.github.com/users/j-rossi-nl/orgs",
"repos_url": "https://api.github.com/users/j-rossi-nl/repos",
"events_url": "https://api.github.com/users/j-rossi-nl/events{/privacy}",
"received_events_url": "https://api.github.com/users/j-rossi-nl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Now investigating this FAILED test.\r\nSo far, it is PASSED on 3 different environments I have tried.\r\n`FAILED tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_add_special_tokens` from `run_tests_torch_and_tf`",
"@sgugger \r\n\r\nCircleCI `run_tests_torch_and_tf` is FAILED, because \r\n```\r\nFAILED tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_add_special_tokens\r\n==== 1 failed, 2964 passed, 493 skipped, 386 warnings in 925.76s (0:15:25) =====\r\n```\r\nOn 2 different environments, I have a PASSED.\r\nI reproduced the CI run step by step, and it is still PASSED:\r\n```\r\nconda create -n trans_test_tf_torch python=3.6 \r\nconda activate trans_test_tf_torch\r\ngit clone [email protected]:huggingface/transformers.git\r\ncd transformers\r\ngit fetch origin pull/7858/head:pull/7858\r\ngit pull origin pull/7858 \r\ngit checkout pull/7858\r\npip install --upgrade pip \r\npip install git+https://github.com/huggingface/datasets \r\npip install .[sklearn,tf-cpu,torch,testing] \r\npip install codecov pytest-cov \r\n```\r\n\r\nResult:\r\n`python -m pytest -n auto --dist=loadfile -s -v tests/test_tokenization_fast.py --cov`\r\n`91 passed, 11 warnings in 1851.05s (0:30:51)`\r\n\r\nFor all the tests:\r\n`python -m pytest -n auto --dist=loadfile -s -v ./tests/ --cov `\r\n`2965 passed, 493 skipped, 385 warnings in 3124.66s (0:52:04)`",
"Thanks for your contribution!"
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fixes #5990
Follows #5995 (closed because stale).
Merges all commits from master.
* accommodates an iterable dataset without a predefined length
* supported as one use case: provide `max_steps` and NO `num_epochs` (see the sketch below)
* Is a merge of master and PR 5995
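A minimal sketch of the supported use case; `model` and the data stream below are hypothetical placeholders:

```python
# Placeholder sketch: `model` and `stream_of_tokenized_examples` are hypothetical.
from torch.utils.data import IterableDataset
from transformers import Trainer, TrainingArguments


class StreamingDataset(IterableDataset):
    def __iter__(self):
        yield from stream_of_tokenized_examples()  # hypothetical generator


args = TrainingArguments(output_dir="out", max_steps=10_000)  # no num_train_epochs needed
trainer = Trainer(model=model, args=args, train_dataset=StreamingDataset())
trainer.train()
```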
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7858/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7858",
"html_url": "https://github.com/huggingface/transformers/pull/7858",
"diff_url": "https://github.com/huggingface/transformers/pull/7858.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7858.patch",
"merged_at": 1603123059000
} |
https://api.github.com/repos/huggingface/transformers/issues/7857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7857/comments | https://api.github.com/repos/huggingface/transformers/issues/7857/events | https://github.com/huggingface/transformers/pull/7857 | 723,433,156 | MDExOlB1bGxSZXF1ZXN0NTA1MDE5NTk0 | 7,857 | Create README.md | {
"login": "hardyqr",
"id": 18531146,
"node_id": "MDQ6VXNlcjE4NTMxMTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/18531146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hardyqr",
"html_url": "https://github.com/hardyqr",
"followers_url": "https://api.github.com/users/hardyqr/followers",
"following_url": "https://api.github.com/users/hardyqr/following{/other_user}",
"gists_url": "https://api.github.com/users/hardyqr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hardyqr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hardyqr/subscriptions",
"organizations_url": "https://api.github.com/users/hardyqr/orgs",
"repos_url": "https://api.github.com/users/hardyqr/repos",
"events_url": "https://api.github.com/users/hardyqr/events{/privacy}",
"received_events_url": "https://api.github.com/users/hardyqr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null |
# What does this PR do?
Create model card for `cambridgeltl/BioRedditBERT-uncased`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7857/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7857",
"html_url": "https://github.com/huggingface/transformers/pull/7857",
"diff_url": "https://github.com/huggingface/transformers/pull/7857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7857.patch",
"merged_at": 1603284101000
} |
https://api.github.com/repos/huggingface/transformers/issues/7856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7856/comments | https://api.github.com/repos/huggingface/transformers/issues/7856/events | https://github.com/huggingface/transformers/pull/7856 | 723,413,761 | MDExOlB1bGxSZXF1ZXN0NTA1MDAzODIz | 7,856 | Remove duplicated mish activation function | {
"login": "Razcle",
"id": 10298740,
"node_id": "MDQ6VXNlcjEwMjk4NzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/10298740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Razcle",
"html_url": "https://github.com/Razcle",
"followers_url": "https://api.github.com/users/Razcle/followers",
"following_url": "https://api.github.com/users/Razcle/following{/other_user}",
"gists_url": "https://api.github.com/users/Razcle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Razcle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Razcle/subscriptions",
"organizations_url": "https://api.github.com/users/Razcle/orgs",
"repos_url": "https://api.github.com/users/Razcle/repos",
"events_url": "https://api.github.com/users/Razcle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Razcle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
The mish activation function was repeated in the file. I removed the duplicate.
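For reference, the function in question is, up to implementation details, the standard mish activation:

```python
import torch
import torch.nn.functional as F


def mish(x: torch.Tensor) -> torch.Tensor:
    # mish(x) = x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))
```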
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7856/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7856",
"html_url": "https://github.com/huggingface/transformers/pull/7856",
"diff_url": "https://github.com/huggingface/transformers/pull/7856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7856.patch",
"merged_at": 1602970313000
} |
https://api.github.com/repos/huggingface/transformers/issues/7855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7855/comments | https://api.github.com/repos/huggingface/transformers/issues/7855/events | https://github.com/huggingface/transformers/pull/7855 | 723,397,109 | MDExOlB1bGxSZXF1ZXN0NTA0OTg5Njgz | 7,855 | Model card for German BERT fine-tuned for LER/NER | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7855/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7855",
"html_url": "https://github.com/huggingface/transformers/pull/7855",
"diff_url": "https://github.com/huggingface/transformers/pull/7855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7855.patch",
"merged_at": 1603283501000
} |
https://api.github.com/repos/huggingface/transformers/issues/7854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7854/comments | https://api.github.com/repos/huggingface/transformers/issues/7854/events | https://github.com/huggingface/transformers/pull/7854 | 723,394,539 | MDExOlB1bGxSZXF1ZXN0NTA0OTg3NjA2 | 7,854 | Fixing issue #7810 | {
"login": "lucadiliello",
"id": 23355969,
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucadiliello",
"html_url": "https://github.com/lucadiliello",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,602 | 1,614 | 1,614 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7810
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik you suggested to open this PR here: https://github.com/huggingface/transformers/issues/7810
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7854/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7854",
"html_url": "https://github.com/huggingface/transformers/pull/7854",
"diff_url": "https://github.com/huggingface/transformers/pull/7854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7854.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7853/comments | https://api.github.com/repos/huggingface/transformers/issues/7853/events | https://github.com/huggingface/transformers/issues/7853 | 723,293,229 | MDU6SXNzdWU3MjMyOTMyMjk= | 7,853 | SequenceSummary class in modeling_utils.py | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | Hello,
I have a question about the documentation strings provided for the forward function of the SequenceSummary class from modeling_utils.py:
https://github.com/huggingface/transformers/blob/dc552b9b7025ea9c38717f30ad3d69c2a972049d/src/transformers/modeling_utils.py#L1484
So when `cls_index` is not passed as an argument in the SequenceSummary() call, is the last token of the sequence used for the classification task? The sentence describing this in the docstring reads somewhat awkwardly...
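For reference, a minimal sketch of how `SequenceSummary` is typically invoked, assuming the behaviour described in that docstring (the config values and tensor shapes below are illustrative rather than taken from any particular model):
```python
import torch
from transformers import GPT2Config
from transformers.modeling_utils import SequenceSummary

# GPT-2-style defaults use summary_type="cls_index"; the projection is disabled here
# so the output keeps the hidden size
config = GPT2Config(summary_type="cls_index", summary_use_proj=False)
summary = SequenceSummary(config)

hidden_states = torch.randn(2, 10, config.n_embd)  # (batch, seq_len, hidden)

# cls_index is omitted: per the docstring, the last token of each sequence should
# then be used as the classification token
pooled = summary(hidden_states)
print(pooled.shape)  # expected: torch.Size([2, 768]) under these assumptions
```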
Thanks, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7853/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7852/comments | https://api.github.com/repos/huggingface/transformers/issues/7852/events | https://github.com/huggingface/transformers/pull/7852 | 723,202,277 | MDExOlB1bGxSZXF1ZXN0NTA0ODI4NTk3 | 7,852 | Upgrade PyTorch Lightning to 1.0.2 | {
"login": "SeanNaren",
"id": 6707363,
"node_id": "MDQ6VXNlcjY3MDczNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6707363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeanNaren",
"html_url": "https://github.com/SeanNaren",
"followers_url": "https://api.github.com/users/SeanNaren/followers",
"following_url": "https://api.github.com/users/SeanNaren/following{/other_user}",
"gists_url": "https://api.github.com/users/SeanNaren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeanNaren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeanNaren/subscriptions",
"organizations_url": "https://api.github.com/users/SeanNaren/orgs",
"repos_url": "https://api.github.com/users/SeanNaren/repos",
"events_url": "https://api.github.com/users/SeanNaren/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeanNaren/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer @julien-c \r\nmind helping land this?\r\n\r\nthis upgrades HF to use lightning 1.0+ \r\n\r\nthe 1.0 version has the stable API along with major fixes (many of the open issues with lightning and HF are fixed here). \r\n\r\nFinally, a few notes about the changes with the earlier versions (earliest versions) of lightning:\r\n\r\n1. models are decoupled from data (but still supported for research purposes). \r\n2. lightningModules for the HF case are likely better used to pass in models to be trained in a specific way or even easier defined as a lightningmodule to modifying and subclassing can be easier. \r\n3. all LMs now offer first class support for exporting to onnx and torchscript.\r\n\r\nthanks for the help!",
"Thanks for the contribution!\r\nI updated (pushed to) your branch to include some new multi-gpu tests that are on master from @stas00.\r\n\r\nFor hf tests marked `@slow`, circleci won't run them, so you should run:\r\n\r\n```\r\nRUN_SLOW=1 pytest/examples/seq2seq/finetune.py\r\nRUN_SLOW=1 pytest/examples/seq2seq/test_bash_script.py # I added --do_predict to this to document another issue. This is failing on master, so feel free to remove it if you can't get it working\r\n```\r\nOn a machine with >1 GPUs, run:\r\n\r\n```bash\r\nRUN_SLOW=1 pytest/examples/seq2seq/test_seq2seq_examples_multi_gpu.py\r\n```\r\n\r\n\r\nWe have had some issues getting `trainer.test` working in DDP (@nateraw thinks our usage of `hparams` is the culprit). I can't replicate the issue with the PL `BoringModel`, but you may run into it now. Would love to know if you see anything, but if it's unfixable, we can remove `--do_predict` from test_bash_script.py.\r\n\r\n\r\n",
"Thanks @sshleifer will investigate :)",
"I've made a few changes in the code and the tests are passing!\r\n\r\nRegarding the barrier I don't think it should stay like this, but its necessary due to the test logic after picking the best model. Current functionality means testing happens across all GPUs, in sync.\r\n\r\nThe save function in PL probably should add a barrier to ensure all processes are in sync when saving to prevent further issues, and I can open a followup issue to handle this (cc @williamFalcon)\r\n\r\n@sshleifer could you verify things work from your end as well if poss?",
"@sshleifer @julien-c any chance we can land this? everything is passing and things are fixed! \r\n\r\n:)",
"@SeanNaren is\r\n> Running into other failing tests, this might be the pytorch 1.7 upgrade...\r\n\r\nresolved?",
"cc @stas00, @patil-suraj for awareness.",
"@sshleifer yeah just needed to merge master",
"Thanks @SeanNaren for your work on this. It is very much appreciated!",
"I propose an easy solution for such changes - in addition to updating the requirement file, we could add a run-time version check at the top of `lightning_base.py`, which gets adjusted when a breaking change is done like in this PR.",
"Incidentally this upgrade solves the problem I have been battling since last night.\r\n\r\na test using PL w/ 1 gpu, followed by a test using 2 gpus was failing with (same pytest process):\r\n\r\n```\r\npytorch_lightning.utilities.exceptions.MisconfigurationException:\r\n You requested GPUs: [0, 1]\r\n But your machine only has: [0]\r\n```\r\nafter this update, it's gone! Clearly it was some bug in PL that kept some stale state.\r\n",
"Thank you guys for this awesome repo :)\r\n\r\n@stas00 that's a neat solution, we're hoping to assist out on doing some small refactors to the lightning code to make it simpler and easier whilst being less intrusive. More to follow in time and we'll keep you guys in the loop",
"How hard would it be to get the validation metrics sync'd/averaged between processes?\r\n\r\nI posted an issue a while back on your repo where checkpoint saving was not making decisions based on averaged metrics, and I know you guys have made some progress there.\r\n\r\nI'm specifically interested in `examples/seq2seq/finetune.py` which uses [this function](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/callbacks.py#L87) to build a `ModelCheckpoint` callback.\r\n\r\nEven if it were just a simple average across processes of the relevant metric this would be very valuable to us.",
"> How hard would it be to get the validation metrics sync'd/averaged between processes?\r\n\r\nThe new Metrics API in Lightning 1.0 supports distributed syncing out of the box (doesn't even require lightning to work). Any metrics you need can definitely be [implemented](https://pytorch-lightning.readthedocs.io/en/latest/metrics.html#implementing-a-metric) fairly easily.\r\n",
"Are there any documented examples where `dist_reduce_fx=\"mean\"`?",
"@sshleifer Right now I can't find any of our current metrics using ``dist_reduce_fx=\"mean\"``. Is there a specific issue you are running into?",
"Since I already have code to compute all the metrics I want, I am trying to make a class that just gathers the computed metrics from all processes and averages whatever it is sent. (Maybe in future I will try to overweight ranks that processed more data.)\r\nMy issue is that the type of value sent to update has to be coerced to a `torch.tensor` on the correct device, which I have not figured out yet. My attempt at\r\nhttps://github.com/huggingface/transformers/pull/8269\r\n\r\nraises `RuntimeError: Tensors must be CUDA and dense` during the `allgather` step.\r\n\r\n(pl 1.0.4, torch 1.6)",
"Will take a look @sshleifer :)"
] | 1,602 | 1,604 | 1,603 | CONTRIBUTOR | null | Updates Pytorch Lightning to 1.0.2, and moves early stopping callback into the general callbacks due to deprecation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7852/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7852",
"html_url": "https://github.com/huggingface/transformers/pull/7852",
"diff_url": "https://github.com/huggingface/transformers/pull/7852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7852.patch",
"merged_at": 1603911555000
} |
https://api.github.com/repos/huggingface/transformers/issues/7851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7851/comments | https://api.github.com/repos/huggingface/transformers/issues/7851/events | https://github.com/huggingface/transformers/pull/7851 | 723,198,276 | MDExOlB1bGxSZXF1ZXN0NTA0ODI1Mzcx | 7,851 | Trainer accepts iterable datasets | {
"login": "j-rossi-nl",
"id": 48321582,
"node_id": "MDQ6VXNlcjQ4MzIxNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/48321582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j-rossi-nl",
"html_url": "https://github.com/j-rossi-nl",
"followers_url": "https://api.github.com/users/j-rossi-nl/followers",
"following_url": "https://api.github.com/users/j-rossi-nl/following{/other_user}",
"gists_url": "https://api.github.com/users/j-rossi-nl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j-rossi-nl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j-rossi-nl/subscriptions",
"organizations_url": "https://api.github.com/users/j-rossi-nl/orgs",
"repos_url": "https://api.github.com/users/j-rossi-nl/repos",
"events_url": "https://api.github.com/users/j-rossi-nl/events{/privacy}",
"received_events_url": "https://api.github.com/users/j-rossi-nl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there, thanks for the PR! Unfortunately it's not based on current master, so we can't review properly as it suggests changes from code that does not exist anymore.",
"Some merging work to be done."
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | Fixes #5990
Follows PR #5995 (got closed for being stale...)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@sgugger @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7851/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7851",
"html_url": "https://github.com/huggingface/transformers/pull/7851",
"diff_url": "https://github.com/huggingface/transformers/pull/7851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7851.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7850/comments | https://api.github.com/repos/huggingface/transformers/issues/7850/events | https://github.com/huggingface/transformers/issues/7850 | 723,168,609 | MDU6SXNzdWU3MjMxNjg2MDk= | 7,850 | OperatorNotAllowedInGraphError in dbmdz/bert-base-italian-cased for Token Classification | {
"login": "fra-luc",
"id": 44058367,
"node_id": "MDQ6VXNlcjQ0MDU4MzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/44058367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fra-luc",
"html_url": "https://github.com/fra-luc",
"followers_url": "https://api.github.com/users/fra-luc/followers",
"following_url": "https://api.github.com/users/fra-luc/following{/other_user}",
"gists_url": "https://api.github.com/users/fra-luc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fra-luc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fra-luc/subscriptions",
"organizations_url": "https://api.github.com/users/fra-luc/orgs",
"repos_url": "https://api.github.com/users/fra-luc/repos",
"events_url": "https://api.github.com/users/fra-luc/events{/privacy}",
"received_events_url": "https://api.github.com/users/fra-luc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This might be of interest to @jplu ",
"Hello! \n\nFor now you cannot use the `.fit()` method to train a TF model. This will be possible in a next release.\n\nIf you want to train a NER model please use the TF trainer.",
"Thanks for your answer, I will try TF Trainer",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-1028-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@LysandreJik @stefan-it
## Information
Model I am using (Bert, XLNet ...):
Bert (dbmdz/bert-base-italian-cased)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I am following the tutorial in https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities in order to fine-tune a custom Named Entity Recognition NN.
1. Load the tokenizer and tokenize the data
```
tokenizer = BertTokenizerFast.from_pretrained('dbmdz/bert-base-italian-cased')
```
following the steps from the [tutorial](https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities) to obtain `train_dataset` and `val_dataset`.
2. Load the Token Classifier with pretrained weights from the model
```
from transformers import TFBertForTokenClassification
model = TFBertForTokenClassification.from_pretrained('dbmdz/bert-base-italian-cased', num_labels=len(unique_tags))
```
As a side note, this gives the following warning, which I understand is to be expected since the loaded model was not fine-tuned for token classification:
```
Some weights of the model checkpoint at dbmdz/bert-base-italian-cased were not used when initializing TFBertForTokenClassification: ['mlm___cls', 'nsp___cls']
- This IS expected if you are initializing TFBertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of TFBertForTokenClassification were not initialized from the model checkpoint at dbmdz/bert-base-italian-cased and are newly initialized: ['dropout_37', 'classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
3. Train
```
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss)
model.fit(train_dataset.shuffle(100).batch(16), epochs=3, batch_size=16)
```
This raises the following error:
```
Epoch 1/3
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError Traceback (most recent call last)
<ipython-input-23-5eb528629965> in <module>
1 optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
2 model.compile(optimizer=optimizer, loss=model.compute_loss)
----> 3 model.fit(train_dataset.shuffle(100).batch(16), epochs=3, batch_size=16)
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside `run_distribute_coordinator` already.
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
851 batch_size=batch_size):
852 callbacks.on_train_batch_begin(step)
--> 853 tmp_logs = train_function(iterator)
854 # Catch OutOfRangeError for Datasets of unknown size.
855 # This blocks until the batch has finished executing.
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
578 xla_context.Exit()
579 else:
--> 580 result = self._call(*args, **kwds)
581
582 if tracing_count == self._get_tracing_count():
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
625 # This is the first call of __call__, so we have to initialize.
626 initializers = []
--> 627 self._initialize(args, kwds, add_initializers_to=initializers)
628 finally:
629 # At this point we know that the initialization is complete (or less
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
504 self._concrete_stateful_fn = (
505 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 506 *args, **kwds))
507
508 def invalid_creator_scope(*unused_args, **unused_kwds):
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2444 args, kwargs = None, None
2445 with self._lock:
-> 2446 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2447 return graph_function
2448
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2775
2776 self._function_cache.missed.add(call_context_key)
-> 2777 graph_function = self._create_graph_function(args, kwargs)
2778 self._function_cache.primary[cache_key] = graph_function
2779 return graph_function, args, kwargs
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2665 arg_names=arg_names,
2666 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2667 capture_by_value=self._capture_by_value),
2668 self._function_attributes,
2669 # Tell the ConcreteFunction to clean up its graph once it goes out of
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
979 _, original_func = tf_decorator.unwrap(python_func)
980
--> 981 func_outputs = python_func(*func_args, **func_kwargs)
982
983 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
439 # __wrapped__ allows AutoGraph to swap in a converted function. We give
440 # the function a weak reference to itself to avoid a reference cycle.
--> 441 return weak_wrapped_fn().__wrapped__(*args, **kwds)
442 weak_wrapped_fn = weakref.ref(wrapped_fn)
443
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise
OperatorNotAllowedInGraphError: in user code:
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:533 train_step **
y, y_pred, sample_weight, regularization_losses=self.losses)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/compile_utils.py:205 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:143 __call__
losses = self.call(y_true, y_pred)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:246 call
return self.fn(y_true, y_pred, **self._fn_kwargs)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:178 compute_loss
if tf.math.reduce_any(labels == -1):
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:778 __bool__
self._disallow_bool_casting()
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:545 _disallow_bool_casting
"using a `tf.Tensor` as a Python `bool`")
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:532 _disallow_when_autograph_enabled
" decorating it directly with @tf.function.".format(task))
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would expect the model to train without errors, thank you for your kind help!
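As a sketch of the workaround suggested in the replies (training with `TFTrainer` instead of Keras `fit()`), where the output directory and hyperparameter values are illustrative assumptions rather than part of this report:
```python
from transformers import TFTrainer, TFTrainingArguments

training_args = TFTrainingArguments(
    output_dir="./ner_output",         # illustrative path
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = TFTrainer(
    model=model,                       # the TFBertForTokenClassification loaded above
    args=training_args,
    train_dataset=train_dataset,       # tf.data.Dataset built as in the tutorial
    eval_dataset=val_dataset,
)
trainer.train()
```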
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7850/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7850/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7849/comments | https://api.github.com/repos/huggingface/transformers/issues/7849/events | https://github.com/huggingface/transformers/issues/7849 | 723,104,327 | MDU6SXNzdWU3MjMxMDQzMjc= | 7,849 | how to save and load fine-tuned model? | {
"login": "wmathor",
"id": 32392878,
"node_id": "MDQ6VXNlcjMyMzkyODc4",
"avatar_url": "https://avatars.githubusercontent.com/u/32392878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wmathor",
"html_url": "https://github.com/wmathor",
"followers_url": "https://api.github.com/users/wmathor/followers",
"following_url": "https://api.github.com/users/wmathor/following{/other_user}",
"gists_url": "https://api.github.com/users/wmathor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wmathor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wmathor/subscriptions",
"organizations_url": "https://api.github.com/users/wmathor/orgs",
"repos_url": "https://api.github.com/users/wmathor/repos",
"events_url": "https://api.github.com/users/wmathor/events{/privacy}",
"received_events_url": "https://api.github.com/users/wmathor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"To save your model, first create a directory in which everything will be saved. In Python, you can do this as follows:\r\n```\r\nimport os\r\nos.makedirs(\"path/to/awesome-name-you-picked\")\r\n```\r\nNext, you can use the `model.save_pretrained(\"path/to/awesome-name-you-picked\")` method. This will save the model, with its weights and configuration, to the directory you specify. Next, you can load it back using `model = .from_pretrained(\"path/to/awesome-name-you-picked\")`.\r\n\r\nSource: https://huggingface.co/transformers/model_sharing.html",
"> To save your model, first create a directory in which everything will be saved. In Python, you can do this as follows:\r\n> \r\n> ```\r\n> import os\r\n> os.makedirs(\"path/to/awesome-name-you-picked\")\r\n> ```\r\n> \r\n> Next, you can use the `model.save_pretrained(\"path/to/awesome-name-you-picked\")` method. This will save the model, with its weights and configuration, to the directory you specify. Next, you can load it back using `model = .from_pretrained(\"path/to/awesome-name-you-picked\")`.\r\n> \r\n> Source: https://huggingface.co/transformers/model_sharing.html\r\n\r\nShould I save the model parameters separately, save the BERT first and then save my own nn.linear. Is this the only way to do the above? Is there an easy way? Thank you for your reply",
"I validate the model as I train it, and save the model with the highest scores on the validation set using `torch.save(model.state_dict(), output_model_file)`. As shown in the figure below\r\n\r\n\r\nThen I trained again and loaded the previously saved model instead of training from scratch, but it didn't work well, which made me feel like it wasn't saved or loaded successfully ?\r\n",
"Hi, I'm also confused about this. Have you solved this probelm? If yes, could you please show me your code of saving and loading model in detail. THX ! :) ",
"> Hi, I'm also confused about this. Have you solved this probelm? If yes, could you please show me your code of saving and loading model in detail. THX ! :)\r\n\r\nare you chinese? if you are, i could reply you by chinese",
"> > Hi, I'm also confused about this. Have you solved this probelm? If yes, could you please show me your code of saving and loading model in detail. THX ! :)\r\n> \r\n> are you chinese? if you are, i could reply you by chinese\r\n\r\n哈哈哈,想请问一下你,该怎么保存模型。",
"我问了一位台湾友人,他跟我说,huggingface的预训练模型也是torch写的,所以直接使用torch的方式正常加载和保存模型就行了\r\n```python\r\nmodel = MyModel(num_classes).to(device)\r\noptimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)\r\noutput_model = './models/model_xlnet_mid.pth'\r\n\r\n# save\r\ndef save(model, optimizer):\r\n # save\r\n torch.save({\r\n 'model_state_dict': model.state_dict(),\r\n 'optimizer_state_dict': optimizer.state_dict()\r\n }, output_model)\r\n\r\nsave(model, optimizer)\r\n\r\n# load\r\ncheckpoint = torch.load(output_model, map_location='cpu')\r\nmodel.load_state_dict(checkpoint['model_state_dict'])\r\noptimizer.load_state_dict(checkpoint['optimizer_state_dict'])\r\n```",
"> 我问了一位台湾友人,他跟我说,huggingface的预训练模型也是torch写的,所以直接使用torch的方式正常加载和保存模型就行了\r\n> \r\n> ```python\r\n> model = MyModel(num_classes).to(device)\r\n> optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)\r\n> output_model = './models/model_xlnet_mid.pth'\r\n> \r\n> # save\r\n> def save(model, optimizer):\r\n> # save\r\n> torch.save({\r\n> 'model_state_dict': model.state_dict(),\r\n> 'optimizer_state_dict': optimizer.state_dict()\r\n> }, output_model)\r\n> \r\n> save(model, optimizer)\r\n> \r\n> # load\r\n> checkpoint = torch.load(output_model, map_location='cpu')\r\n> model.load_state_dict(checkpoint['model_state_dict'])\r\n> optimizer.load_state_dict(checkpoint['optimizer_state_dict'])\r\n> ```\r\n\r\n哦哦,好的,谢谢了!",
"> 我问了一位台湾友人,他跟我说,huggingface的预训练模型也是torch写的,所以直接使用torch的方式正常加载和保存模型就行了\r\n> \r\n> ```python\r\n> model = MyModel(num_classes).to(device)\r\n> optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)\r\n> output_model = './models/model_xlnet_mid.pth'\r\n> \r\n> # save\r\n> def save(model, optimizer):\r\n> # save\r\n> torch.save({\r\n> 'model_state_dict': model.state_dict(),\r\n> 'optimizer_state_dict': optimizer.state_dict()\r\n> }, output_model)\r\n> \r\n> save(model, optimizer)\r\n> \r\n> # load\r\n> checkpoint = torch.load(output_model, map_location='cpu')\r\n> model.load_state_dict(checkpoint['model_state_dict'])\r\n> optimizer.load_state_dict(checkpoint['optimizer_state_dict'])\r\n> ```\r\n\r\n马克一下",
"Hi all,\r\n\r\nI have saved a keras fine tuned model on my machine, but I would like to use it in an app to deploy. \r\n\r\nI loaded the model on github, I wondered if I could load it from the directory it is in github? \r\n\r\nThat does not seem to be possible, does anyone know where I could save this model for anyone to use it? \r\nHuggingface provides a hub which is very useful to do that but this is not a huggingface model.\r\n\r\nLet me know if you can help please :) ",
"I know the `huggingface_hub` library provides a utility class called `ModelHubMixin` to save and load any PyTorch model from the hub (see original [tweet](https://twitter.com/julien_c/status/1372613568244371461?s=19)). I wonder whether something similar exists for Keras models?\r\n\r\ncc @julien-c ",
"That would be ideal. But I wonder; if there are no public hubs I can host this keras model on, does this mean that no trained keras models can be publicly deployed on an app?",
"^Tagging @osanseviero and @nateraw on this!",
"Having an easy way to save and load Keras models is in our short-term roadmap and we expect to have updates soon!\r\n\r\n> if there are no public hubs I can host this keras model on, does this mean that no trained keras models can be publicly deployed on an app?\r\n\r\nI'm not sure I fully understand your question. Using Hugging Face Inference API, you can make inference with Keras models and easily share the models with the rest of the community. Note that you can also share the model using the Hub and use other hosting alternatives or even run your model on-device.",
"Thanks @osanseviero for your reply! \r\nWhat i'm wondering is whether i can have my keras model loaded on the huggingface hub (or another) like I have for my BertForSequenceClassification fine tuned model (see the screeshot)?\r\n\r\nThis allows to deploy the model publicly since anyone can load it from any machine. I would like to do the same with my Keras model. Does that make sense? If yes, do you know how? That would be awesome since my model performs greatly! it's for a summariser:) \r\n\r\n\r\n",
"哈喽,可以保存整个模型而不是参数模型吗。",
"> ```python\r\n> model = MyModel(num_classes).to(device)\r\n> optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)\r\n> output_model = './models/model_xlnet_mid.pth'\r\n> \r\n> # save\r\n> def save(model, optimizer):\r\n> # save\r\n> torch.save({\r\n> 'model_state_dict': model.state_dict(),\r\n> 'optimizer_state_dict': optimizer.state_dict()\r\n> }, output_model)\r\n> \r\n> save(model, optimizer)\r\n> \r\n> # load\r\n> checkpoint = torch.load(output_model, map_location='cpu')\r\n> model.load_state_dict(checkpoint['model_state_dict'])\r\n> optimizer.load_state_dict(checkpoint['optimizer_state_dict'])\r\n> ```\r\n\r\nhey, what is output_model parameter?? what should be it's value??"
] | 1,602 | 1,689 | 1,603 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
```python
import torch.nn as nn
from transformers import BertModel

# `device` and `num_classes` are assumed to be defined elsewhere in the script
class MyModel(nn.Module):
def __init__(self, num_classes):
super(MyModel, self).__init__()
self.bert = BertModel.from_pretrained('hfl/chinese-roberta-wwm-ext', return_dict=True).to(device)
self.fc = nn.Linear(768, num_classes, bias=False)
def forward(self, x_input_ids, x_type_ids, attn_mask):
outputs = self.bert(x_input_ids, token_type_ids=x_type_ids, attention_mask=attn_mask)
pred = self.fc(outputs.pooler_output)
return pred
model = MyModel(num_classes).to(device)
# save
# load
```
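A minimal sketch of one way to fill in the save/load placeholders above, using plain PyTorch checkpointing as also suggested in the replies (the checkpoint path, optimizer choice and learning rate are illustrative assumptions; `model`, `MyModel`, `num_classes` and `device` come from the snippet above):
```python
import torch

output_path = './models/my_model.pth'  # illustrative path

# save: store the whole wrapper (BERT + linear head) together with the optimizer state
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
torch.save({
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, output_path)

# load: rebuild the same architecture first, then restore the saved weights
model = MyModel(num_classes).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
checkpoint = torch.load(output_path, map_location='cpu')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
```
Alternatively, the inner `self.bert` can be saved with `save_pretrained()` and the linear head kept in its own `state_dict`, as mentioned in the first reply.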
I have defined my model via huggingface, but I don't know how to save and load it; hopefully someone can help me out, thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7849/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/7849/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7848/comments | https://api.github.com/repos/huggingface/transformers/issues/7848/events | https://github.com/huggingface/transformers/issues/7848 | 723,027,901 | MDU6SXNzdWU3MjMwMjc5MDE= | 7,848 | RuntimeError with DistributedDataParallel | {
"login": "ghpu",
"id": 2990010,
"node_id": "MDQ6VXNlcjI5OTAwMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2990010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghpu",
"html_url": "https://github.com/ghpu",
"followers_url": "https://api.github.com/users/ghpu/followers",
"following_url": "https://api.github.com/users/ghpu/following{/other_user}",
"gists_url": "https://api.github.com/users/ghpu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghpu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghpu/subscriptions",
"organizations_url": "https://api.github.com/users/ghpu/orgs",
"repos_url": "https://api.github.com/users/ghpu/repos",
"events_url": "https://api.github.com/users/ghpu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghpu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `BertModel` runs absolutely fine in `DistributedDataParallel` (just retried) so the issue probably comes from the rest of your code and not the library. I can't see the code for the `LosslessTripletLoss`, maybe the issues comes from there.\r\n\r\nIn any case, to debug I would follow the instruction given in the error message:\r\n```\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 200]] is at version 9; expected version 7 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).\r\n```",
"Well, thanks for your tips. \r\n\r\nBut when using set_detect_anomaly, my problem seems to be located in EmbeddingBackward :\r\n\r\n```\r\n[W python_anomaly_mode.cpp:60] Warning: Error detected in EmbeddingBackward. Traceback of forward call that caused the error:\r\n File \"<string>\", line 1, in <module>\r\n File \"/lib/python3.7/multiprocessing/spawn.py\", line 105, in spawn_main\r\n exitcode = _main(fd)\r\n File \"/lib/python3.7/multiprocessing/spawn.py\", line 118, in _main\r\n return self._bootstrap()\r\n File \"/lib/python3.7/multiprocessing/process.py\", line 297, in _bootstrap\r\n self.run()\r\n File \"/lib/python3.7/multiprocessing/process.py\", line 99, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 20, in _wrap\r\n fn(i, *args)\r\n File \"my_program.py\", line 525, in demo_basic\r\n f4n.train()\r\n File \"my_program.py\", line 213, in train\r\n self.epoch(epoch)\r\n File \"my_program.py\", line 249, in epoch\r\n pos_vec = self.tokens2vec(pos, attention_mask=pmask)\r\n File \"my_program.py\", line 444, in tokens2vec\r\n hidden_states = self.model(tokens_tensor, attention_mask=attention_mask)[0]\r\n File \"/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/lib/python3.7/site-packages/torch/nn/parallel/distributed.py\", line 511, in forward\r\n output = self.module(*inputs[0], **kwargs[0])\r\n File \"/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 831, in forward\r\n input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds\r\n File \"/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 198, in forward\r\n position_embeddings = self.position_embeddings(position_ids)\r\n File \"/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/lib/python3.7/site-packages/torch/nn/modules/sparse.py\", line 126, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/lib/python3.7/site-packages/torch/nn/functional.py\", line 1814, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n```\r\n\r\n\r\n\r\nThe problem occurs when calling BertModel the second time in the epoch, when calculating \"pos\".",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, have you solved this problem? I also met this issue.",
"No, sorry, I finally gave up.",
"Same Error. Any one has solved the problem?",
"I've solved this problem. Mine was because when I build the input sequences as follows:\r\n`decoder_inputs = labels.narrow(1, 0, seqlen - 1).clone()`\r\n`decoder_masks = label_masks.narrow(1, 0, seqlen - 1).clone()`\r\n`decoder_labels = labels.narrow(1, 1, seqlen-1).clone()`\r\n`decoder_label_masks = label_masks.narrow(1, 1, seqlen - 1).clone()`\r\n`decoder_labels[~decoder_label_masks.bool()] = -100`\r\n\r\nI forgot to add `.clone()` at the beginning. So when I change some of the `decoder_labels` to be -100, some of the elements in `decoder_inputs` were also changed, which caused the error.\r\n\r\nHope this information could help you a bit!",
"> I've solved this problem. Mine was because when I build the input sequences as follows:\r\n> `decoder_inputs = labels.narrow(1, 0, seqlen - 1).clone()`\r\n> `decoder_masks = label_masks.narrow(1, 0, seqlen - 1).clone()`\r\n> `decoder_labels = labels.narrow(1, 1, seqlen-1).clone()`\r\n> `decoder_label_masks = label_masks.narrow(1, 1, seqlen - 1).clone()`\r\n> `decoder_labels[~decoder_label_masks.bool()] = -100`\r\n> \r\n> I forgot to add `.clone()` at the beginning. So when I change some of the `decoder_labels` to be -100, some of the elements in `decoder_inputs` were also changed, which caused the error.\r\n> \r\n> Hope this information could help you a bit!\r\n\r\nThanks a lot!",
"I met exact same issue with triplet loss using transformer 4.5.1 in NCCL distributed training setting.\r\nI found this problem may occur to using multiple forward Bert output and compute loss at once.\r\nI solved this by concat inputs ahead and chunk it to calculate loss:\r\n```python\r\ninput_ids = torch.cat([anchor_input_ids, pos_input_ids, neg_input_ids], dim=0)\r\nattention_mask = torch.cat([anchor_attention_mask, pos_attention_mask, neg_attention_mask], dim=0)\r\ntoken_type_ids = torch.cat([anchor_token_type_ids, pos_token_type_ids, neg_token_type_ids], dim=0)\r\n\r\nemb = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)\r\nanchor_emb, pos_emb, neg_emb = emb.pooled_output.chunk(3, dim=0)\r\nloss = criterion(anchor_emb, anchor_emb, anchor_emb)\r\n```\r\n",
"My problem was solved after adding `broadcast_buffers=False` to `torch.nn.parallel.DistributedDataParallel`\r\nFollowing https://github.com/ashkamath/mdetr/issues/16#issuecomment-878388469"
] | 1,602 | 1,686 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu 18.04
- Python version: 3.7.7.final.0
- PyTorch version (GPU?): 1.6.0 gpu
- Tensorflow version (GPU?): X
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed set-up
- Conda: 4.8.3
- CUDA: 11.0
### Who can help
@LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): bert mulitilingual cased
The problem arises when using:
* [x] my own modified scripts: Triplet Loss applied to sentence similarity, to practice writing efficient distributed learning code.
The tasks I am working on is:
* [x] my own task or dataset: Triplet Loss applied to sentence similarity, to practice writing efficient distributed learning code.
## To reproduce
I have not been able to generate a minimal version yet.
Stack trace
```
Traceback (most recent call last):
[...]
File "my_program.py", line 246, in epoch
loss.backward()
File "conda/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 200]] is at version 9; expected version 7 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
My model:
```python
def __init__(self, trainfile, devfile, outdir, rank=0):
# default config
self.max_sent_len = 200
self.batchsize = 10
self.lr = 2e-5
self.accumulated_batches = 6
self.max_epochs = 10
self.seed = 42
self.embs = "bert-base-multilingual-cased"
self.tokenizer, self.model = self.transformers(self.embs)
self.tripletloss = lossless_triplet_loss.LosslessTripletLoss()
print("use %s gpus" % torch.cuda.device_count())
self.model = self.model.to(rank)
self.model = torch.nn.parallel.DistributedDataParallel(self.model, device_ids=[rank], output_device=rank, find_unused_parameters=True)
self.rank = rank
half = False
self.model = self.model.to(self.rank)
self.optimizer = optim.Adam(filter(lambda param: param.requires_grad, self.model.parameters()), lr=self.lr)
self.traindataset = data.TripleDataset2(trainfile, self.tokenizer, self.max_sent_len)
self.devdataset = data.TripleDataset2(devfile, self.tokenizer, self.max_sent_len)
self.traindataloader = DataLoader(self.traindataset,
collate_fn=self.collate_fn,
batch_size = self.batchsize,
num_workers = 0)
self.devdataloader = DataLoader(self.devdataset,
collate_fn=self.collate_fn,
batch_size = self.batchsize,
num_workers = 0)
def train(self, epoch):
self.model.train()
iterations = int(math.ceil(len(self.traindataset)/self.batchsize))
iterator = tqdm.tqdm(enumerate(self.traindataloader),
ncols=100,
ascii=True,
desc="epoch %d" % (epoch),
mininterval=1.0,
total = iterations)
lastloss = None
for batch_idx, anchor_pos_neg in iterator:
(anchor, amask), (pos, pmask), (neg, nmask) = anchor_pos_neg
anchor = anchor.to(self.rank)
pos = pos.to(self.rank)
neg = neg.to(self.rank)
amask = amask.to(self.rank)
pmask = pmask.to(self.rank)
nmask = nmask.to(self.rank)
anchor_vec = self.tokens2vec(anchor, attention_mask=amask)
pos_vec = self.tokens2vec(pos, attention_mask=pmask)
neg_vec = self.tokens2vec(neg, attention_mask=nmask)
loss, distp, distn = self.calc_triplet_loss(anchor_vec, pos_vec, neg_vec)
loss.backward()
if (batch_idx+1) % self.accumulated_batches == 0:
self.optimizer.step()
self.optimizer.zero_grad()
def tokens2vec(self, tokens_tensor, attention_mask=None):
hidden_states = self.model(tokens_tensor, attention_mask=attention_mask)[0]
token_vecs = torch.squeeze(hidden_states, dim=0)
a = torch.sum(torch.mul(token_vecs, torch.unsqueeze(attention_mask, 2)), dim=1)
m = torch.unsqueeze(torch.sum(attention_mask, dim=1), 1)
sentence_embedding = torch.sigmoid(torch.div(a, m))
return sentence_embedding
def transformers(self, name):
if name.startswith("bert"):
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained(name, do_lower_case=False)
model = BertModel.from_pretrained(name)
else:
tokenizer = None
model = None
print("ERROR: invalid embeddings name <%s>" % name, file=sys.stderr)
return tokenizer, model
```
The error happens with batch_idx = 0.
200 is my maximum sequence length. I only use LongTensor inputs to BertModel (for anchor, pos and neg).
What I find strange is that I am sending samples in batches of size 10, so I am wondering whether my problem is really with the embedding layer, or rather caused by my custom collate_fn:
```python
def collate_fn(self, batch):
anchor = []
pos = []
neg = []
anchor_mask = []
pos_mask = []
neg_mask = []
for a,p,n in batch:
anchor.append(a[0])
pos.append(p[0])
neg.append(n[0])
anchor_mask.append(a[1])
pos_mask.append(p[1])
neg_mask.append(n[1])
return (torch.stack(anchor, dim=0),torch.stack(anchor_mask, dim=0)), \
(torch.stack(pos, dim=0),torch.stack(pos_mask, dim=0)), \
(torch.stack(neg, dim=0),torch.stack(neg_mask, dim=0))
```
My model is working well with only one GPU, or when using DataParallel. My issue arises only with DistributedDataParallel.
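One workaround reported in the replies for this family of in-place/buffer errors is to disable buffer broadcasting when wrapping the model; a hedged sketch of that change (it is not guaranteed to apply to every case):
```python
self.model = torch.nn.parallel.DistributedDataParallel(
    self.model,
    device_ids=[rank],
    output_device=rank,
    find_unused_parameters=True,
    broadcast_buffers=False,  # workaround mentioned in the replies for similar in-place errors
)
```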
## Expected behavior
A running training. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7848/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7847/comments | https://api.github.com/repos/huggingface/transformers/issues/7847/events | https://github.com/huggingface/transformers/issues/7847 | 722,993,861 | MDU6SXNzdWU3MjI5OTM4NjE= | 7,847 | Where do the models go in colab? | {
"login": "ShivanshuPurohit",
"id": 42869065,
"node_id": "MDQ6VXNlcjQyODY5MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42869065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivanshuPurohit",
"html_url": "https://github.com/ShivanshuPurohit",
"followers_url": "https://api.github.com/users/ShivanshuPurohit/followers",
"following_url": "https://api.github.com/users/ShivanshuPurohit/following{/other_user}",
"gists_url": "https://api.github.com/users/ShivanshuPurohit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShivanshuPurohit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShivanshuPurohit/subscriptions",
"organizations_url": "https://api.github.com/users/ShivanshuPurohit/orgs",
"repos_url": "https://api.github.com/users/ShivanshuPurohit/repos",
"events_url": "https://api.github.com/users/ShivanshuPurohit/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShivanshuPurohit/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | ## Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
documentation: @sgugger
-->
## Information
Model I am using (Bert):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [.] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [.] my own task or dataset: (give details below)
Semeval 2017 task 10 keyword boundary detection
## To reproduce
Steps to reproduce the behavior:
1. Run `train.py` from [this repo](https://github.com/pranav-ust/BERT-keyphrase-extraction)
2. Load the model as `model = BertForTokenClassification.from_pretrained('bert-base-uncased', num_labels=len(params.tag2idx))`
3. Run `evaluate.py` as `python evaluate.py --data_dir data/task1/ --bert_model_dir model/ --model_dir experiments/base_model --restore_file best`
This gives the following error:
```
Traceback (most recent call last):
File "evaluate.py", line 117, in <module>
config = BertConfig.from_json_file(config_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 436, in from_json_file
config_dict = cls._dict_from_json_file(json_file)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 441, in _dict_from_json_file
with open(json_file, "r", encoding="utf-8") as reader:
FileNotFoundError: [Errno 2] No such file or directory: 'model/bert_config.json'
2020-10-16 07:41:50.582535: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Loading the dataset...
done.
Traceback (most recent call last):
File "evaluate.py", line 117, in <module>
config = BertConfig.from_json_file(config_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 436, in from_json_file
config_dict = cls._dict_from_json_file(json_file)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 441, in _dict_from_json_file
with open(json_file, "r", encoding="utf-8") as reader:
FileNotFoundError: [Errno 2] No such file or directory: 'model/bert_config.json'`
Which I believe is caused by me passing `model/` as the `--bert_model_dir` argument.
Where do the downloaded models go on colab? Or more importantly, can I pass a path argument to download them in a specific folder?
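For illustration, what I am hoping for is something along these lines — a minimal sketch where the `cache_dir` argument of `from_pretrained` (if it works the way I assume) points the download at a folder I choose:

```python
from transformers import BertForTokenClassification

# Sketch: download/cache the pretrained weights into a folder I pick myself,
# so the files end up somewhere predictable on the Colab filesystem.
model = BertForTokenClassification.from_pretrained(
    "bert-base-uncased",
    cache_dir="model/",  # assumed way to choose the download folder
)
```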
### Edit
If I download a model as `!wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz` and extract it into the `model/` folder, then `python evaluate.py` gives the following error:
```
Loading the dataset...
- done.
Traceback (most recent call last):
  File "evaluate.py", line 118, in <module>
    model = BertForTokenClassification(config, num_labels=len(params.tag2idx))
TypeError: __init__() got an unexpected keyword argument 'num_labels'
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7847/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7846/comments | https://api.github.com/repos/huggingface/transformers/issues/7846/events | https://github.com/huggingface/transformers/issues/7846 | 722,963,649 | MDU6SXNzdWU3MjI5NjM2NDk= | 7,846 | BartTokenizer prepare_seq2seq_batch() does not return decoder_input_ids, decoder_attention_mask as per document after passing tgt_texts | {
"login": "MojammelHossain",
"id": 47693507,
"node_id": "MDQ6VXNlcjQ3NjkzNTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/47693507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MojammelHossain",
"html_url": "https://github.com/MojammelHossain",
"followers_url": "https://api.github.com/users/MojammelHossain/followers",
"following_url": "https://api.github.com/users/MojammelHossain/following{/other_user}",
"gists_url": "https://api.github.com/users/MojammelHossain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MojammelHossain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MojammelHossain/subscriptions",
"organizations_url": "https://api.github.com/users/MojammelHossain/orgs",
"repos_url": "https://api.github.com/users/MojammelHossain/repos",
"events_url": "https://api.github.com/users/MojammelHossain/events{/privacy}",
"received_events_url": "https://api.github.com/users/MojammelHossain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am facing the same issue and I noticed that the method indeed returns the `[\"input_ids\"]` of `tgt_texts` as labels. I think I could easily fix this to return both `input_ids` and `attention_mask` of `tgt_texts` (as `decoder_...`) but I noticed the same pattern in other seq2seq models, like T5. I am not sure what's the proper solution but if it is similar to what I suggest, than I'd be happy to make a pull request.\r\n\r\n@LysandreJik I'd be happy to hear an opinion and start working on this.",
"I think https://github.com/huggingface/transformers/pull/6654/ and https://github.com/huggingface/transformers/issues/6624 are related - the PR changed `decoder_input_ids` to `labels`. Probably the documentation should be changed but I have to get more familiar with the respective issue and PR to be sure.",
"Thanks for the feedback @freespirit. Hopefully, they will update the documentation as it is a little bit confusing. But what I found that the modeling_bart.py file already handles the problem. _prepare_bart_decoder_inputs() and shift_tokens_right() solving that if I am not wrong. But I think I have to go deeper for understanding which I am trying to.\r\n\r\n\r\n",
"Pinging @sshleifer for advice",
"@MojammelHossain is correct, the docs are wrong. \r\nThe correct usage is to allow `_prepare_bart_decoder_inputs` to make `decoder_input_ids` and `decoder_attention_mask` for you. For training, you only need to pass the 3 keys returned by `prepare_seq2seq_batch`.\r\n\r\n"
] | 1,602 | 1,606 | 1,606 | NONE | null | I am trying to train a seq2seq model using BartModel. As per the BartTokenizer documentation, if I pass tgt_texts then it should return decoder_attention_mask and decoder_input_ids. Please check the attachment for clarity.

But I am only getting input_ids, attention_mask and labels.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7846/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7845/comments | https://api.github.com/repos/huggingface/transformers/issues/7845/events | https://github.com/huggingface/transformers/pull/7845 | 722,914,249 | MDExOlB1bGxSZXF1ZXN0NTA0NTk0MTA1 | 7,845 | [seq2seq testing] improve readability | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | improve readability
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7845/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7845",
"html_url": "https://github.com/huggingface/transformers/pull/7845",
"diff_url": "https://github.com/huggingface/transformers/pull/7845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7845.patch",
"merged_at": 1602853529000
} |
https://api.github.com/repos/huggingface/transformers/issues/7844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7844/comments | https://api.github.com/repos/huggingface/transformers/issues/7844/events | https://github.com/huggingface/transformers/issues/7844 | 722,905,618 | MDU6SXNzdWU3MjI5MDU2MTg= | 7,844 | [seq2seq distributed] child process stuck on error | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"OK, this is the problem with PL if anything, as the command:\r\n```\r\nPYTHONPATH=\"src\" /home/stas/anaconda3/envs/main-38/bin/python /mnt/nvme1/code/huggingface/transformers-distr-train-test/examples/seq2seq/distillation.py --supervise_forward --normalize_hidden --label_smoothing=0.0 --eval_beams=1 --val_metric=loss --save_top_k=1 --adafactor --early_stopping_patience=-1 --logger_name=default --length_penalty=0.5 --cache_dir= --task=summarization --num_workers=2 --alpha_hid=0 --freeze_embeds --sortish_sampler --student_decoder_layers=1 --val_check_interval=0.5 --output_dir=/tmp/tmpqajqhzwo --no_teacher --fp16_opt_level=O1 --gpus=2 --max_grad_norm=1.0 --do_train --do_predict --accumulate_grad_batches=1 --seed=42 --model_name_or_path=sshleifer/tinier_bart --config_name= --tokenizer_name=facebook/bart-large --learning_rate=0.3 --lr_scheduler=linear --weight_decay=0.0 --adam_epsilon=1e-08 --warmup_steps=0 --max_epochs=2 --train_batch_size=1 --eval_batch_size=2 --max_source_length=12 --max_target_length=12 --val_max_target_length=12 --test_max_target_length=12 --n_train=-1 --n_val=-1 --n_test=-1 --student_encoder_layers=1 --freeze_encoder --data_dir=/tmp/tmpo_9t6k7e --alpha_mlm=0.2 --alpha_ce=0.8 --teacher=sshleifer/bart-tiny-random\r\n```\r\nhangs on its own, so this is definitely not an issue of the newly added distributed support for `pytest`. \r\n\r\nI think that when one of the workers fails (e.g. asserts on output directory already exists) and the coordinating process gets stuck waiting for the worker to send something back."
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null |
Filing an issue for myself to help move along https://github.com/huggingface/transformers/pull/7281
```
pytest -sv examples/seq2seq/test_seq2seq_examples_multi_gpu.py
```
gets stuck if one of the child subprocesses throws an error.
Remove `skip_output_dir_check=True,` to reproduce the problem.
It could be related to realtime async read/write pipes getting into a deadlock - see the XXX warning in that test, need to first try whether the problem goes away if I switch to non-real time pipes.
I will get back to it later.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7844/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7843/comments | https://api.github.com/repos/huggingface/transformers/issues/7843/events | https://github.com/huggingface/transformers/pull/7843 | 722,880,286 | MDExOlB1bGxSZXF1ZXN0NTA0NTY0MjE1 | 7,843 | [seq2seq] get_git_info fails gracefully | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for returning the same keys!"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | It looks like gitpython is broken when one tries to use it from `git checkout hash`, see: https://github.com/gitpython-developers/GitPython/issues/633
While debugging/bisecting we need to be able to step through different commits and for the code to still work.
Currently if I do:
```
git checkout fb94b8f1e16eb
```
and run an examples test, like `examples/seq2seq/test_seq2seq_examples_multi_gpu.py` (soon to be merged) it fails with:
```
ERR: File "./examples/seq2seq/distillation.py", line 504, in <module>
ERR: distill_main(args)
ERR: File "./examples/seq2seq/distillation.py", line 494, in distill_main
ERR: model = create_module(args)
ERR: File "./examples/seq2seq/distillation.py", line 411, in create_module
ERR: model = module_cls(args)
ERR: File "/mnt/nvme1/code/huggingface/transformers-multigpu/examples/seq2seq/finetune.py", line 58, in __init__
ERR: save_git_info(self.hparams.output_dir)
ERR: File "/mnt/nvme1/code/huggingface/transformers-multigpu/examples/seq2seq/utils.py", line 355, in save_git_info
ERR: repo_infos = get_git_info()
ERR: File "/mnt/nvme1/code/huggingface/transformers-multigpu/examples/seq2seq/utils.py", line 374, in get_git_info
ERR: "repo_branch": str(repo.active_branch),
ERR: File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/git/repo/base.py", line 705, in active_branch
ERR: return self.head.reference
ERR: File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/git/refs/symbolic.py", line 272, in _get_reference
ERR: raise TypeError("%s is a detached symbolic reference as it points to %r" % (self, sha))
ERR: TypeError: HEAD is a detached symbolic reference as it points to 'fb94b8f1e16eb21d166174f52e3e49e669ef0ac4'
```
The odd leading `ERR` string is just a replay of stderr from the sub-process - please ignore this nuance.
This PR provides a workaround.
@sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7843/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7843",
"html_url": "https://github.com/huggingface/transformers/pull/7843",
"diff_url": "https://github.com/huggingface/transformers/pull/7843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7843.patch",
"merged_at": 1602822164000
} |
https://api.github.com/repos/huggingface/transformers/issues/7842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7842/comments | https://api.github.com/repos/huggingface/transformers/issues/7842/events | https://github.com/huggingface/transformers/pull/7842 | 722,832,568 | MDExOlB1bGxSZXF1ZXN0NTA0NTE5NDkx | 7,842 | [testing] disable FutureWarning in examples tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | same as tests/conftest.py, we can't resolve those warning, so turn the noise off.
same as PR: https://github.com/huggingface/transformers/pull/7079 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7842/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7842",
"html_url": "https://github.com/huggingface/transformers/pull/7842",
"diff_url": "https://github.com/huggingface/transformers/pull/7842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7842.patch",
"merged_at": 1602833739000
} |
https://api.github.com/repos/huggingface/transformers/issues/7841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7841/comments | https://api.github.com/repos/huggingface/transformers/issues/7841/events | https://github.com/huggingface/transformers/pull/7841 | 722,740,894 | MDExOlB1bGxSZXF1ZXN0NTA0NDQzNTM4 | 7,841 | [DOC] Typo and fix the input of labels to `cross_entropy` | {
"login": "katarinaslama",
"id": 5973939,
"node_id": "MDQ6VXNlcjU5NzM5Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5973939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katarinaslama",
"html_url": "https://github.com/katarinaslama",
"followers_url": "https://api.github.com/users/katarinaslama/followers",
"following_url": "https://api.github.com/users/katarinaslama/following{/other_user}",
"gists_url": "https://api.github.com/users/katarinaslama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katarinaslama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katarinaslama/subscriptions",
"organizations_url": "https://api.github.com/users/katarinaslama/orgs",
"repos_url": "https://api.github.com/users/katarinaslama/repos",
"events_url": "https://api.github.com/users/katarinaslama/events{/privacy}",
"received_events_url": "https://api.github.com/users/katarinaslama/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
The current version caused some errors. The proposed changes fixed it for me. Hope this is helpful!
- [ x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7841/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7841",
"html_url": "https://github.com/huggingface/transformers/pull/7841",
"diff_url": "https://github.com/huggingface/transformers/pull/7841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7841.patch",
"merged_at": 1602804992000
} |
https://api.github.com/repos/huggingface/transformers/issues/7840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7840/comments | https://api.github.com/repos/huggingface/transformers/issues/7840/events | https://github.com/huggingface/transformers/issues/7840 | 722,740,705 | MDU6SXNzdWU3MjI3NDA3MDU= | 7,840 | Token Type IDs returned from the tokenizer for T5 don't work with special tokens | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The PR linked to this issue should fix it. However note that T5 does not use `token_type_ids`, so that we will simply return all 0."
] | 1,602 | 1,605 | 1,605 | CONTRIBUTOR | null | With `transformers-3.3.1`:
```
import transformers
t = transformers.AutoTokenizer.from_pretrained('t5-small')
t.encode_plus(["a"], ["b"], add_special_tokens=True, return_token_type_ids=True)
```
This results in
```
{'input_ids': [9, 1, 115, 1], 'token_type_ids': [0, 1], 'attention_mask': [1, 1, 1, 1]}
```
As you can see, the token type IDs don't align with the other outputs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7840/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7840/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7839/comments | https://api.github.com/repos/huggingface/transformers/issues/7839/events | https://github.com/huggingface/transformers/pull/7839 | 722,688,822 | MDExOlB1bGxSZXF1ZXN0NTA0Mzk4MzM3 | 7,839 | Small fixes to HP search | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | COLLABORATOR | null | # What does this PR do?
HP search has two small bugs in `Trainer` right now:
- it pops things form the metrics dict, which are then improperly logged
- it doesn't pop out the `total_flos`, so the hp search tries to maximize the flos (which I guess is @TevenLeScao master plan to conquer the universe ;-) ) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7839/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7839/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7839",
"html_url": "https://github.com/huggingface/transformers/pull/7839",
"diff_url": "https://github.com/huggingface/transformers/pull/7839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7839.patch",
"merged_at": 1602833025000
} |
https://api.github.com/repos/huggingface/transformers/issues/7838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7838/comments | https://api.github.com/repos/huggingface/transformers/issues/7838/events | https://github.com/huggingface/transformers/issues/7838 | 722,660,122 | MDU6SXNzdWU3MjI2NjAxMjI= | 7,838 | State of ONNX | {
"login": "ankane",
"id": 220358,
"node_id": "MDQ6VXNlcjIyMDM1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/220358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankane",
"html_url": "https://github.com/ankane",
"followers_url": "https://api.github.com/users/ankane/followers",
"following_url": "https://api.github.com/users/ankane/following{/other_user}",
"gists_url": "https://api.github.com/users/ankane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankane/subscriptions",
"organizations_url": "https://api.github.com/users/ankane/orgs",
"repos_url": "https://api.github.com/users/ankane/repos",
"events_url": "https://api.github.com/users/ankane/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankane/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey, I tried exporting T5 for summarization but get the below error:\r\n**You have to specify either decoder_input_ids or decoder_inputs_embeds**\r\n\r\nI get a similar error for translation pipeline as well. Any workarounds available for this?\r\n@patrickvonplaten @sshleifer ",
"Hey @amanpreet692 - you need to provide both `input_ids` and `decoder_input_ids` for `EncoderDecoderModels`. ",
"Hey @patrickvonplaten yep I get that, but the code implementation is such that we don't pass in the sample inputs for ONNX, the sample tokens are passed directly from within Pytorch onnx.export code I think which are consumed by the encoder and decoder inputs are empty.\r\nI used https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb from @mfuntowicz as a reference with the additional parameter of 'translation' as pipeline.\r\nPlease let me know if there is an immediate solution else I am gonna look into this next week :)\r\nThanks!",
"Hey @amanpreet692 - sorry this is not really my area of expertise here... @mfuntowicz - could you take a look? ",
"Hey @amanpreet692 are you able to resolve this error while exporting t5 **You have to specify either decoder_input_ids or decoder_inputs_embeds**?",
"Was this ever resolved @amanpreet692 @dharm033075 @mfuntowicz? I am having the same issue trying to export t5.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,602 | 1,619 | 1,619 | CONTRIBUTOR | null | Hi, love the work that's going on with ONNX. I wanted to share the current state of ONNX support in case others were wondering about it (sorry for another table).
Pipeline | Supported
--- | ---
feature-extraction | ✓
sentiment-analysis | ✓
ner | ✓
question-answering | ✓
fill-mask | ✓
text-generation | ✓
translation | Broken https://github.com/huggingface/transformers/issues/5948#issuecomment-701699251
summarization |
zero-shot-classification |
conversational |
text2text-generation |
I was able to export models for both summarization and zero-shot-classification, but they both error without a specific token length due to a reshape inside the ONNX model (code in https://github.com/huggingface/transformers/issues/7404#issuecomment-703966076). If you have any ideas for how to prevent this, I'm happy to try and put together a PR.
---
A note for Mac Catalina users: exporting models may error with:
```text
[libprotobuf ERROR google/protobuf/descriptor_database.cc:394] Invalid file descriptor data passed to EncodedDescriptorDatabase::Add().
[libprotobuf FATAL google/protobuf/descriptor.cc:1356] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
libc++abi.dylib: terminating with uncaught exception of type google::protobuf::FatalException: CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
```
Use an older version of `protobuf` to avoid this (https://github.com/onnx/onnx/issues/2940#issuecomment-669979419):
```sh
pip3 uninstall protobuf
pip3 install protobuf==3.11.3
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7838/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7837/comments | https://api.github.com/repos/huggingface/transformers/issues/7837/events | https://github.com/huggingface/transformers/pull/7837 | 722,634,080 | MDExOlB1bGxSZXF1ZXN0NTA0MzUyODcx | 7,837 | [testing] fix/hide warnings | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | in `test_trainer_callback.py`:
- fixes PytestCollectionWarning - pytest doesn't like non-unittest classes starting with `Test`
- hides scatter_gather warnings - as it was suggested they are just so
@sgugger
Fixes: https://github.com/huggingface/transformers/issues/7832
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7837/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7837",
"html_url": "https://github.com/huggingface/transformers/pull/7837",
"diff_url": "https://github.com/huggingface/transformers/pull/7837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7837.patch",
"merged_at": 1602832791000
} |
https://api.github.com/repos/huggingface/transformers/issues/7836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7836/comments | https://api.github.com/repos/huggingface/transformers/issues/7836/events | https://github.com/huggingface/transformers/pull/7836 | 722,617,739 | MDExOlB1bGxSZXF1ZXN0NTA0MzM4Mjk4 | 7,836 | model card for arabic-ner model | {
"login": "hatmimoha",
"id": 22476140,
"node_id": "MDQ6VXNlcjIyNDc2MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22476140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hatmimoha",
"html_url": "https://github.com/hatmimoha",
"followers_url": "https://api.github.com/users/hatmimoha/followers",
"following_url": "https://api.github.com/users/hatmimoha/following{/other_user}",
"gists_url": "https://api.github.com/users/hatmimoha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hatmimoha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hatmimoha/subscriptions",
"organizations_url": "https://api.github.com/users/hatmimoha/orgs",
"repos_url": "https://api.github.com/users/hatmimoha/repos",
"events_url": "https://api.github.com/users/hatmimoha/events{/privacy}",
"received_events_url": "https://api.github.com/users/hatmimoha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for sharing!"
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | README file for the Arabic NER model
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7836/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7836",
"html_url": "https://github.com/huggingface/transformers/pull/7836",
"diff_url": "https://github.com/huggingface/transformers/pull/7836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7836.patch",
"merged_at": 1603281761000
} |
https://api.github.com/repos/huggingface/transformers/issues/7835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7835/comments | https://api.github.com/repos/huggingface/transformers/issues/7835/events | https://github.com/huggingface/transformers/pull/7835 | 722,611,877 | MDExOlB1bGxSZXF1ZXN0NTA0MzMyOTU1 | 7,835 | [cleanup] assign todos, faster bart-cnn test | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | I went through each TODO(SS), and either did it, or made a self-assigned github issue with it.
I also deleted some commented out code blocks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7835/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7835",
"html_url": "https://github.com/huggingface/transformers/pull/7835",
"diff_url": "https://github.com/huggingface/transformers/pull/7835.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7835.patch",
"merged_at": 1602832279000
} |
https://api.github.com/repos/huggingface/transformers/issues/7834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7834/comments | https://api.github.com/repos/huggingface/transformers/issues/7834/events | https://github.com/huggingface/transformers/pull/7834 | 722,601,454 | MDExOlB1bGxSZXF1ZXN0NTA0MzIzNDE3 | 7,834 | [utils/check_copies.py] fix DeprecationWarning | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,602 | 1,603 | 1,602 | CONTRIBUTOR | null | in `tests/test_utils_check_copies.py` I was getting intermittently:
```
utils/check_copies.py:52
/mnt/nvme1/code/transformers-comet/utils/check_copies.py:52: DeprecationWarning: invalid escape sequence \s
while line_index < len(lines) and re.search(f"^{indent}(class|def)\s+{name}", lines[line_index]) is None:
```
So this should fix it. Not sure why it wasn't showing up all the time.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7834/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7834",
"html_url": "https://github.com/huggingface/transformers/pull/7834",
"diff_url": "https://github.com/huggingface/transformers/pull/7834.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7834.patch",
"merged_at": 1602793270000
} |
https://api.github.com/repos/huggingface/transformers/issues/7833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7833/comments | https://api.github.com/repos/huggingface/transformers/issues/7833/events | https://github.com/huggingface/transformers/issues/7833 | 722,600,366 | MDU6SXNzdWU3MjI2MDAzNjY= | 7,833 | [s2s trainer] tests fail on multi-gpu machine | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"@stas00 would you be interested in taking a look at this, possibly reusing the fix in https://github.com/huggingface/transformers/pull/7281 ?\r\nIf that doesn't work we can hack it like `tests/test_trainer.py`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/a1d1b332d07a40177ae1959609ab70dab34018b8/tests/test_trainer.py#L245\r\n",
"cc @patil-suraj ",
"Yes, I will work on it today, Sam.",
"the other temp fix option is to use `@require_non_multigpu`",
"This is not the test's issue, but the script's one - this fails with the same error. \r\n```\r\npython examples/seq2seq/finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --data_dir examples/seq2seq/test_data/wmt_en_ro --output_dir /tmp/test_outputsarhj9od --overwrite_output_dir --n_train 8 --n_val 8 --max_source_length 12 --max_target_length 12 --val_max_target_length 12 --do_train --do_eval --do_predict --num_train_epochs 1 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --learning_rate 3e-4 --warmup_steps 8 --evaluate_during_training --predict_with_generate --logging_steps 0 --save_steps 1 --eval_steps 1 --sortish_sampler --label_smoothing 0.1 --adafactor --task translation --tgt_lang ro_RO --src_lang en_XX\r\n```\r\nI just dumped the args the test was invoking.\r\n\r\n`AssertionError: Default process group is not initialized` means that the distributed setup is not done. \r\n\r\nI will look more into it tomorrow morning. \r\n\r\nOn the other hand - if we sort it out - perhaps we could do the same for distributed eval!? It'd be much much better to delegate to PL all that forking, etc.\r\n\r\n> If that doesn't work we can hack it like tests/test_trainer.py: line 245\r\n\r\nCan you please clarify how do you think it could help? that line of code you quoted does nothing - it's just used for testing and it'll result in `n_gpu=2` anyway. Perhaps you meant somewhere else in that file?",
"You need to launch with\r\n\r\n```bash\r\npython -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py\r\n```\r\n\r\ncaught me up as well.",
"In which case, yes, this would be 100% the same as https://github.com/huggingface/transformers/pull/7281 - let's finish it first, then refactor all that new code and use it here.\r\n\r\nuntil then you can use `@require_non_multigpu` so that it doesn't interfere.",
"I thought PL had a way of handling distributed internally w/o the user needing to call `-m torch.distributed.launch` - is it not working or I misread it?",
"These tests don't use PL."
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | #### Command
```bash
RUN_SLOW=1 USE_CUDA=1 pytest examples/seq2seq/test_finetune_trainer.py
```
#### Traceback
```python
=========================================================== test session starts ===========================================================
platform linux -- Python 3.7.4, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /home/shleifer/transformers_fork, inifile: pytest.ini
plugins: forked-1.1.3, hydra-core-1.0.0, xdist-1.31.0, requests-mock-1.8.0
collected 2 items
examples/seq2seq/test_finetune_trainer.py /home/shleifer/transformers_fork/src/transformers/training_args.py:339: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
F/home/shleifer/transformers_fork/src/transformers/training_args.py:339: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
F
================================================================ FAILURES =================================================================
__________________________________________________________ test_finetune_trainer __________________________________________________________
def test_finetune_trainer():
> output_dir = run_trainer(1, "12", MBART_TINY, 1)
examples/seq2seq/test_finetune_trainer.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/test_finetune_trainer.py:105: in run_trainer
main()
examples/seq2seq/finetune_trainer.py:294: in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
src/transformers/trainer.py:583: in train
train_dataloader = self.get_train_dataloader()
src/transformers/trainer.py:386: in get_train_dataloader
train_sampler = self._get_train_sampler()
examples/seq2seq/seq2seq_trainer.py:108: in _get_train_sampler
self.args.per_device_train_batch_size, distributed=self.args.n_gpu > 1
examples/seq2seq/utils.py:156: in make_sortish_sampler
return DistributedSortishSampler(self, batch_size, shuffle=shuffle, **kwargs)
examples/seq2seq/utils.py:368: in __init__
num_replicas = dist.get_world_size()
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:582: in get_world_size
return _get_group_size(group)
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:196: in _get_group_size
_check_default_pg()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _check_default_pg():
"""
Helper that checks if the default ProcessGroup has been initialized, with
assertion
"""
assert _default_pg is not None, \
> "Default process group is not initialized"
E AssertionError: Default process group is not initialized
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:187: AssertionError
_______________________________________________________ test_finetune_trainer_slow ________________________________________________________
@slow
def test_finetune_trainer_slow():
# TODO(SS): This will fail on devices with more than 1 GPU.
# There is a missing call to __init__process_group somewhere
> output_dir = run_trainer(eval_steps=2, max_len="128", model_name=MARIAN_MODEL, num_train_epochs=3)
examples/seq2seq/test_finetune_trainer.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/test_finetune_trainer.py:105: in run_trainer
main()
examples/seq2seq/finetune_trainer.py:294: in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
src/transformers/trainer.py:583: in train
train_dataloader = self.get_train_dataloader()
src/transformers/trainer.py:386: in get_train_dataloader
train_sampler = self._get_train_sampler()
examples/seq2seq/seq2seq_trainer.py:108: in _get_train_sampler
self.args.per_device_train_batch_size, distributed=self.args.n_gpu > 1
examples/seq2seq/utils.py:156: in make_sortish_sampler
return DistributedSortishSampler(self, batch_size, shuffle=shuffle, **kwargs)
examples/seq2seq/utils.py:368: in __init__
num_replicas = dist.get_world_size()
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:582: in get_world_size
return _get_group_size(group)
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:196: in _get_group_size
_check_default_pg()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _check_default_pg():
"""
Helper that checks if the default ProcessGroup has been initialized, with
assertion
"""
assert _default_pg is not None, \
> "Default process group is not initialized"
E AssertionError: Default process group is not initialized
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:187: AssertionError
========================================================= short test summary info =========================================================
FAILED examples/seq2seq/test_finetune_trainer.py::test_finetune_trainer - AssertionError: Default process group is not initialized
FAILED examples/seq2seq/test_finetune_trainer.py::test_finetune_trainer_slow - AssertionError: Default process group is not initialized
=========================================================== 2 failed in 11.51s ============================================================
```
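A likely cause, per the discussion above: the script was run without `python -m torch.distributed.launch`, so no default process group exists even though two GPUs are visible. A minimal sketch (hypothetical, not the script's actual code) of a guard that keys the sampler choice off the real distributed state instead of `n_gpu` alone:
```python
import torch.distributed as dist

# `n_gpu > 1` alone does not guarantee torch.distributed was initialized
# (e.g. when the script is launched without `torch.distributed.launch`),
# so only ask for a distributed sampler when a process group really exists.
use_distributed_sampler = dist.is_available() and dist.is_initialized()
```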
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7833/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7832/comments | https://api.github.com/repos/huggingface/transformers/issues/7832/events | https://github.com/huggingface/transformers/issues/7832 | 722,593,875 | MDU6SXNzdWU3MjI1OTM4NzU= | 7,832 | [testing] test_trainer_callback.py cannot collect test class 'TestTrainerCallback' | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's just pytest thinking that class is a test because it begins by Test. Will fix, but this is very low priority for me as it runs as intended since pytest does not collect it. No idea about the other warning, but again, not important since the goal of that test is not to test gathered tensors, just the event flow.",
"Yes, of course, there is no rush. I can take care of some of them.\r\n\r\nIn general - the problem with stray warnings is that if there is a problem one has to read through warnings to see if something related is related and if there is a lot of noise in there this wastes a huge amount of time, sifting grain from chaff. Warnings really should be treated as errors, IMHO, or be removed if they aren't important.",
"I'm guessing you don't have TensorFlow installed then :-)",
"I do:\r\n\r\n```\r\n- `transformers` version: 3.3.1\r\n- Platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyTorch version (GPU?): 1.8.0.dev20201014 (True)\r\n- Tensorflow version (GPU?): 2.3.1 (True)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"And you don't get a thousand warnings from it?",
"You must be alluding to tf generating a gazillion of useless logs and therefore what's another warning, correct?\r\n\r\nI'd say if we had resources to do so there should be no warnings emitted in our test suite. So that when one appears we know that something is not right and fix it. It's like a house with a broken window that doesn't get mended. Others start breaking more windows because there is one broken already.\r\n\r\nFWIW, most of the time, because our test suite is so noisy, I run pytest via `pytest --disable-warnings`, unless I'm trying to debug a failing test and I want to see if any warnings might be relevant to the issue at hand.",
"Moreover, I have been contributing for about 3 months now and I'm yet to be able to complete the test suite 100% error-free. There is always a handful of failing tests, which seem to be just fine on CIs. I'd like to work on that `USE_CUDA` elimination task, so I was hoping I could get `make test` to pass so that I'd have a working base line, but it's not quite there yet.",
"The problem is that I don't know how to catch TF useless (or them not having fixed deprecation warnings for other python libs so not so useless) warnings and keep others.\r\nAs for running the whole test suite, I've never tried on non-CPU (like what the CI does) so I don't know what's failing there or if it's important ;-)",
"> The problem is that I don't know how to catch TF useless warnings\r\n\r\nIt'd be quite easy to do if we are talking about tf loggers. I wrote a function that you can do this selectively for any of the offenders of choice (or all of them). https://github.com/huggingface/transformers/issues/3050#issuecomment-682167272\r\n\r\nBut really we are going back to the PR that wasn't received well, where I proposed to optionally silence the loggers for external modules for the test suite - https://github.com/huggingface/transformers/pull/6816\r\n\r\n> As for running the whole test suite, I've never tried on non-CPU (like what the CI does) so I don't know what's failing there or if it's important ;-)\r\n\r\nIt's probably important ;) Most applications of transformers are probably run on `gpus` - I could be wrong.\r\n\r\nAnd it'd help to have a normal CI that would run on GPU.\r\n\r\nNow that we are going to eliminate USE_CUDA, so that gpu will be used by default, it'll improve things a lot, as more developers will run into such problems and hopefully fix them.",
"I think the slow tests run the CI on GPU (even GPUs) so the common tests are all passing on that setup. You may have other issues linked to your env.",
"Only on scheduled CI which runs only a limited range of options. e.g. not testing newer pytorch releases."
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | something is wrong here:
```
$ pytest tests/test_trainer_callback.py
tests/test_trainer_callback.py:24
/mnt/nvme1/code/transformers-comet/tests/test_trainer_callback.py:24: PytestCollectionWarning: cannot collect test class 'TestTrainerCallback' because it has a __init__ constructor (from: tests/test_trainer_callback.py)
class TestTrainerCallback(TrainerCallback):
```
another one in the same test:
```
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:64: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow
```
hint: these are in the warnings section of the test.
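One way to silence the collection warning (a sketch, assuming the helper class only needs to opt out of pytest collection; renaming it so it does not start with `Test` would work too):
```python
from transformers import TrainerCallback

class TestTrainerCallback(TrainerCallback):
    # pytest skips classes whose `__test__` attribute is False,
    # even when the name starts with "Test".
    __test__ = False
```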
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7832/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7831/comments | https://api.github.com/repos/huggingface/transformers/issues/7831/events | https://github.com/huggingface/transformers/issues/7831 | 722,584,720 | MDU6SXNzdWU3MjI1ODQ3MjA= | 7,831 | [RAG] RagSequenceForGeneration should not load "facebook/rag-token-nq" and RagTokenForGeneration also should not load "facebook/rag-sequence-nq" | {
"login": "lalitpagaria",
"id": 19303690,
"node_id": "MDQ6VXNlcjE5MzAzNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalitpagaria",
"html_url": "https://github.com/lalitpagaria",
"followers_url": "https://api.github.com/users/lalitpagaria/followers",
"following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}",
"gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions",
"organizations_url": "https://api.github.com/users/lalitpagaria/orgs",
"repos_url": "https://api.github.com/users/lalitpagaria/repos",
"events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalitpagaria/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The model weights are actually 1-1 compatible with each other, so I see no reason why we should throw an exception here.",
"Hi Patrick, I also believe there are typos regarding the examples :\r\n\r\nOn \"sequence\" based : https://huggingface.co/facebook/rag-sequence-nq , the examples use \"token\" arguments e.g. \r\n\r\n```\r\nretriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True) \r\nmodel = RagSequenceForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever) \r\n```",
"@patrickvonplaten yes I think agree with you. I am closing this.",
"@patrickvonplaten \r\n\r\nI am seeing very weird behaviour. Various RAG generator and model combination giving me very different output.\r\nI am not able to understand why?\r\n\r\nCheck output of generators for **\"What is capital of Germany?\"** -\r\n```\r\n!pip install git+https://github.com/huggingface/transformers.git\r\n!pip install datasets\r\n!pip install faiss-cpu\r\n!pip install torch torchvision\r\n\r\nfrom transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration\r\nimport torch\r\nimport faiss\r\n\r\n\r\ntokenizer = RagTokenizer.from_pretrained(\"facebook/rag-sequence-nq\")\r\nretriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n\r\n\r\ninput_dict = tokenizer.prepare_seq2seq_batch(\"What is capital of Germany?\", return_tensors=\"pt\")\r\ninput_ids = input_dict[\"input_ids\"]\r\n\r\n# RagTokenForGeneration with \"facebook/rag-token-nq\"\r\nmodel = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\ngenerated_ids = model.generate(input_ids=input_ids)\r\ngenerated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(\"Result of model = \", generated_string)\r\n\r\n# RagSequenceForGeneration with \"facebook/rag-sequence-nq\"\r\nmodel = RagSequenceForGeneration.from_pretrained(\"facebook/rag-sequence-nq\", retriever=retriever)\r\ngenerated_ids = model.generate(input_ids=input_ids)\r\ngenerated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(\"Result of model = \", generated_string)\r\n\r\n# RagSequenceForGeneration with \"facebook/rag-token-nq\"\r\nmodel = RagSequenceForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\ngenerated_ids = model.generate(input_ids=input_ids)\r\ngenerated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(\"Result of model = \", generated_string)\r\n\r\n# RagTokenForGeneration with \"facebook/rag-sequence-nq\"\r\nmodel = RagTokenForGeneration.from_pretrained(\"facebook/rag-sequence-nq\", retriever=retriever)\r\ngenerated_ids = model.generate(input_ids=input_ids)\r\ngenerated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(\"Result of model = \", generated_string)\r\n```\r\nOutput of above run is (it is consistent behaviour) -\r\n```\r\nResult of model = [' german capital']\r\nResult of model = ['']\r\nResult of model = [' munich']\r\nResult of model = [' germany']\r\n```",
"> Hi Patrick, I also believe there are typos regarding the examples :\r\n> \r\n> On \"sequence\" based : https://huggingface.co/facebook/rag-sequence-nq , the examples use \"token\" arguments e.g.\r\n> \r\n> ```\r\n> retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True) \r\n> model = RagSequenceForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever) \r\n> ```\r\nShould be fixed - thanks :-) \r\nhttps://github.com/huggingface/transformers/blob/master/model_cards/facebook/rag-sequence-nq/README.md",
"> @patrickvonplaten\r\n> \r\n> I am seeing very weird behaviour. Various RAG generator and model combination giving me very different output.\r\n> I am not able to understand why?\r\n> \r\n> Check output of generators for **\"What is capital of Germany?\"** -\r\n> \r\n> ```\r\n> !pip install git+https://github.com/huggingface/transformers.git\r\n> !pip install datasets\r\n> !pip install faiss-cpu\r\n> !pip install torch torchvision\r\n> \r\n> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration\r\n> import torch\r\n> import faiss\r\n> \r\n> \r\n> tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-sequence-nq\")\r\n> retriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n> \r\n> \r\n> input_dict = tokenizer.prepare_seq2seq_batch(\"What is capital of Germany?\", return_tensors=\"pt\")\r\n> input_ids = input_dict[\"input_ids\"]\r\n> \r\n> # RagTokenForGeneration with \"facebook/rag-token-nq\"\r\n> model = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\n> generated_ids = model.generate(input_ids=input_ids)\r\n> generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\n> print(\"Result of model = \", generated_string)\r\n> \r\n> # RagSequenceForGeneration with \"facebook/rag-sequence-nq\"\r\n> model = RagSequenceForGeneration.from_pretrained(\"facebook/rag-sequence-nq\", retriever=retriever)\r\n> generated_ids = model.generate(input_ids=input_ids)\r\n> generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\n> print(\"Result of model = \", generated_string)\r\n> \r\n> # RagSequenceForGeneration with \"facebook/rag-token-nq\"\r\n> model = RagSequenceForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\n> generated_ids = model.generate(input_ids=input_ids)\r\n> generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\n> print(\"Result of model = \", generated_string)\r\n> \r\n> # RagTokenForGeneration with \"facebook/rag-sequence-nq\"\r\n> model = RagTokenForGeneration.from_pretrained(\"facebook/rag-sequence-nq\", retriever=retriever)\r\n> generated_ids = model.generate(input_ids=input_ids)\r\n> generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\n> print(\"Result of model = \", generated_string)\r\n> ```\r\n> \r\n> Output of above run is (it is consistent behaviour) -\r\n> \r\n> ```\r\n> Result of model = [' german capital']\r\n> Result of model = ['']\r\n> Result of model = [' munich']\r\n> Result of model = [' germany']\r\n> ```\r\n\r\nHey @lalitpagaria , the models are different in generating the answers - the results are not unexpected :-) If you take a closer look into the code you can see that both models expect the exact same weights, but have different generate() functions",
"Thanks @patrickvonplaten \nI will play with few parameters of RegConfig. "
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): RAG
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
## To reproduce
The following usage of the token and sequence models should not be allowed, as it may give an unintended result in the forward pass:
```
# RagSequenceForGeneration with "facebook/rag-token-nq"
model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
# RagTokenForGeneration with "facebook/rag-sequence-nq"
model = RagTokenForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
```
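A sketch of the kind of guard being requested (a hypothetical user-side check; it assumes the checkpoint's config populates the `architectures` field, which may not hold for every model):
```python
from transformers import AutoConfig

model_id = "facebook/rag-token-nq"
declared = getattr(AutoConfig.from_pretrained(model_id), "architectures", None) or []

# Raise early instead of silently loading a checkpoint saved for the other class.
if declared and "RagSequenceForGeneration" not in declared:
    raise ValueError(f"{model_id} was exported for {declared}, not RagSequenceForGeneration")
```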
Also, please correct the example at https://huggingface.co/transformers/master/model_doc/rag.html#ragsequenceforgeneration
## Expected behavior
The above usage should throw an exception because the two models are incompatible with each other. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7831/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7830/comments | https://api.github.com/repos/huggingface/transformers/issues/7830/events | https://github.com/huggingface/transformers/pull/7830 | 722,584,314 | MDExOlB1bGxSZXF1ZXN0NTA0MzA4OTA3 | 7,830 | fix wandb/comet problems | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for this! W're working on a refactor on the comet_ml SDK to move some of the complications from here over to there.",
"@dsblank, also while you're at it, would it be possible to mend these in both projects. Getting these w/ py38 in the test suite. Thank you!\r\n\r\n```\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/utils.py:30\r\n /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/utils.py:30: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n from collections import Mapping, defaultdict\r\n\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19\r\n /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:35\r\n /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:35: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n from collections import namedtuple, Mapping, Sequence\r\n\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55\r\n /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'\r\n```",
"@stas00 Sure, I'll take a look. I know some of these are caused by our support of Python 2.7, but perhaps there is a way to hide those.",
"Much appreciated, @dsblank!"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | As discussed in https://github.com/huggingface/transformers/issues/7821
* handle the case where `comet_ml` is installed but not configured
* fix error in wandb code:
```
> combined_dict = {**model.config.to_dict(), **combined_dict}
E AttributeError: 'NoneType' object has no attribute 'to_dict'
```
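A sketch of the kind of guard that avoids the `AttributeError` (hypothetical helper name, for illustration only; the actual fix lives in the wandb callback code):
```python
def merge_model_config(model, combined_dict):
    # `model.config` can be None (e.g. a plain nn.Module passed to Trainer),
    # so only merge it into the logged parameters when it is actually set.
    config = getattr(model, "config", None)
    if config is not None:
        combined_dict = {**config.to_dict(), **combined_dict}
    return combined_dict
```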
Fixes: #7821
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7830/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7830",
"html_url": "https://github.com/huggingface/transformers/pull/7830",
"diff_url": "https://github.com/huggingface/transformers/pull/7830.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7830.patch",
"merged_at": 1602789804000
} |
https://api.github.com/repos/huggingface/transformers/issues/7829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7829/comments | https://api.github.com/repos/huggingface/transformers/issues/7829/events | https://github.com/huggingface/transformers/issues/7829 | 722,581,213 | MDU6SXNzdWU3MjI1ODEyMTM= | 7,829 | [RAG] RagSequenceForGeneration: Running "retriever separately example" giving error | {
"login": "lalitpagaria",
"id": 19303690,
"node_id": "MDQ6VXNlcjE5MzAzNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalitpagaria",
"html_url": "https://github.com/lalitpagaria",
"followers_url": "https://api.github.com/users/lalitpagaria/followers",
"following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}",
"gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions",
"organizations_url": "https://api.github.com/users/lalitpagaria/orgs",
"repos_url": "https://api.github.com/users/lalitpagaria/repos",
"events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalitpagaria/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @lalitpagaria - spot on! Thanks a lot for your issue, you're 100% correct here. \r\n\r\nI actually noticed that the `RagSequence` generate function is a bit more complex so that we cannot do the decomposed (embed, retrieve, generate) example here...\r\n\r\nThe PR linked to the issue removes the use case from the examples and fixes the one for `RagToken...`.",
"UPDATE: @patrickvonplaten Sorry for my miss-understanding. Yes without calling generate directly fixed this with your PR. Thanks you very much for fix.\r\n\r\n-------------------\r\n@patrickvonplaten Thank you for update. ~~I tried your changes on my code snippets and still got same error. If you see my example I am passing `context_input_ids`~~\r\n"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @LysandreJik
## Information
Model I am using (Bert, XLNet ...): RAG
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: `dummy_dataset`
## To reproduce
Steps to reproduce the behavior:
1. Execute code snippets provided (Partially modified example script from https://huggingface.co/transformers/master/model_doc/rag.html)
Code snippets:
```
!pip install git+https://github.com/huggingface/transformers.git
!pip install datasets
!pip install faiss-cpu
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration
import torch
import faiss
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", use_dummy_dataset=True)
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt")
input_ids = input_dict["input_ids"]
# Calling retriever separately
question_hidden_states = model.question_encoder(input_ids)[0]
# 2. Retrieve
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
print(docs_dict)
doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)).squeeze(1)
# 3. Forward to generator
outputs = model.generate(input_ids=input_ids, context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores)
generated_string = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(generated_string)
```
Stacktrace:
```
AssertionError Traceback (most recent call last)
<ipython-input-5-9f622b1f6353> in <module>()
7 doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)).squeeze(1)
8 # 3. Forward to generator
----> 9 outputs = model.generate(input_ids=input_ids, context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores)
10 generated_string = tokenizer.batch_decode(outputs, skip_special_tokens=True)
11 print(generated_string)
5 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
13 def decorate_context(*args, **kwargs):
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
17
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in generate(self, input_ids, attention_mask, context_input_ids, do_deduplication, num_return_sequences, num_beams, **kwargs)
902 # then, run model forwards to get nll scores:
903 new_input_ids = input_ids[index : index + 1].repeat(len(output_sequences), 1)
--> 904 outputs = self(new_input_ids, labels=output_sequences, exclude_bos_score=True)
905 top_cand_inds = (-outputs["loss"]).topk(num_doc_return_sequences)[1]
906
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, past_key_values, context_input_ids, context_attention_mask, doc_scores, use_cache, output_attentions, output_hidden_states, output_retrieved, exclude_bos_score, reduce_loss, labels, **kwargs)
767 output_attentions=output_attentions,
768 output_hidden_states=output_hidden_states,
--> 769 output_retrieved=output_retrieved,
770 )
771
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, past_key_values, doc_scores, context_input_ids, context_attention_mask, use_cache, output_attentions, output_hidden_states, output_retrieved)
589 assert (
590 context_input_ids is not None
--> 591 ), "Make sure that `context_input_ids` are passed, if no `retriever` is set. Alternatively, you can set a retriever using the `set_retriever(...)` function."
592 assert (
593 context_attention_mask is not None
AssertionError: Make sure that `context_input_ids` are passed, if no `retriever` is set. Alternatively, you can set a retriever using the `set_retriever(...)` function.
```
I suspect `context_input_ids` is not passed to the `forward` method. And if the model is not initialised with a retriever, then the `forward` function complains about the missing `context_input_ids` or `retriever`. I am referring to the following piece of code in the `RagSequenceForGeneration` class and its `generate` function.
```
# then, run model forwards to get nll scores:
new_input_ids = input_ids[index : index + 1].repeat(len(output_sequences), 1)
outputs = self(new_input_ids, labels=output_sequences, exclude_bos_score=True)
top_cand_inds = (-outputs["loss"]).topk(num_doc_return_sequences)[1]
```
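For comparison, the decomposed retrieve-then-generate path does go through with `RagTokenForGeneration`, whose `generate()` accepts the precomputed documents directly - a minimal sketch along the lines of the snippet above, using the token variant of the checkpoint:
```python
import torch
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

# requires `datasets` and `faiss-cpu` to be installed (see the snippet above)
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq")

input_ids = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", return_tensors="pt")["input_ids"]

# 1. Encode the question, 2. retrieve documents, 3. generate from the retrieved contexts
question_hidden_states = model.question_encoder(input_ids)[0]
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
doc_scores = torch.bmm(
    question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)
).squeeze(1)

outputs = model.generate(
    context_input_ids=docs_dict["context_input_ids"],
    context_attention_mask=docs_dict["context_attention_mask"],
    doc_scores=doc_scores,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```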
## Expected behavior
It should work as intended, as it does for `RagTokenForGeneration`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7829/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7828/comments | https://api.github.com/repos/huggingface/transformers/issues/7828/events | https://github.com/huggingface/transformers/pull/7828 | 722,541,134 | MDExOlB1bGxSZXF1ZXN0NTA0MjczOTkx | 7,828 | fix: ignore padding tokens in Bart loss | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I don't seem to be able to add reviewers but I guess this would fall into the domain of @sshleifer. ",
"Thx for the contribution!\r\n\r\nFYI we have a seq2seq finetuner with this bugfix.\r\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py#L43\r\n\r\nI worked on this at some point and thought I had fixed it.\r\n\r\nAny issue with merging this @patil-suraj @patrickvonplaten ?",
"LGTM. but should be documented. seen few notebooks where people are setting pad tokens to -100 in labels . We should change this for T5 as well",
"> FYI we have a seq2seq finetuner with this bugfix.\r\n> https://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py#L43\r\n\r\nThanks, I did not see that! With the fix in the model I was able to train Pegasus with the standard Trainer.",
"> LGTM. but should be documented. seen few notebooks where people are setting pad tokens to -100 in labels . We should change this for T5 as well\r\n\r\nGood point, I remember that through me off because it explicitly says -100 works in the model's [docstring](https://github.com/huggingface/transformers/blob/4dbca500226e27be21dbb0eb08117dfd0c5264b3/src/transformers/modeling_bart.py#L1048).",
"I updated the docstring and added two assertions. Are these the assertions you were looking for @patil-suraj ?",
"I am not in favor of this PR to be honest. \r\n\r\n1) By replacing the default `ignore_idx = -100` to `ignore_idx = pad_token_id` we constrain the user to be able to only ignore padding tokens but no other tokens. Previously a user could simply set tokens that should be ignored to -100 (pad_token, but in addition all other tokens). After this PR a user would not be able to ignore other tokens that the pad_token anymore. \r\n\r\n2) This is not consistent with other models, which always only use -100 to ignore the loss\r\n\r\n3) This is just a convenience function that should not be handled directly in the model itself. I'm 100% fine if this is handled in the `Seq2SeqTrainer` or `Trainer`\r\n\r\nAlready discussed offline with @sshleifer. \r\n\r\nWhat are your thoughts on this @LysandreJik @sgugger @thomwolf ?",
"I agree that it would be nice to have a uniform pattern across the model architectures allowing to use the models interchangeably.\r\n\r\nIt seems there is some work needed to make this allow `-100` tokens in `Bart` since they break the embedding process in the forward pass:\r\n```Python\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, output_attentions, output_hidden_states, return_dict)\r\n 332 attention_mask = invert_mask(attention_mask)\r\n 333 \r\n--> 334 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale\r\n 335 embed_pos = self.embed_positions(input_ids)\r\n 336 x = inputs_embeds + embed_pos\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 530 result = self._slow_forward(*input, **kwargs)\r\n 531 else:\r\n--> 532 result = self.forward(*input, **kwargs)\r\n 533 for hook in self._forward_hooks.values():\r\n 534 hook_result = hook(self, input, result)\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in forward(self, input)\r\n 112 return F.embedding(\r\n 113 input, self.weight, self.padding_idx, self.max_norm,\r\n--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n 115 \r\n 116 def extra_repr(self):\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 1482 # remove once script supports set_grad_enabled\r\n 1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n 1485 \r\n 1486 \r\n\r\nRuntimeError: index out of range: Tried to access index -100 out of table with 50263 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418\r\n```",
"@lvwerra I think we should ignore `pad_token_id`, but if we go the -100 route it should be fine if you pass `decoder_input_ids` to BART? I don't see a call on your traceback so can't be sure.",
"@sshleifer I tried to pass `-100` in the `input_ids`:\r\n```Python\r\nfrom transformers import BartForConditionalGeneration, BartTokenizer\r\nimport torch\r\n\r\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')\r\n\r\nmodel(torch.tensor([[0, -100]]))\r\n```\r\n\r\nBut I get the same error with:\r\n```Python\r\nt =model(torch.tensor([[0, 1]]), decoder_input_ids=torch.tensor([[0, 0, -100]]))\r\n```\r\n\r\nIt seems that the error comes from the line `x = self.embed_tokens(input_ids) * self.embed_scale` which is called in both forward passes of the encoder and decoder modules. How do you usually deal with this? \r\n\r\n",
"I think to successfully implement the -100 strategy (Which I have never done),\r\nyou have to pass labels that contain -100 and decoder_input_ids that don't contain -100.",
"Yes I would tend to agree with @patrickvonplaten, I think that the usual philosophy of the lib is that we let the user handle this himself and have clear and simple exemple which shows that you should replace pad_token ids with ignore index in the labels.",
"I somehow missed the notification when @patrickvonplaten asked for advice earlier but I agree with what he said. We only handle a basic loss computation inside the model. We refused PRs to add weights for cross-entropy recently, for the same reason @thomwolf just pointed out: anything fancier should be done by the user themself, as we can't support every use case.\r\n\r\nFor the `Trainer` and `Seq2SeqTrainer`, there is any easy way to handle a custom loss computation, by subclassing and overriding the `compute_loss` function (see the [example in the docs](https://huggingface.co/transformers/main_classes/trainer.html).",
"Thanks for the feedback @thomwolf & @sgugger! From a user perspective, I think it would be great if one could use a model in combination with the `Trainer` for the model's standard task (e.g. conditional generation) without customisation or subclassing. If the default loss function of the model does not support that, then for which other use-cases would the default behaviour (not ignoring padding tokens in the labels) still be useful? Customisation already requires advanced knowledge of the inner workings of the `Trainer` class which not all users might have. If a user wants to do something more sophisticated than the standard task that requires modification of this behaviour, they could still write a custom `compute_loss` function.\r\n\r\nIf you want to go the `-100`-route the user has to \"manually\" right shift the `labels` tokens to create the `decoder_input_ids ` and then replace the `pad_token_id` in the `labels` with `-100`. As far as I can tell, this is always required to train the model for conditional generation, so I am wondering why it should not be the default behaviour inside the model `BartForConditionalGeneration`? Otherwise, the defaults are never used in practice and customisation is always required to train the model. \r\n\r\n",
"Hey @lvwerra,\r\n\r\nI think the main arguments against ignoring the `pad_token_id` inside `BartForConditionalGeneration` is that: \r\n\r\n1. We cannot allow all models to have this behavior because some of them do not have a `pad_token_id`, *e.g.* GPT2. Because consistency between models is one of our top priorities, it is not a good idea to use -100 for some models and `pad_token_id` for others.\r\n\r\n2. The are use cases where users want to not only ignore the padding token, but also other tokens, *e.g.* the eos token id. In this case it would be cleaner to set both pad and eos to -100 and ignore those tokens than setting the eos token to the pad token.",
"Ok, so if I understand correctly the minimal example to train a Bart model given a `dataset` object with columns `'text'` and `'summary'` would be to apply the following function (e.g. with `.map()`) before passing the model and the dataset to the `Trainer`:\r\n\r\n```Python\r\nfrom transformers.modeling_bart import shift_tokens_right\r\n\r\ndef convert_to_features(example_batch):\r\n input_encodings = tokenizer.batch_encode_plus(example_batch['text'], pad_to_max_length=True)\r\n target_encodings = tokenizer.batch_encode_plus(example_batch['summary'], pad_to_max_length=True)\r\n \r\n labels = target_encodings['input_ids']\r\n decoder_input_ids = shift_tokens_right(labels, model.config.pad_token_id)\r\n labels[labels[:, :] == 0] = -100\r\n \r\n encodings = {\r\n 'input_ids': input_encodings['input_ids'],\r\n 'attention_mask': input_encodings['attention_mask'],\r\n 'decoder_input_ids': decoder_input_ids,\r\n 'labels': labels,\r\n }\r\n\r\n return encodings\r\n```\r\nIt took me quite a while reading examples and code reading to figure this out. Not only the thing with the padding tokens and -100 but also the difference between `decoder_input_ids` and `labels`. I am more than happy to update the docs to save the next person some time, since this seems not to be an edge case but the minimal work required to train Bart for conditional generation. Is there a good place to point this out? \r\n",
"you could write a forums post and link to it from bart.rst?"
] | 1,602 | 1,606 | 1,606 | MEMBER | null | # What does this PR do?
There is a discrepancy between the [fine-tuning script](https://github.com/huggingface/transformers/blob/e7aa64838cc604abf7a49e69ca0ffe7af683d8ca/examples/seq2seq/finetune.py#L153) and the [BartForConditionalGeneration](https://github.com/huggingface/transformers/blob/e7aa64838cc604abf7a49e69ca0ffe7af683d8ca/src/transformers/modeling_bart.py#L1113) which is also noted in the comments.
From `examples/seq2seq/finetune.py`:
```python
# Same behavior as modeling_bart.py, besides ignoring pad_token_id
ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=pad_token_id)
```
From `transformers/src/modeling_bart.py`:
```python
loss_fct = CrossEntropyLoss()
# TODO(SS): do we need to ignore pad tokens in labels?
```
Training with the `Trainer` and `BartForConditionalGeneration` results in a model that produces garbled text (lots of repetitions and no coherence). Adding the `ignore_index=self.config.pad_token_id` in the `CrossEntropyLoss` resolves the issue.
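In code, the proposed change boils down to constructing the loss with the padding id ignored - a self-contained sketch with dummy tensors (the real change sits inside `BartForConditionalGeneration.forward`):
```python
import torch
from torch.nn import CrossEntropyLoss

pad_token_id = 1  # Bart's padding id (assumed here for the sketch)
lm_logits = torch.randn(2, 5, 50264)                 # (batch, seq_len, vocab_size)
labels = torch.tensor([[0, 42, 17, 2, pad_token_id],
                       [0, 7, 2, pad_token_id, pad_token_id]])

# With ignore_index set, padded label positions no longer contribute to the loss.
loss_fct = CrossEntropyLoss(ignore_index=pad_token_id)
loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
```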
Besides a before and after run I did not study the behaviour in a systematic way since training the model requires a significant amount of time and compute. If you would like to see more testing let me know what you think is the best way to test this thoroughly.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7828/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7828",
"html_url": "https://github.com/huggingface/transformers/pull/7828",
"diff_url": "https://github.com/huggingface/transformers/pull/7828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7828.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7827/comments | https://api.github.com/repos/huggingface/transformers/issues/7827/events | https://github.com/huggingface/transformers/issues/7827 | 722,525,056 | MDU6SXNzdWU3MjI1MjUwNTY= | 7,827 | Do I need to apply the softmax function to my logit before calculating the CrossEntropyLoss? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"There is no need to add the nn.softmax function. Pls. refer to the [pytorch document](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | Hello, I am trying to compute the CrossEntropyLoss directly by using this code:
```python
loss_fct = CrossEntropyLoss()
mc_loss = loss_fct(reshaped_logits, mc_labels)
```
If `reshaped_logits` contains the logit values before softmax, should I apply the `nn.Softmax` function before I call `loss_fct(reshaped_logits, mc_labels)`? Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7827/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7826/comments | https://api.github.com/repos/huggingface/transformers/issues/7826/events | https://github.com/huggingface/transformers/pull/7826 | 722,514,395 | MDExOlB1bGxSZXF1ZXN0NTA0MjUyMjQ5 | 7,826 | [Pipelines] Fix links to model lists | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7826/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7826",
"html_url": "https://github.com/huggingface/transformers/pull/7826",
"diff_url": "https://github.com/huggingface/transformers/pull/7826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7826.patch",
"merged_at": 1602831423000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7825/comments | https://api.github.com/repos/huggingface/transformers/issues/7825/events | https://github.com/huggingface/transformers/issues/7825 | 722,509,170 | MDU6SXNzdWU3MjI1MDkxNzA= | 7,825 | RFC: Move `_NoLayerEmbedTokens` to modeling_tf_utils.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`TFSharedEmbeddings` is in `modeling_tf_utils.py`\r\nThis seems similar.",
"I'm fine with moving this into `modeling_tf_utils.py`"
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | It has 0 computation and will be needed in many places. Here is the code:
```python
class _NoLayerEmbedTokens:
    """
    This class wraps the TFSharedEmbeddings layer in a plain Python (non-Keras-layer) class
    to avoid problems with weight restoring. It also makes sure that the layer is called from
    the correct scope so that the correct weights are saved/restored.
    """

    def __init__(self, layer, abs_scope_name=None):
        self._layer = layer
        self._abs_scope_name = abs_scope_name

    def call(self, inputs, mode="embedding"):
        if self._abs_scope_name is None:
            return self._layer.call(inputs, mode)

        # if an abs scope name is given to the embedding variable, call variable from absolute scope
        with tf.compat.v1.variable_scope(self._abs_scope_name, auxiliary_name_scope=False) as abs_scope_name:
            with tf.name_scope(abs_scope_name.original_name_scope):
                return self._layer.call(inputs, mode)

    def __call__(self, inputs, mode="embedding"):
        if self._abs_scope_name is None:
            return self._layer(inputs, mode)

        # if an abs scope name is given to the embedding variable, call variable from absolute scope
        with tf.compat.v1.variable_scope(self._abs_scope_name, auxiliary_name_scope=False) as abs_scope_name:
            with tf.name_scope(abs_scope_name.original_name_scope):
                return self._layer(inputs, mode)
```
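A hypothetical usage sketch (names and sizes are illustrative, not taken from the actual TFBart code): the wrapper lets sub-modules reuse a single `TFSharedEmbeddings` weight matrix without Keras tracking a duplicate layer.
```python
import tensorflow as tf
from transformers.modeling_tf_utils import TFSharedEmbeddings

# single weight matrix that both the embedding lookup and the LM head should share
shared = TFSharedEmbeddings(vocab_size=50265, hidden_size=64, name="model.shared")

# wrap it; encoder/decoder can hold a reference without registering a second Keras layer
embed_tokens = _NoLayerEmbedTokens(shared)

input_ids = tf.constant([[0, 31414, 232, 2]])                   # (batch, seq_len)
token_embeddings = embed_tokens(input_ids, mode="embedding")    # (1, 4, 64)
logits = embed_tokens(token_embeddings, mode="linear")          # (1, 4, 50265)
```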
I will copy it into tfbart to avoid getting bogged down in debate, but wdyt @patrickvonplaten ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7825/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7824/comments | https://api.github.com/repos/huggingface/transformers/issues/7824/events | https://github.com/huggingface/transformers/issues/7824 | 722,496,085 | MDU6SXNzdWU3MjI0OTYwODU= | 7,824 | Bart Caching: do we need encoder outputs after step 1? | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Very good point! But I'm actually not sure if the memory consumption of `num_layers * cached k, v states` already makes memory consumption of `encoder_outputs` negligible. Would be cool to try out.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,609 | 1,609 | CONTRIBUTOR | null | Since the projected cross-attention k,v are cached, the encoder outputs don't seem to be needed after the first decoding step.
Maybe we could save some memory if we deleted them once we are done with them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7824/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7824/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7823/comments | https://api.github.com/repos/huggingface/transformers/issues/7823/events | https://github.com/huggingface/transformers/issues/7823 | 722,477,156 | MDU6SXNzdWU3MjI0NzcxNTY= | 7,823 | Support for custom data_collator in Trainer.train() with datasets.Dataset | {
"login": "kaletap",
"id": 25740957,
"node_id": "MDQ6VXNlcjI1NzQwOTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/25740957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaletap",
"html_url": "https://github.com/kaletap",
"followers_url": "https://api.github.com/users/kaletap/followers",
"following_url": "https://api.github.com/users/kaletap/following{/other_user}",
"gists_url": "https://api.github.com/users/kaletap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaletap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaletap/subscriptions",
"organizations_url": "https://api.github.com/users/kaletap/orgs",
"repos_url": "https://api.github.com/users/kaletap/repos",
"events_url": "https://api.github.com/users/kaletap/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaletap/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just set `remove_unused_columns=False` in your `TrainingArguments` to disable that behavior.",
"Great, I completely missed that! Thanks."
] | 1,602 | 1,602 | 1,602 | NONE | null | # 🚀 Feature request
Currently (transformers==3.3.1) `Trainer` removes unknown columns (those not present in the model's forward method) from a `datasets.Dataset` object. This prevents using a custom DataCollator with the `.train` method, since the collator no longer receives the columns one would want to use.
I would be interested in an option to not remove unknown columns and to let the user handle them in the DataCollator (or provide their own DataLoaders to the train method).
## Motivation
The example code I was trying to use for training looks like this:
```python
from typing import List

import torch
from datasets import load_dataset
from transformers import RobertaForSequenceClassification, RobertaTokenizer, Trainer

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForSequenceClassification.from_pretrained('roberta-base', return_dict=True)

train_dataset = load_dataset("yelp_polarity", split="train")
val_dataset = load_dataset("yelp_polarity", split="test")


class DataCollator:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, examples: List[dict]):
        labels = [example['label'] for example in examples]
        texts = [example['text'] for example in examples]
        tokenizer_output = self.tokenizer(texts, truncation=True, padding=True)
        tokenizer_output['input_ids'] = torch.tensor(tokenizer_output['input_ids'])
        tokenizer_output['attention_mask'] = torch.tensor(tokenizer_output['attention_mask'])
        output_dict = dict(labels=labels, **tokenizer_output)
        return output_dict


data_collator = DataCollator(tokenizer)

trainer = Trainer(
    model=model,                  # the instantiated 🤗 Transformers model to be trained
    train_dataset=train_dataset,  # training dataset
    eval_dataset=val_dataset,     # evaluation dataset
    data_collator=data_collator   # problem: Trainer drops the 'text'/'label' columns it does not recognize, so this doesn't really work. Is it a bug?
)

trainer.train()
```
(with `trainer.train()` not working)
## Your contribution
Before trying to fix things, I would be interested to learn if there is a suggested alternative way of doing that?
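The resolution suggested in the comments is to disable the column-dropping behaviour via `TrainingArguments`. A minimal sketch continuing the snippet above (the `output_dir` value is illustrative):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",        # illustrative
    remove_unused_columns=False,   # keep the 'text' and 'label' columns for the custom collator
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    data_collator=data_collator,
)
trainer.train()
```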
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7823/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7822/comments | https://api.github.com/repos/huggingface/transformers/issues/7822/events | https://github.com/huggingface/transformers/issues/7822 | 722,475,538 | MDU6SXNzdWU3MjI0NzU1Mzg= | 7,822 | [testing] test_modeling_deberta.py is failing | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good to know that this we need to update this code for version 1.8.0. Thanks!",
"Ah, right, I didn't notice that it was nightly specific - if I'm not mistaken this is really for 1.7 I think - it's hard to tell though.",
"Ah, possible! I will take a look :)",
"Was fixed in https://github.com/huggingface/transformers/pull/8057"
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | ```
USE_CUDA=1 pytest tests/test_modeling_deberta.py
```
```
====================================================================== test session starts =======================================================================
platform linux -- Python 3.8.5, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /mnt/nvme1/code/huggingface/transformers-master
plugins: xdist-2.1.0, forked-1.3.0, typeguard-2.9.1, flake8-1.0.6, hydra-core-1.0.0, cov-2.10.1, instafail-0.4.2, flakefinder-1.0.0
collected 33 items
tests/test_modeling_deberta.py ...F.FssFs............s..F....sss [100%]
============================================================================ FAILURES ============================================================================
______________________________________________________________ DebertaModelTest.test_deberta_model _______________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_deberta_model>
def test_deberta_model(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_deberta_model(*config_and_inputs)
tests/test_modeling_deberta.py:210:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_deberta.py:159: in create_and_check_deberta_model
sequence_output = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids)[0]
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[-0.1865, 1.6041, -1.3552, ..., -1.2947, -0.9667, 0.9630],
[ 1.1352, 0.1829, 0.3669, ..., -2.4... [-0.3743, -0.0251, -0.8356, ..., -0.9786, -1.3914, -1.9630]]],
device='cuda:0', grad_fn=<MulBackward0>)
attention_mask = tensor([[[[1, 1, 1, 0, 1, 1, 0],
[1, 1, 1, 0, 1, 1, 0],
[1, 1, 1, 0, 1, 1, 0],
[0, 0, 0,..., 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 0, 0, 1]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp":352, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
__________________________________________________________ DebertaModelTest.test_feed_forward_chunking ___________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_feed_forward_chunking>
def test_feed_forward_chunking(self):
(
original_config,
inputs_dict,
) = self.model_tester.prepare_config_and_inputs_for_common()
for model_class in self.all_model_classes:
torch.manual_seed(0)
config = copy.deepcopy(original_config)
model = model_class(config)
model.to(torch_device)
model.eval()
> hidden_states_no_chunk = model(**self._prepare_for_class(inputs_dict, model_class))[0]
tests/test_modeling_common.py:617:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[-0.1510, 1.7101, -1.2962, ..., 1.5861, -0.6766, 0.3315],
[ 0.0000, -0.0000, -0.0000, ..., 0.0... [-0.0000, -0.0000, -0.0000, ..., 0.0000, 0.0000, 0.0000]]],
device='cuda:0', grad_fn=<MulBackward0>)
attention_mask = tensor([[[[1, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[1, 0, 0,..., 0, 1, 1, 0],
[1, 0, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp":352, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
_______________________________________________________ DebertaModelTest.test_for_sequence_classification ________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_for_sequence_classification>
def test_for_sequence_classification(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_deberta_for_sequence_classification(*config_and_inputs)
tests/test_modeling_deberta.py:214:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_deberta.py:177: in create_and_check_deberta_for_sequence_classification
loss, logits = model(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:987: in forward
outputs = self.deberta(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[-6.2643e-01, -1.6159e+00, 1.4385e+00, ..., 1.2083e+00,
-7.2520e-02, 4.6405e-01],
[ 7....62e-01, 6.8716e-01, ..., -1.0658e+00,
-9.1048e-01, 4.0098e-01]]], device='cuda:0', grad_fn=<MulBackward0>)
attention_mask = tensor([[[[1, 1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0, 0],
[1, 1, 1,..., 0, 0, 0, 0],
[1, 1, 1, 1, 0, 1, 1],
[1, 1, 1, 1, 0, 1, 1]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp":352, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
_________________________________________________________ DebertaModelTest.test_resize_tokens_embeddings _________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_resize_tokens_embeddings>
def test_resize_tokens_embeddings(self):
(
original_config,
inputs_dict,
) = self.model_tester.prepare_config_and_inputs_for_common()
if not self.test_resize_embeddings:
return
for model_class in self.all_model_classes:
config = copy.deepcopy(original_config)
model = model_class(config)
model.to(torch_device)
if self.model_tester.is_training is False:
model.eval()
model_vocab_size = config.vocab_size
# Retrieve the embeddings and clone theme
model_embed = model.resize_token_embeddings(model_vocab_size)
cloned_embeddings = model_embed.weight.clone()
# Check that resizing the token embeddings with a larger vocab size increases the model's vocab size
model_embed = model.resize_token_embeddings(model_vocab_size + 10)
self.assertEqual(model.config.vocab_size, model_vocab_size + 10)
# Check that it actually resizes the embeddings matrix
self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] + 10)
# Check that the model can still do a forward pass successfully (every parameter should be resized)
> model(**self._prepare_for_class(inputs_dict, model_class))
tests/test_modeling_common.py:655:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[ 0.0000, -1.3935, 1.0241, ..., 0.4936, 0.0103, 0.0000],
[-0.9223, 0.6012, -0.7819, ..., -0.8... [ 0.0000, -0.4612, 1.3087, ..., 1.2537, 0.3499, -0.9116]]],
device='cuda:0', grad_fn=<XDropoutBackward>)
attention_mask = tensor([[[[1, 1, 0, 0, 1, 1, 0],
[1, 1, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0,..., 0, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0, 1]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp":352, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
======================================================================== warnings summary ========================================================================
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/utils.py:30
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/utils.py:30: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import Mapping, defaultdict
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/graphql/type/directives.py:55
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/graphql/type/typemap.py:1
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/graphql/type/typemap.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import OrderedDict, Sequence, defaultdict
tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model
tests/test_modeling_deberta.py::DebertaModelTest::test_feed_forward_chunking
tests/test_modeling_deberta.py::DebertaModelTest::test_for_sequence_classification
tests/test_modeling_deberta.py::DebertaModelTest::test_resize_tokens_embeddings
/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_deberta.py:574: UserWarning: Output 0 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp:480.)
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model
tests/test_modeling_deberta.py::DebertaModelTest::test_feed_forward_chunking
tests/test_modeling_deberta.py::DebertaModelTest::test_for_sequence_classification
tests/test_modeling_deberta.py::DebertaModelTest::test_resize_tokens_embeddings
/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_deberta.py:575: UserWarning: Output 2 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp:480.)
value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
tests/test_modeling_deberta.py::DebertaModelTest::test_model_outputs_equivalence
/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_deberta.py:1011: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/utils/python_arg_parser.cpp:945.)
label_index = (labels >= 0).nonzero()
-- Docs: https://docs.pytest.org/en/stable/warnings.html
==================================================================== short test summary info =====================================================================
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/con...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_feed_forward_chunking - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_for_sequence_classification - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILE...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_resize_tokens_embeddings - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED a...
====================================================== 4 failed, 22 passed, 7 skipped, 13 warnings in 7.85s ====================================================
```
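The failures above all come from the same pattern: in-place `+=` on tensors that are autograd views produced by `chunk()`, which this nightly build rejects. A minimal, self-contained sketch of the out-of-place style that avoids the check (illustrative only, not necessarily the exact change in the eventual fix):
```python
import torch

hidden = torch.randn(1, 7, 96, requires_grad=True)
q_bias = torch.zeros(32, requires_grad=True)
v_bias = torch.zeros(32, requires_grad=True)

# chunk() returns views of `hidden`; writing into them in place triggers the autograd assert
query_layer, key_layer, value_layer = hidden.chunk(3, dim=-1)

# out-of-place version: build new tensors instead of mutating the views
query_layer = query_layer + q_bias[None, None, :]
value_layer = value_layer + v_bias[None, None, :]
```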
Thank you!
## Environment info
```
- `transformers` version: master
- Platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201014 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7822/timeline | completed | null | null |