repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 16,553 | closed | add a template to add missing tokenization test | # What does this PR do?
In this PR I propose to add a cookie cutter template for tokenization tests.
It is a first version and could be proposed as a good first issue, inviting users to add the missing tests for the tokenizers of the following models:
- Flaubert #15137
- LED
- RemBert
- Splinter
and eventually for these tokenizers too (currently they just inherit from `BertTokenizer` and just re-define the attributes):
- MobileBert
- ConvBert
- Electra
- Longformer
- RetriBert
I plan to give a little more information in the good first issue ticket on how to add the test than is currently shown in the readme, but don't hesitate to say so if you think it is better to put as much as possible in the readme itself.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. I would love to have your input on this @LysandreJik , @sgugger , @patrickvonplaten or @patil-suraj :hugs:
| 04-01-2022 15:20:18 | 04-01-2022 15:20:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,552 | closed | How loss is calculated in MLM transformers training and the calculation/Intuition behind it? | I am training an MLM model using the `XLM-RoBERTa large` model.
Here is the standard code.
```python
import torch
import pandas as pd
import transformers as tr

tokenizer = tr.XLMRobertaTokenizer.from_pretrained("xlm-roberta-large", local_files_only=True)
model = tr.XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-large", return_dict=True,local_files_only=True)
df=pd.read_csv("training_data_multilingual.csv")
train_df=df.message_text.tolist()
train_df=list(set(train_df))
train_df = [x for x in train_df if str(x) != 'nan']
train_encodings = tokenizer(train_df, truncation=True, padding=True, max_length=512, return_tensors="pt")
class SEDataset(torch.utils.data.Dataset):
def __init__(self, encodings):
self.encodings = encodings
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
return item
def __len__(self):
return len(self.encodings["attention_mask"])
train_data = SEDataset(train_encodings)
# print("train data created")
# `data_collator` is used below but was not defined in the original snippet; a standard MLM
# collator (assumed settings) is added here so the example runs end to end.
data_collator = tr.DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args = tr.TrainingArguments(
output_dir='results_mlm_vocab_exp'
,logging_dir='logs_mlm_vocab_exp' # directory for storing logs
,save_strategy="epoch"
,learning_rate=2e-5
,logging_steps=6000
,overwrite_output_dir=True
,num_train_epochs=10
,per_device_train_batch_size=2
,prediction_loss_only=True
,gradient_accumulation_steps=4
,bf16=True #Ampere GPU
,optim="adamw_hf"
)
trainer = tr.Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_data
)
trainer.train()
```
I have a few questions related to this:
* How is the loss calculated in MLM training? I see logs like `{'loss': 1.6117, 'learning_rate': 1.751861042183623e-05, 'epoch': 2.48}` printed during training. I guess this is the training loss? If so, how is it calculated?
* How do I pass validation data when using `TrainingArguments`? Is it in the same format as the training data?
* Does it make sense to compute precision, recall, and F1 score on the training and validation data for MLM training? If so, how can this be achieved with the `Trainer`?
Any reading links would also be appreciated. | 04-01-2022 15:08:29 | 04-01-2022 15:08:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,551 | closed | Add PLBartTokenizerFast | This PR adds a fast tokenizer for PLBart. It is currently a work in progress.
@patil-suraj @sgugger @LysandreJik @patrickvonplaten
### To-do:
- [ ] Add missing classes convert_slow_tokenizer
- [ ] Convert and add the tokenizer files to all the model checkpoints
- [ ] Fix the tests | 04-01-2022 14:41:46 | 04-01-2022 14:41:46 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16551). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger I am actually facing an issue with the converter. The `PLBartTokenizer` accepts an argument called `language_codes`, based on which the language codes for the tokenizer are decided.
The converter only takes one list of language codes.
Should I modify the method which is used in `PreTrainedTokenizerFast` to allow for adding extra arguments like `language_codes`? What is the best way of going about this according to you?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>Hey @gchhablani ! Let us know if you need any help with this :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,550 | closed | pytest throws ImportError: cannot import name 'json' from 'itsdangerous' | Running any test using pytest throws the following `ImportError`:
```python
Traceback (most recent call last):
File "/home/crocoder/anaconda3/envs/transformers_env/bin/pytest", line 8, in <module>
sys.exit(console_main())
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/config/__init__.py", line 187, in console_main
code = main()
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/config/__init__.py", line 145, in main
config = _prepareconfig(args, plugins)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/config/__init__.py", line 324, in _prepareconfig
config = pluginmanager.hook.pytest_cmdline_parse(
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/helpconfig.py", line 102, in pytest_cmdline_parse
config: Config = outcome.get_result()
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1016, in pytest_cmdline_parse
self.parse(args)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1304, in parse
self._preparse(args, addopts=addopts)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1187, in _preparse
self.pluginmanager.load_setuptools_entrypoints("pytest11")
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/pluggy/_manager.py", line 287, in load_setuptools_entrypoints
plugin = ep.load()
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/importlib/metadata.py", line 77, in load
module = import_module(match.group('module'))
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 168, in exec_module
exec(co, module.__dict__)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/dash/__init__.py", line 5, in <module>
from .dash import Dash, no_update # noqa: F401,E402
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 168, in exec_module
exec(co, module.__dict__)
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/dash/dash.py", line 17, in <module>
import flask
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/flask/__init__.py", line 19, in <module>
from . import json
File "/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/flask/json/__init__.py", line 15, in <module>
from itsdangerous import json as _json
ImportError: cannot import name 'json' from 'itsdangerous' (/home/crocoder/anaconda3/envs/transformers_env/lib/python3.8/site-packages/itsdangerous/__init__.py)
```
### Env details
- conda: 4.10.3
- python: 3.8.13
- itsdangerous : 2.1.2
- Output of `pip list|grep pytest`:
```
pytest 7.1.1
pytest-forked 1.4.0
pytest-timeout 2.1.0
pytest-xdist 2.5.0
```
### Possible solutions
- Checked this [StackOverflow question](https://stackoverflow.com/questions/71189819/python-docker-importerror-cannot-import-name-json-from-itsdangerous). Seems like the issue is with my `pytest` version or `itsdangerous` version. | 04-01-2022 14:27:35 | 04-01-2022 14:27:35 | Uninstalling `flask` and `itsdangerous` and re-installing the `".[dev]"` environment fixed the issue.
PR which fixes this issue:
https://github.com/huggingface/transformers/pull/16387
|
transformers | 16,549 | closed | Enable reproducibility | # 🚀 Feature request
To enable consistent benchmarking using the `transformers` library, deterministic behaviour needs to be enforced. This could be a simple option in `TrainingArguments`, e.g., `enforce_reproducibility=True`. Currently seeds are set, but randomness still occurs as part of CUDA and the dataloaders.
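As a sketch of the proposed usage (the `enforce_reproducibility` flag is hypothetical and does not exist yet; it is the API this feature request suggests):
```python
from transformers import TrainingArguments

# `enforce_reproducibility` is the proposed, not-yet-existing flag
args = TrainingArguments(output_dir="out", seed=4242, enforce_reproducibility=True)
```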
## Motivation
I am the maintainer of a [Scandinavian benchmarking library](https://github.com/saattrupdan/ScandEval) for language models, which uses `transformers` under the hood. The benchmarking results are always slightly different, however, and this could be resolved in PyTorch as [described here](https://pytorch.org/docs/stable/notes/randomness.html). See below for the concrete changes.
## Your contribution
To ensure reproducibility, the `set_seed` function in [trainer_utils.py](https://github.com/huggingface/transformers/blob/59a9c83e40f879f5060eff99968dc688a56d0d0d/src/transformers/trainer_utils.py#L49-L63) needs to include the following:
```python
import torch
import os
# Enable PyTorch deterministic mode. This potentially requires either the environment
# variable 'CUDA_LAUNCH_BLOCKING' or 'CUBLAS_WORKSPACE_CONFIG' to be set,
# depending on the CUDA version, so we set them both here
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':16:8'
torch.use_deterministic_algorithms(True)
# Enable CUDNN deterministic mode
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
Furthermore, to enable determinism in PyTorch `DataLoader`s, the arguments `generator` and `worker_init_fn` need to be set. The `generator` is already set in the `transformers` library [here](https://github.com/huggingface/transformers/blob/9947dd077c1dd3a4e220b1846ed38f475641e21d/src/transformers/trainer.py#L589-L597), so we only need to set the `worker_init_fn`, as follows:
```python
def seed_worker(_):
worker_seed = torch.initial_seed() % 2**32
set_seed(worker_seed)
dataloader = DataLoader(..., worker_init_fn=seed_worker)
```
| 04-01-2022 14:16:53 | 04-01-2022 14:16:53 | We could definitely wrap the first in a function that can easily be called in all our example scripts. I wouldn't add the content to `set_seed`, but we can add a flag that would call this extra function in `set_seed`.
The second part should be useful all the time, so it is a welcome change. We should just directly set the seeds for numpy and the python random module in it however, as the torch seeds are already set by PyTorch.
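A minimal sketch of what this could look like (the helper name and its exact contents are placeholders, not a final API):
```python
import os
import random

import numpy as np
import torch

from transformers import set_seed


def enable_full_determinism(seed: int):
    # hypothetical helper bundling the extra switches from the snippet above
    set_seed(seed)  # existing transformers helper: seeds python, numpy and torch
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def seed_worker(_):
    # seed python and numpy inside each DataLoader worker; torch seeds each worker itself
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)
```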
Would like to tackle all of this in a PR?<|||||>@saattrupdan @sgugger If you didn't start to work on this issue, I would like to tackle all of this in a PR.<|||||>@hasansalimkanmaz I have not started working on it, no, so if you've got the time to look at it then go ahead 😊
Unless @sgugger have already started on it?<|||||>I have not :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This was implemented in https://github.com/huggingface/transformers/pull/16907. |
transformers | 16,548 | closed | PretrainedModel: made `_load_pretrained_model_low_mem` static + bug fix | # What does this PR do?
This PR makes `PretrainedModel._load_pretrained_model_low_mem` a `staticmethod`, since it does not return an instance of `PretrainedModel`. I've also fixed a bug inside the method itself; the main for loop was wrong. | 04-01-2022 14:13:19 | 04-01-2022 14:13:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging this, since `low_cpu_mem_usage` is not working on master right now and is required to test the addition of some new models.
@stas feel free to still leave a comment about the `classmethod` change :) |
transformers | 16,547 | closed | Fix bugs in position ids in Bart | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
**Fix bugs in position ids in Bart**
- the padding_token could be added at the beginning of the sentence, as in GPT-2 and RoBERTa
- `self.offset = self.padding_idx+1`, but adding a new parameter will make it clearer
**Reference:**
position Embedding in roberta and other models.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-01-2022 13:38:27 | 04-01-2022 13:38:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16547). All of your documentation changes will be reflected on that endpoint.<|||||>The main reason for the test failure is that the test program does not take into account the case of padding at the beginning of a sentence and that's what my pr deal with!<|||||>Hey @Oran-Ac,
Could you clarify a bit what you mean by:
```
Fix bugs in position ids in Bart
the padding_token could be added at the begging of the sentence like gpt2 and roberta
self.offset = self.padding_idx+1, but add a new parameters will make it more clear
```
?
What is the use case that currently does not work? Can you give an example of this? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,546 | closed | Remove MBart subclass of XLMRoberta in tokenzier docs | # What does this PR do?
This PR removes the line mentioning MBartTokenizer is a subclass of XLMRobertaTokenizer in the fast tokenizer.
@sgugger @patil-suraj | 04-01-2022 13:33:23 | 04-01-2022 13:33:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,545 | closed | Fix bugs in position ids in Bart | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
**Fix bugs in position ids in Bart**
- the padding_token could be added at the beginning of the sentence, as in GPT-2 and RoBERTa
- `off_set == padding_idx`, but adding a new parameter will make it clearer
**Reference:**
position Embedding in roberta and other models.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-01-2022 13:31:50 | 04-01-2022 13:31:50 | |
transformers | 16,544 | closed | Add VisualBert type hints | This PR adds type hints for the VisualBert model in PyTorch.
@Rocketknight1 | 04-01-2022 13:01:28 | 04-01-2022 13:01:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16544). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,543 | closed | Add TF implementation of `XGLMModel` | # What does this PR do?
Fixes #16422
This PR adds TF implementation of `XGLMModel`.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@gante @patil-suraj | 04-01-2022 11:43:49 | 04-01-2022 11:43:49 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16543). All of your documentation changes will be reflected on that endpoint.<|||||>Hello @gante, I would like to kindly ask for help here, as I'm not fully sure, where the problem might be :/ Currently, it looks like all the weights are loaded properly by a TF model, but it seems there still must be a glitch :| (I'm still learning TF so I might have made a silly mistake O:] ) Thanks a lot in advance!
<|||||>> Hello @gante, I would like to kindly ask for help here, as I'm not fully sure, where the problem might be :/ Currently, it looks like all the weights are loaded properly by a TF model, but it seems there still must be a glitch :| (I'm still learning TF so I might have made a silly mistake O:] ) Thanks a lot in advance!
No worries :) Are you trying to run the equivalence test as it is, or are you running another script?<|||||>> > Hello @gante, I would like to kindly ask for help here, as I'm not fully sure, where the problem might be :/ Currently, it looks like all the weights are loaded properly by a TF model, but it seems there still must be a glitch :| (I'm still learning TF so I might have made a silly mistake O:] ) Thanks a lot in advance!
>
> No worries :) Are you trying to run the equivalence test as it is, or are you running another script?
I've tried to run the equivalence tests as they are.<|||||>@stancld a little forewarning: I'm having issues in need of debugging in one of my own PRs as well and I'm off tomorrow, so I don't expect to have an answer before next Tuesday :) <|||||>Hey @stancld 👋 Apologies for my delay :)
I've added two sets of small changes, to hopefully help with test issues:
1. removed `TFCoreModelTesterMixin` from the tests -- it is a slow test suite reserved for key models like `bert` or `gpt2`;
2. added an explicit cast inside a layer -- on my end, I got an exception related to a variable being in `half-precision`. On CI, I see that there is no error, only a failed numerical test.
I see that it didn't solve the issue here, even though it is passing on my local env. Digging deeper 🔍 <|||||><img width="1512" alt="Screenshot 2022-04-24 at 15 14 15" src="https://user-images.githubusercontent.com/12240844/164980731-cc0fb174-77f7-4732-b359-d0ac60994c0f.png">
(passing in my env)<|||||>I've spun up a machine with a GPU, it also passes if executed on a GPU (a K80). @stancld, does the test fail in your local environment, after the changes?
- If it fails and you have a GPU, check if passes on CPU (e.g. `CUDA_VISIBLE_DEVICES="" RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 py.test -vv tests/xglm/test_modeling_xglm.py::XGLMModelTest::test_pt_tf_model_equivalence`)
- If it passes, it might be a CI env problem 🤔 <|||||>> * RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 py.test -vv tests/xglm/test_modeling_xglm.py::XGLMModelTest::test_pt_tf_model_equivalence
Hi @gante, thanks a lot for the update. I can confirm equivalence tests pass fine on CPU.<|||||>It seems equivalence tests are passing after rebasing to the 'main' branch. There are some remaining three failing tests, however, the outcome is a bit random as they don't always fail 🤔
```
FAILED tests/xglm/test_modeling_tf_xglm.py::TFXGLMModelTest::test_xglm_model_att_mask_past - tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b''
FAILED tests/xglm/test_modeling_tf_xglm.py::TFXGLMModelTest::test_xglm_model_past - tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b''
FAILED tests/xglm/test_modeling_tf_xglm.py::TFXGLMModelTest::test_xglm_model_past_large_inputs - tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized dat...
```<|||||>> I can confirm equivalence tests pass fine on CPU.
@stancld have you tried them on GPU? If so, what was the outcome, and what GPU model do you have? I'm trying to figure out why the tests fail in some conditions :D<|||||>> > I can confirm equivalence tests pass fine on CPU.
>
> @stancld have you tried them on GPU? If so, what was the outcome, and what GPU model do you have? I'm trying to figure out why the tests fail in some conditions :D
Haven't tried yet. I'll run tests on GPU later today and let you know what it looks like. :]<|||||>@stancld after your most recent push, the equivalence tests passed on CI 🤔 Well, one less problem to consider. <|||||>Hi @gante, I cleaned a test file a little bit to match the other test files. I'm gonna add some integration tests later tonight and then I hope the PR will be ready for the final review :]<|||||>Hi @gante, actually not sure which/if any integration tests should be added. Can I ask you, please, to have a look if there's anything missing now?
Also, I've uploaded the TF checkpoint to the HF hub and it should work just fine. You can find a checkpoint [here](https://huggingface.co/Stancld/xglm-564M).<|||||>Ah -- you should be able to get rid of the CI errors by rebasing with `main` 👍 <|||||>Hello @gante, thanks a lot for your feedback and tips! :] I added an integration test on left padding and found out it actually doesn't work properly and generates some gibberish. Try to dig deep to solve the problem :]<|||||>> Hey @stancld 👋 First of all, apologies for my delay in the review 🙈 I think the model is almost ready to go! I've added a few minor comments, including an action point for me (to add the TF weights in the facebook repo)
>
> Regarding integration tests: a potentially missing test is whether the model can make proper use of left padding (for instance, TFGPT2 was not doing it until recently). You can see an example test [here](https://github.com/huggingface/transformers/blob/main/tests/models/gpt2/test_modeling_tf_gpt2.py#L539)
>
> As always, thank you for the excellent contribution <3
Hello @gante, is there by chance any MR fixing left-padding for GPT2? O:] <|||||>> Hello @gante, is there by chance any MR fixing left-padding for GPT2? O:]
Check `prepare_inputs_for_generation` in [this](https://github.com/huggingface/transformers/pull/17426) PR (or `main`'s TF GPT2), I suspect TF XGLM is missing `position_ids`. `position_ids` holds the position of the token after accounting for left-padding, allowing the model to get the correct position embedding.<|||||>Weights PR open here: https://huggingface.co/facebook/xglm-564M/discussions/1
(one detail needs double-checking, other than that it should be good to go)<|||||>Hello @gante, sorry for a bit of delay on this PR. We've been just finishing #16792. I'll try to fix all the issues here afterwards :]<|||||>Weights merged -- for the 564M model, the others I will merge after this PR gets merged
(No worries about delays, take your time 👍 )<|||||>Hello @gante,
I have updated tests to use TF checkpoints and it looks like there must be a glitch there as some new failing slow tests occurred (`test_batch_generation`, `test_xglm_sample`).
Besides, there's a persisting problem with `test_lm_generate_xglm_left_padding`. And also some peculiar issue with `test_lm_generate_xglm` which raises `ValueError: not enough values to unpack (expected 2, got 1)` inside greedy search.
See all errors below.
```
FAILED tests/models/xglm/test_modeling_tf_xglm.py::TFXGLMModelTest::test_batch_generation - AssertionError: Lists differ: ['Hel[17 chars]ittle bit of a shy one, but he is very friendl[65 chars]ngs'] != ['Hel[17 chars]ittleተኛውowany Gami...
FAILED tests/models/xglm/test_modeling_tf_xglm.py::TFXGLMModelLanguageGenerationTest::test_lm_generate_xglm - ValueError: not enough values to unpack (expected 2, got 1)
FAILED tests/models/xglm/test_modeling_tf_xglm.py::TFXGLMModelLanguageGenerationTest::test_lm_generate_xglm_left_padding - AssertionError: 'Toda[19 chars]y andიას Nomເຮົາ trẻ napríklad სამართლ Bouleva[24 chars]іnye' != 'Toda[19 chars]y ...
FAILED tests/models/xglm/test_modeling_tf_xglm.py::TFXGLMModelLanguageGenerationTest::test_xglm_sample - AssertionError: 'Toda[14 chars]y and벗5huhuxhuman яркиیارە daerah opisujelanıb[27 chars]ågan' != 'Toda[14 chars]y and the sun is s...
=========================================================================================== 4 failed, 31 passed, 1 skipped, 81 warnings in 58.29s ============================================================================================
```
Try to investigate these problems more deeply this week :]<|||||>Ola @gante, I'm pleased to announce that all failing slow tests have been fixed and work fine now (including left-padding generation).
However, there's a problem with a `tf_model.h5` in the `facebook/xglm-574M`, as the model using those weights produces gibberish output. I can confirm, nonetheless, that everything works perfectly with my TF weights available in [`Stancld/xglm-564M](https://huggingface.co/Stancld/xglm-564M).
cc: @patil-suraj @Rocketknight1 <|||||>> However, there's a problem with a tf_model.h5 in the facebook/xglm-574M, as the model using those weights produces gibberish output. I can confirm, nonetheless, that everything works perfectly with my TF weights available in [`Stancld/xglm-564M](https://huggingface.co/Stancld/xglm-564M).
😬 I'll check what went wrong with my conversion. You converted using the `from_pretrained(from_pt=True)`, then stored the weights with `.save_weights()`, correct?<|||||>> > However, there's a problem with a tf_model.h5 in the facebook/xglm-574M, as the model using those weights produces gibberish output. I can confirm, nonetheless, that everything works perfectly with my TF weights available in [`Stancld/xglm-564M](https://huggingface.co/Stancld/xglm-564M).
>
> 😬 I'll check what went wrong with my conversion. You converted using the `from_pretrained(from_pt=True)`, then stored the weights with `.save_weights()`, correct?
I used `.save_pretrained(...)` instead of `.save_weights().`<|||||>@stancld the conversion I pushed was built with the first version of the `pt-to-tf` CLI, which was not converting the model head. I've opened a PR with the updated weights, which also contain the model head 🙌
There is still one minor problem, but it's outside the scope of this PR -- the PT weights are stored in `float16`, and our TF tools do not have support for half-precision (yet!).
weights PR 👉 https://huggingface.co/facebook/xglm-564M/discussions/2<|||||>They were merged -- can you check whether the weights work? :)<|||||>> They were merged -- can you check whether the weights work? :)
Yes, can confirm all tests (including slow ones) work!<|||||>Awesome, so all that's left is a full review.
@Rocketknight1 @patil-suraj <|||||>@gante Finally got there, rebased to the current main and add the attribute :]<|||||>Needs to be rebased with `main` again, the problems in CI were already fixed there :)
Going to ask @patil-suraj for a review, then we can merge it (and convert the remaining models!)<|||||>@gante @patrickvonplaten One failing test fixed :]<|||||>Splendid! Thank you for all the work, @stancld 🧡
I'm going to merge the PR and push the weights for the other XGLM architectures 🔥 |
transformers | 16,542 | closed | Issue with aggregation_strategy="max" in NER pipeline | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Python version: Python 3.7.13
- PyTorch version (GPU?): 1.10.0+cu111
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
@LysandreJik
@Narsil
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Good day,
Up until yesterday my NER model was working fine using the pipeline with `aggregation_strategy="max"`. Now I get the error message `TypeError: Can't convert [' In'] to PyString`, where "In" is the first word of my sentence.
I noticed that using “simple” it works fine, but for my system “max” was the best working one.
It seems to me that now the tokenizer is putting words in a list whereas before it didn’t.
Did something change in the use of the tokenizer or the aggregation_strategy?
I can’t figure out why this is happening.
This is my model and code:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer_mbert_mul = AutoTokenizer.from_pretrained("StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES")
model_mbert_mul = AutoModelForTokenClassification.from_pretrained("StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES")
ner = pipeline("ner", aggregation_strategy="max", model=model_mbert_mul, tokenizer=tokenizer_mbert_mul)
```
## To reproduce
This is a sentence from my data:
“In biology, a gene (from genos (Greek) meaning generation or birth or gender) is a basic unit of heredity and a sequence of nucleotides in DNA that encodes the synthesis of a gene product”
This is the error message:
```python
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in convert_tokens_to_string(self, tokens)
    533
    534 def convert_tokens_to_string(self, tokens: List[str]) -> str:
--> 535     return self.backend_tokenizer.decoder.decode(tokens)
    536
    537 def _decode(
TypeError: Can't convert [' In'] to PyString
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The pipeline used to produce NER results with the chosen aggregation_strategy; now it does not. After downgrading to transformers 4.16, it works just fine.
| 04-01-2022 11:24:48 | 04-01-2022 11:24:48 | Might be worth trying to uninstall and reinstall `tokenizers`. Tokenizers 0.12 was yanked and a new version is coming. See #16520, #16525, #16537, #16540 for more. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,541 | closed | Is gradient checkpointing really needed when fine-tuning LED on 16384 tokens? | Hi,
The [documentation of LED](https://huggingface.co/docs/transformers/model_doc/led) states:
> To fine-tune LED on all 16384, it is necessary to enable gradient checkpointing by executing model.gradient_checkpointing_enable().
Moreover, @patrickvonplaten in [this notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) initializes the model using `AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384", gradient_checkpointing=True, use_cache=False)`.
I have a 24GB GPU, and the base model fits perfectly without the need to activate gradient checkpointing, which instead slows down training by a large margin. I am using a batch size of 1 with 16 steps of gradient accumulation.
So my question is: is my approach (just gradient accumulation) enough for fine-tuning LED, or is it _really_ necessary to use gradient checkpointing (for reasons I may have missed)? If so, what are the implications of using gradient checkpointing in this case, besides saving memory?
I was also wondering what exactly the `use_cache=False` parameter in the `from_pretrained` call does.
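For reference, this is roughly my setup (a simplified sketch, not my full training script; the output directory name is arbitrary):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Seq2SeqTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384")
# model.gradient_checkpointing_enable()  # not enabled: the base model fits in 24GB without it

training_args = Seq2SeqTrainingArguments(
    output_dir="led-finetuned",
    per_device_train_batch_size=1,   # batch size 1 ...
    gradient_accumulation_steps=16,  # ... with 16 accumulation steps, as described above
)
```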
Thank you very much | 04-01-2022 11:17:46 | 04-01-2022 11:17:46 | Hey @caesar-one,
Sorry, the documentation of LED might not have been super clear in this case. It should probably be rewritten to:
```
Gradient check-pointing is only needed if training leads to out-of-memory (OOM) errors
```
Would you like to improve the docs here maybe? :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sure :) |
transformers | 16,540 | closed | add a test checking the format of `convert_tokens_to_string`'s output | # What does this PR do?
For context, the tokenizers 0.12.0 release was breaking in the sense that it changed the output format of the `decode` method of the `backend_tokenizer` 's `decoder` which was used in `convert_tokens_to_string` (cf #16537, #16520, #16525). The 0.12.0 version has since been yanked but I think it would be beneficial to add a common test to the transformers tokenizers to verify that the `convert_tokens_to_string` method output format is respected.
This PR therefore proposes adding such tests with some overridden tests for tokenizers with constraints on possible tokens or with a slightly different output format.
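For illustration, the common test could look roughly like this (a sketch only; the actual test name and assertions in `test_tokenization_common.py` may differ):
```python
def test_convert_tokens_to_string_format(self):
    # the output of `convert_tokens_to_string` should be a plain string
    tokenizers = self.get_tokenizers(fast=True, do_lower_case=True)
    for tokenizer in tokenizers:
        with self.subTest(f"{tokenizer.__class__.__name__}"):
            tokens = ["this", "is", "a", "test"]
            string = tokenizer.convert_tokens_to_string(tokens)
            self.assertIsInstance(string, str)
```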
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Would love to have your opinion @LysandreJik , @sgugger and/or @Narsil
| 04-01-2022 10:51:20 | 04-01-2022 10:51:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review @Narsil :slightly_smiling_face:
> Why is tokenization_common test not enough ? Don't the other tests inherit the same base/meta class ?
They also inherit from `TokenizerTesterMixin`, but unfortunately there are 1) tokenizers that have constraints on acceptable tokens - this is the case of `ByT5` and `perceiver` - and 2) tokenizers whose `convert_ids_to_string` method returns not a string but a dictionary - this is the case of `wav2vec2` and `wav2vec2_phoneme`.<|||||>Thanks for the explanation, definitely not obvious !<|||||>That a good point: I've added a comment for each overridden test in my last commit to reflect that last explanation :slightly_smiling_face: |
transformers | 16,539 | closed | Pin tokenizers version <0.13 | Adds an upper bound to tokenizers versions so that `tokenizers` development may continue unhindered. | 04-01-2022 10:30:46 | 04-01-2022 10:30:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks ! |
transformers | 16,538 | closed | ONNX causal-lm-with-past conversion: attention_mask dtype changed | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
Not sure who I should tag for ONNX-related issues (@mfuntowicz?). Tagging @patil-suraj as the contact for GPT models.
## Information
Models I am using: GPT2, GPTNeo
The problem arises when using the official conversion script (see below).
The task I am working on is pre-trained model conversion to ONNX.
## To reproduce
Steps to reproduce the behavior:
1. Convert GPT model to ONNX for `causal-lm-with-past` using
```bash
python -m transformers.onnx --model=gpt2 --feature=causal-lm-with-past --atol=5e-4 ./onnx/
```
2. Load the model and check expected input types
```py
import onnx
model = onnx.load("onnx/model.onnx")
inp = model.graph.input
print(f"{inp[0].name}: element type {inp[0].type.tensor_type.elem_type}")
print(f"{inp[-1].name}: element type {inp[-1].type.tensor_type.elem_type}")
```
```
input_ids: element type 7
attention_mask: element type 1
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The entire process can be reproduced using this colab: https://colab.research.google.com/gist/arampacha/2831d9f6812d2eb4d6d11dc13f76ca49/hf-onnx-attn-mask.ipynb
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The `attention_mask` input should be of integer type (element type 7), not float (element type 1). Because of this, the `attention_mask` returned by the tokenizer currently has to be converted to `float` before inference.
Looks like the unexpected conversion happens because `torch.ones` returns `torch.float32` by default. See this for example https://github.com/huggingface/transformers/blob/9de70f213eb234522095cc9af7b2fac53afc2d87/src/transformers/models/gpt2/configuration_gpt2.py#L266
The problem can be fixed by propagating `attention_mask` dtype:
```py
mask_dtype = ordered_inputs["attention_mask"].dtype
ordered_inputs["attention_mask"] = torch.cat(
[ordered_inputs["attention_mask"], torch.ones(batch, past_key_values_length, dtype=mask_dtype)], dim=1
)
```
I can submit a PR with the fix, but more model classes may be impacted. | 04-01-2022 10:01:35 | 04-01-2022 10:01:35 | Thanks for this excellent bug report and reproducible Colab @arampacha! I agree with your analysis and think your solution to propagate the `mask_dtype` to `torch.ones()` should fix the issue.
Before opening a PR, I'd like to get @michaelbenayoun's input as he may have had a specific reason to use the default float values when implementing this <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @arampacha if you're still happy to open a PR with this change, I think that's the best way to proceed on this issue :)<|||||>Ok, cool. I'll go on with PR |
transformers | 16,537 | closed | Making `transformers` work on `0.12`. | # What does this PR do?
tokenizers `0.12` changed the way `decoder.decode(` works.
Instead of returning directly a string, it returns a list of strings (the "decoded" parts), which enables the decoders to be chained (and hence customized more easily).
The fix works by simply joining those parts for versions >= 0.12.
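In code, the idea is roughly the following (a simplified sketch of the compatibility shim, not the exact diff; the real patch gates on the installed `tokenizers` version rather than on the return type):
```python
def convert_tokens_to_string(self, tokens):
    decoded = self.backend_tokenizer.decoder.decode(tokens)
    # tokenizers >= 0.12 returns the decoded pieces as a list of strings
    if isinstance(decoded, list):
        decoded = "".join(decoded)
    return decoded
```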
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #https://github.com/huggingface/transformers/issues/16520
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 04-01-2022 07:41:20 | 04-01-2022 07:41:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you very much for the proposed fix.
Before reviewing this PR, I would like to see if it is possible to keep the new feature in `tokenizers==0.12.0` that allows chaining decoders, but to add the conversion to string format at the end. <|||||>Summarizing an oral discussion around our options:
TL;DR
Ultimately, the balance between `3/` and `1/` should be the biggest factor in the decision between:
**reverting the change**, preventing us from using the capability which is sometimes needed.
**Using this PR's change**, making a forward incompatibility: `transformers<=4.17` incompatible with `tokenizers>=0.12`.
1/ The change in `tokenizers` is a good one. On several occasions (most recently CLIP: https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/tokenization_clip_fast.py#L111) the inability to chain decoders had to be worked around, which was not a great dev UX in `transformers`. It was also a limitation for the BigScience tokenizer (not the latest one, but the one before) and the reason for the change.
2/ The `"".join(...decoders.decode(tokens))` is a bit clunky and not super self evident.
This is how the decoders have to operate now, so using them in isolation should work that way in order to make the composition understandable. There are already discussions around `convert_tokens_to_string` to make it private later, since it causes some issues. Within `tokenizers` itself, there's no way to access `tokens` directly anyway, so users shouldn't have to `.join` in the first place.
A potential less clunky way would be to add `Tokenizer.decode_tokens(tokens)` within `tokenizers` to prevent the join in `transformers` which is indeed clunky. The only issue is that `convert_tokens_to_string` is already causing issues (mostly around the lines like why is `decode` not showing what I think it should) with discussions already going on about making it private at least. Enabling such a function in `tokenizers` might open the same discussions over there. Definitely not a showstopper, but something to think about.
The promoted way `Tokenizers.decode(ids) -> str` remains unchanged for `tokenizers` and so far raised less questions.
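To make the contrast concrete, a small illustration (the object path through `backend_tokenizer` is an assumption about a fast tokenizer setup, not an official recommendation):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer("Hello world").input_ids
tokens = tokenizer.convert_ids_to_tokens(ids)

text_from_ids = tokenizer.decode(ids)  # promoted path, still returns a str
parts = tokenizer.backend_tokenizer.decoder.decode(tokens)  # list of parts on tokenizers 0.12.0
text_from_tokens = "".join(parts)  # joining also works when older versions return a str
```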
3/ Forward compatibility.
The main caveat to this proposed change is that earlier versions of `transformers` will contain the bug with the new versions of `tokenizers`. Reverting is the only reasonable solution to fix that (but it also means losing the composition options we need for the decoders).
4/ Use of `convert_tokens_to_string` in `TokenClassification`.
It's a legacy thing, kept unchanged so as not to break BC, but it causes issues on its own: https://github.com/huggingface/transformers/issues/15785#issuecomment-1049040678
Using `offsets` instead of `decode` would help in that situation (we can do it in a non-breaking manner by adding a new key and keeping the old one).
5/ Tokenizer tests
The `transformers` tokenization tests totally skip that function, which led to not seeing that it was broken. We're going to update the tests to include at least a type check for that function.
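A minimal check of that kind could look like this (the test name and `tokenizer` fixture are illustrative, not the actual test-suite code):

```python
def test_convert_tokens_to_string_returns_str(tokenizer):
    tokens = tokenizer.tokenize("Hello world")
    assert isinstance(tokenizer.convert_tokens_to_string(tokens), str)
```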
<|||||>Would it be possible to consider a deprecation cycle for the tokenizer change, for example with an opt-in flag for the new behavior? Doing this would both keep all previous versions working with 0.12.0, while providing support for the new behavior.
This would allow us to prepare for the breaking change and have at least a few versions that support this before dropping support of the current behavior.<|||||>Ok `3/` forward compatibility is too important to break.
Will revert in `0.12.1` and find cleaner solution for `0.12.2` without breaking things in that manner. |
transformers | 16,536 | closed | call on_train_end when optuna trial is pruned | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16535
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-01-2022 07:39:25 | 04-01-2022 07:39:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,535 | closed | Tensorboard logger logs to same directory when trial is pruned using hp_search and optuna | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.3
- Platform: Linux
- Python version: 3.8.8
### Who can help
@sgugger
## Information
I am running a hyperparameter search on various models using optuna on a modified GLUE task. Whenever optuna prunes a trial, the following trial is logged into the same logging directory and therefore doesn't show up in Tensorboard. As far as I can tell, the immediate easy fix would be to call the `on_train_end` callback when the trial is to be pruned. This would reset the tensorboard logger and has worked for me in testing. I'm not sure if it has any unwanted side effects, though. I've opened a Pull Request fixing the issue.
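A rough sketch of that fix (the attribute names mirror the `Trainer`, but this is not the exact patch):

```python
import optuna


def report_to_hp_search(trainer, trial, step, objective):
    trial.report(objective, step)
    if trial.should_prune():
        # Let callbacks (e.g. the TensorBoard integration) clean up their writers
        # so the next trial logs to a fresh run directory.
        trainer.callback_handler.on_train_end(trainer.args, trainer.state, trainer.control)
        raise optuna.TrialPruned()
```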
| 04-01-2022 07:36:48 | 04-01-2022 07:36:48 | |
transformers | 16,534 | closed | update vilt model: remove index param in torch.meshgrid | # What does this PR do?
Removes the `indexing` argument from `torch.meshgrid` in the ViLT model, which on older PyTorch versions leads to:
```
Exception has occurred: TypeError
meshgrid() got an unexpected keyword argument 'indexing'
```
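For context, a backward-compatible guard is another option (a sketch only; the tensor names are placeholders, not the actual ViLT variables):

```python
import torch
from packaging import version

pos_h = torch.arange(12)
pos_w = torch.arange(12)

if version.parse(torch.__version__.split("+")[0]) >= version.parse("1.10"):
    grid_h, grid_w = torch.meshgrid(pos_h, pos_w, indexing="ij")
else:
    # older releases do not accept the keyword; their default already behaves like "ij"
    grid_h, grid_w = torch.meshgrid(pos_h, pos_w)
```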
| 04-01-2022 07:36:25 | 04-01-2022 07:36:25 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16534). All of your documentation changes will be reflected on that endpoint.<|||||>Hi,
Thanks for your PR. However, reading the [documentation of torch.meshgrid](https://pytorch.org/docs/stable/generated/torch.meshgrid.html), the PyTorch team is planning to change the default behaviour in the future to indexing="xy" to match the behaviour of Numpy. So I don't think we can merge this PR.
Instead, we could add a warning, telling users to have PyTorch 1.10 installed or higher, similar to #16756.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,533 | closed | GPTJ6-B python process seem to be stuck forever at ```from_pretrained``` method | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.14.1
- Platform: Linux-5.4.0-56-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, Nvidia A100, 3 * 40GB
- Using distributed or parallel set-up in script?: Yes
### Who can help
- GPT-Neo, GPT-J, CLIP: @patil-suraj
Models:
- GPT-J with DeepSpeed
- - Deepspeed: @stas00
Library:
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
Model I am using (Bert, XLNet ...): gpt-j-6B
The problem arises when using:
[* ] my own modified scripts: (give details below)
[gpt-neo-fine-tuning-example/gpt_j_deepspeed.py at main · dredwardhyde/gpt-neo-fine-tuning-example · GitHub](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_j_deepspeed.py)
The tasks I am working on is:
[ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Allocated 3 40GB A100 GPU, 48 CPU Cores, 128 GB RAM, running on Kubeflow notebook image, pip freeze environment is attached
2. Line at the dump is getting produced
```
model = AutoModelForCausalLM.from_pretrained("./data-gptj-vol/models/",local_files_only=True,
torch_dtype=torch.float32).cuda()
```
3. ds_config_gpt_json
```
{
"train_batch_size":21,
"fp16":{
"enabled":true,
"min_loss_scale":1,
"opt_level":"O3"
},
"zero_optimization":{
"stage":3,
"offload_param":{
"device":"cpu"
},
"offload_optimizer":{
"device":"cpu"
},
"allgather_partitions":true,
"allgather_bucket_size":5e8,
"contiguous_gradients":true
},
"optimizer":{
"type":"AdamW",
"params":{
"lr":5e-05,
"betas":[
0.9,
0.999
],
"eps":1e-08
}
},
"scheduler":{
"type":"WarmupLR",
"params":{
"warmup_min_lr":0,
"warmup_max_lr":5e-05,
"warmup_num_steps":100
}
}
}
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
import faulthandler
faulthandler.dump_traceback_later(20, repeat=True)
```
**gives the following trace**; it seems it is stuck loading the model:
```
Thread 0x00007f2dbb2c4740 (most recent call first):
File "/opt/conda/lib/python3.8/site-packages/torch/nn/init.py", line 395 in kaiming_uniform_
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 96 in reset_parameters
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 90 in __init__
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 254 in wrapper
File "/opt/conda/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 97 in __init__
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 254 in wrapper
File "/opt/conda/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 266 in __init__
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 254 in wrapper
File "/opt/conda/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 450 in <listcomp>
File "/opt/conda/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 450 in __init__
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 254 in wrapper
File "/opt/conda/lib/python3.8/site-packages/transformers/models/gptj/modeling_gptj.py", line 688 in __init__
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 254 in wrapper
File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1411 in from_pretrained
File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 436 in from_pretrained
File "gptj_train.py", line 19 in <module>
```
## Expected behavior
it should run :). However, the processes seem to be stuck for minutes at the ```from_pretrained``` method, as evident from the fault traces.
Is there anything obvious I am missing? Please let us know if there are any clues:
It does not seem to proceed even after 10-15 minutes.
CPU utilization goes up (as seen from `top`) and GPU utilization does not seem to go up at all while the ```from_pretrained``` method is executing.
| 04-01-2022 05:58:34 | 04-01-2022 05:58:34 | Interesting. I have never seen this one before.
1. I don't suppose it fails when you try it alone as the hanging isn't inside deepspeed but pytorch according to your trace:
```
python -c 'from transformers import AutoModelForCausalLM; import torch; AutoModelForCausalLM.from_pretrained("./data-gptj-vol/models/", local_files_only=True, torch_dtype=torch.float32)'
```
2. Why are you putting the model on `cuda()` btw? Deepspeed takes care of that for you automatically and in the correct way - we don't want the whole model on each gpu, which is what your current code does. Could you please try to remove `.cuda()` and try again? On the other hand the `.cuda` call is after `from_pretrained` so the issue is before that.
edit: I can see now that the code you linked to wasn't designed for distributed training, but you want 3 gpus. so no `.cuda()` please.<|||||>Not sure where you copied this code from https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_j_deepspeed.py this is definitely not the way to run a distributed application - the code is using a hack to run the code from a notebook or just `python` on a single gpu (not `torch.distributed`).
if you want distributed training remove all those env vars that are a hack to emulate `torch.distributed`:
```
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9994'
os.environ['RANK'] = "0"
os.environ['LOCAL_RANK'] = "0"
os.environ['WORLD_SIZE'] = "1"
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```
and run with a proper command:
```
python -m torch.distributed.run --nproc_per_node=3 gpt_j_deepspeed.py
```
3 gpus, right? adjust above if needed
Full docs are here: https://pytorch.org/docs/stable/elastic/run.html
----
But actually, have you tried the original example first and validated that it worked on your setup before applying your changes to it?
As I can't see your changes it's hard to tell if the issue is in your code or the environment or the original code.
And of course show us how you launch the script.
<|||||>Hi @stas00 ,
Thanks for replying. Removing ```cuda()``` resolves the original issue I reported.
I also found another issue with how DeepSpeed containers are built: I built a container on an Intel machine and deployed it on an AMD host with A100 GPUs, and it was giving issues. Just leaving it here for anyone who may be struggling. It is really a documentation issue, not necessarily a bug: [issue : 1886#](https://github.com/microsoft/DeepSpeed/issues/1886)
Thanks for replying once again.
<|||||>Great to hear that it solved your problem, @kd303
|
transformers | 16,532 | closed | Regression: ONNX export fails on Pytorch ToT (NVIDIA 22.03 pytorch container) | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-5.11.0-37-generic-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyTorch version (GPU?): 1.12.0a0+2c916ef (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@LysandreJik
Models:
- ALBERT, BERT
## Information
Model I am using (Bert, XLNet ...):
ALBERT, BERT
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install ToT NeMo (http://github.com/NVIDIA/NeMo)
2. Build NeMo container with Dockerfile
3. In the container, uncomment 2 HG tests :
```
--- a/tests/collections/nlp/test_huggingface.py
+++ b/tests/collections/nlp/test_huggingface.py
@@ -49,8 +49,7 @@ class TestHuggingFace:
self.omega_conf.language_model.pretrained_model_name = 'bert-base-uncased'
model = nemo_nlp.modules.get_lm_model(cfg=self.omega_conf)
assert isinstance(model, nemo_nlp.modules.BertEncoder)
- # TODO: Fix
- # do_export(model, "bert-base-uncased")
+ do_export(model, "bert-base-uncased")
@pytest.mark.with_downloads()
@pytest.mark.unit
@@ -74,8 +73,7 @@ class TestHuggingFace:
self.omega_conf.language_model.pretrained_model_name = 'albert-base-v1'
model = nemo_nlp.modules.get_lm_model(cfg=self.omega_conf)
assert isinstance(model, nemo_nlp.modules.AlbertEncoder)
- # TODO: fix
- # do_export(model, "albert-base-v1")
+ do_export(model, "albert-base-v1")
```
4. Run `pytest ./tests/collections/nlp/test_huggingface.py --with_downloads`
5. Observe the result:
```
> params_dict = torch._C._jit_pass_onnx_deduplicate_initializers(graph, params_dict,
training == TrainingMode.TRAINING)
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)
.....
================================== short test summary info ===================================
FAILED tests/collections/nlp/test_huggingface.py::TestHuggingFace::test_get_pretrained_bert_model
FAILED tests/collections/nlp/test_huggingface.py::TestHuggingFace::test_get_pretrained_albert_model
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
No error. Same code works fine in previous (22.02) NVIDIA pytorch container.
<!-- A clear and concise description of what you would expect to happen. -->
| 04-01-2022 05:56:42 | 04-01-2022 05:56:42 | I was able to track one of similar new ONNX errors in NeMo code down and fixed it with the diff below.
It appears exporter gets confused about device placement of inline constants (?) as well as class attributes that are scalars, not proper Variables.
In the sample from NeMo code below both literal 0 and self.max_token_duration had to be moved to buffers to be exported properly. I am quite sure something like that has to be fixed in HF code to work around new Pytorch export issues:
```
- self.max_token_duration = max_token_duration
+ self.register_buffer('max_token_duration', torch.tensor(max_token_duration))
+ self.register_buffer('min_token_duration', torch.tensor(0.0))
...
- durs_predicted = torch.clamp(torch.exp(log_durs_predicted) - 1, 0, self.max_token_duration)
+ durs_predicted = torch.clamp(
+ torch.exp(log_durs_predicted) - 1.0, self.min_token_duration, self.max_token_duration
+ )
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Please never mind - it was pytorch exporter issue and is now fixed. |
transformers | 16,531 | closed | Fixed a typo in seq2seq_trainer.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-01-2022 03:33:20 | 04-01-2022 03:33:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,530 | closed | initialize the default rank set on TrainerState | # What does this PR do?
This PR fixes this error I'm observing on one azureml experiment:
```python
[stderr] File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 508, in __init__
[stderr] self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)
[stderr] File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_callback.py", line 343, in on_init_end
[stderr] return self.call_event("on_init_end", args, state, control)
[stderr] File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_callback.py", line 388, in call_event
[stderr] result = getattr(callback, event)(
[stderr] File "/opt/conda/lib/python3.8/site-packages/transformers/integrations.py", line 750, in on_init_end
[stderr] self.azureml_run = Run.get_context()
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/core/run.py", line 382, in get_context
[stderr] return _SubmittedRun._get_instance(experiment, run_id, **kwargs)
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/core/run.py", line 2307, in _get_instance
[stderr] run = _SubmittedRun(experiment, run_id, **kwargs)
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/core/run.py", line 2312, in __init__
[stderr] super(_SubmittedRun, self).__init__(*args, **kwargs)
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/core/run.py", line 173, in __init__
[stderr] super(Run, self).__init__(experiment, run_id, outputs=outputs, **kwargs)
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/_run_impl/run_base.py", line 85, in __init__
[stderr] self._client = RunHistoryFacade(self._experiment, self._run_id, RUN_ORIGIN, run_dto=_run_dto,
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/_run_impl/run_history_facade.py", line 96, in __init__
[stderr] self.run_dto = run_dto if run_dto is not None else self.run.get_run()
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/_restclient/run_client.py", line 78, in get_run
[stderr] return super(RunClient, self).get_run(self._run_id, **kwargs)
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/_restclient/experiment_client.py", line 126, in get_run
[stderr] return self._execute_with_experimentid_arguments(self._client.run.get_by_exp_id,
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/_restclient/experiment_client.py", line 265, in _execute_with_experimentid_arguments
[stderr] return self._execute_with_arguments(func,
[stderr] File "/opt/conda/lib/python3.8/site-packages/azureml/_restclient/clientbase.py", line 591, in _execute_with_arguments
[stderr] raise ServiceException(e)
[stderr]azureml._restclient.exceptions.ServiceException: ServiceException:
[stderr] Code: 401
[stderr] Message: Operation returned an invalid status code 'Unauthorized'
[stderr] Details:
[stderr]
[stderr] Headers: {
[stderr] "Date": "Thu, 31 Mar 2022 20:22:52 GMT",
[stderr] "Content-Length": "0",
[stderr] "Connection": "keep-alive",
[stderr] "WWW-Authenticate": "Bearer authorization_uri=\"https://login.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47\", error=\"invalid_token\", error_description=\"The authentication failed because of missing 'Authorization' header.\"",
[stderr] "Request-Context": "appId=cid-v1:2d2e8e63-272e-4b3c-8598-4ee570a0e70d",
[stderr] "x-ms-response-type": "standard",
[stderr] "Strict-Transport-Security": "max-age=15724800; includeSubDomains; preload",
[stderr] "X-Content-Type-Options": "nosniff",
[stderr] "x-request-time": "0.018"
[stderr] }
[stderr] InnerException: {
[stderr] "additional_properties": {},
[stderr] "error": null,
[stderr] "correlation": null,
[stderr] "environment": null,
[stderr] "location": null,
[stderr] "time": null,
[stderr] "component_name": null
[stderr]}
```
The error appears because `self.callback_handler.on_init_end(self.args, self.state, self.control)` is called before `self.state` has been properly initialized.
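A sketch of the ordering fix (it mirrors `Trainer.__init__`, but it is not the exact diff):

```python
from transformers.trainer_callback import TrainerState


def init_state_before_callbacks(trainer):
    """Give TrainerState its process-rank flags before any callback sees on_init_end."""
    trainer.state = TrainerState(
        is_local_process_zero=trainer.is_local_process_zero(),
        is_world_process_zero=trainer.is_world_process_zero(),
    )
    trainer.control = trainer.callback_handler.on_init_end(trainer.args, trainer.state, trainer.control)
```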
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-31-2022 23:59:17 | 03-31-2022 23:59:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||># Who can review?
Anyone who worked on PR #8062, particularly @sgugger who also implemented callbacks with #7596 |
transformers | 16,529 | closed | Type hints added to OpenAIGPT | @Rocketknight1 types and hints and hints and types | 03-31-2022 22:55:57 | 03-31-2022 22:55:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks perfect, thank you! |
transformers | 16,528 | closed | Update summary of the tasks | This PR updates the Summary of the tasks page with audio classification, ASR, and image classification. | 03-31-2022 22:36:48 | 03-31-2022 22:36:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,527 | closed | Fix t5 shard on TPU Pods | The current script doesn't work properly on a TPU pod because the global batch is not divided correctly per host.
This pull request fixes the issue by dividing the global batch across hosts before it is sharded on each host.
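Illustrative only (function and variable names are placeholders, not the exact patch): each host keeps just its own slice of the global batch before the per-host shard over local devices.

```python
import jax


def host_batch_indices(global_batch_size: int) -> range:
    """Return the index range of the global batch that belongs to this host."""
    per_host_batch_size = global_batch_size // jax.process_count()
    start = jax.process_index() * per_host_batch_size
    return range(start, start + per_host_batch_size)
```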
Fixes # (issue)
https://github.com/huggingface/transformers/issues/16470
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Models:
- t5: @patrickvonplaten, @patil-suraj
| 03-31-2022 19:10:10 | 03-31-2022 19:10:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This looks good to me!
@patil-suraj @borisdayma - could you take a look here?<|||||>Yes this approach works!<|||||>Thinking about it, I think there could be some issues with the last batch, so we probably need to ensure they all have the same number of items and that it is a multiple of the number of local devices.<|||||>@borisdayma
This line already makes sure that all batches are of the same length.
https://github.com/huggingface/transformers/blob/2831826bc60ee11b86179252ffc8401cb03c5904/examples/flax/language-modeling/run_t5_mlm_flax.py#L857 |
transformers | 16,526 | closed | Add utility to find model labels | # What does this PR do?
This PR adds a utility function to find the labels of a model. It will be useful for Keras and the Trainer (integration with the first is left for a follow-up PR). | 03-31-2022 18:04:50 | 03-31-2022 18:04:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @patrickvonplaten for knowledge |
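A rough sketch of what such a utility can look like (this is not the final API, just the idea):

```python
import inspect


def find_labels(model_class):
    """Return the label-like argument names accepted by a model's forward signature."""
    parameters = inspect.signature(model_class.forward).parameters
    return [name for name in parameters if "label" in name or name in ("start_positions", "end_positions")]
```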
transformers | 16,525 | closed | convert_tokens_to_string does not conform to its signature | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: macOS-11.6.4-x86_64-i386-64bit
- Python version: 3.9.10
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@SaulLu
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): `AutoModelForQuestionAnswering`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Question Answering
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Using the official example script (I will omit it, I will just post the result):
```shell
Question: How many pretrained models are available in 🤗 Transformers?
Answer: ['over', ' 32', ' +']
Question: What does 🤗 Transformers provide?
Answer: ['general', ' -', ' purpose', ' architecture', 's']
Question: 🤗 Transformers provides interoperability between which frameworks?
Answer: ['tensor', 'flow', ' 2', '.', ' 0', ' and', ' p', 'yt', 'or', 'ch']
```
Using the model in our context:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
text = "Hello my browser is not working, I need help."
questions = [
"What is the issue?",
"What is the request?",
]
def extract_answer_idxs(start_logits, end_logits):
answer_start = torch.argmax(start_logits)
answer_end = torch.argmax(end_logits) + 1
return answer_start, answer_end
text = [text] * len(questions)
inputs = tokenizer(questions, text, add_special_tokens=True, return_tensors="pt", max_length=512, truncation=True)
input_ids = inputs["input_ids"].tolist()
outputs = model(**inputs)
idxs = map(
lambda x, y: extract_answer_idxs(x, y),
outputs.start_logits,
outputs.end_logits,
)
answers = list(
map(
lambda x, y: tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(x[y[0]:y[1]])),
input_ids,
(idx for idx in idxs),
)
)
print(f"Questions: {questions}")
print(f"Answers: {answers}")
```
Result:
```bash
Questions: ['What is the issue?', 'What is the request?']
Answers: [['my', ' browser', ' is', ' not', ' working'], ['help']]
```
(I also tried it in a loop fashion and I get the same result.)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
```bash
Questions: ['What is the issue?', 'What is the request?']
Answers: ['my browser is not working', 'help']
```
As the [docs](https://huggingface.co/docs/transformers/main/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.convert_tokens_to_string) show, I expect a string and not a list of tokens.
Please notice how whitespaces are somehow introduced in some of the tokens.
Furthermore, some tokens are split e.g. `['tensor', 'flow', ' 2', '.', ' 0', ' and', ' p', 'yt', 'or', 'ch']`.
I expect `convert_tokens_to_string` to return a `str`, as it was previously. | 03-31-2022 17:11:29 | 03-31-2022 17:11:29 | Seems related to this issue https://github.com/huggingface/transformers/issues/16520<|||||>> Seems related to this issue #16520
Yeah, I see, I was probably opening the issue the same time you were - thanks for linking it to yours.<|||||>> > Seems related to this issue #16520
>
> Yeah, I see, I was probably opening the issue the same time you were - thanks for linking it to yours.
No problem. On the surface they both look different but it seems like the root issue relates to how tokens are converted to strings. <|||||>Thank you both for sharing your issues :hugs: !
You are indeed right, your problem was related to the same issue: a change in the format of the output given by the `decode` method of the `decoders` objects of the `tokenizers` library.
We have for the moment yanked the version of tokenizers 0.12.0 and in the process of releasing a new version 0.12.1 which reverts this change. Using a previous version of 0.12.0 or the future new 0.12.1 should solve this issue.
Sorry again for any problems this may have caused you :blush: <|||||>Thank you. I appreciate the rapid response on this!<|||||>Thank you @SaulLu 🤗 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,524 | closed | Request: TokenClassification pipeline batch processing over a sequence of already tokenised tests | I have a fine-tuned `model` which performs token classification, and a `tokenizer` which was built as:
`tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")`
and this works fine in a pipeline when processing a single document/message:
```
nlp = pipeline(
"token-classification",
model=model,
tokenizer=tokenizer,
aggregation_strategy="first", # max, none, simple, average
binary_output=True,
ignore_labels=[],
)
text = ["Hello", "this", "is", "a", "single", "tokenized", "message"]
for token in nlp(text):
print(token)
[{'entity_group': 'LABEL_2', 'score': 0.07955505, 'word': 'Hello', 'start': 0, 'end': 5}]
[{'entity_group': 'LABEL_2', 'score': 0.06315145, 'word': 'this', 'start': 0, 'end': 4}]
[{'entity_group': 'LABEL_2', 'score': 0.08200004, 'word': 'is', 'start': 0, 'end': 2}]
[{'entity_group': 'LABEL_2', 'score': 0.07786057, 'word': 'a', 'start': 0, 'end': 1}]
[{'entity_group': 'LABEL_3', 'score': 0.056751117, 'word': 'single', 'start': 0, 'end': 6}]
[{'entity_group': 'LABEL_3', 'score': 0.10323574, 'word': 'tokenized', 'start': 0, 'end': 9}]
[{'entity_group': 'LABEL_3', 'score': 0.09412522, 'word': 'message', 'start': 0, 'end': 7}]
```
If I now try to pass a sequence of messages:
````
text = [
["Hello", "this", "is", "a", "single", "tokenized", "message"],
["another", "tokenized", "message"],
["short", "message"]
]
````
I was expecting I could do something like this:
```
for msg in nlp(text):
for entity in nlp(msg):
print(entity)
```
but I always end up with a `ValueError`:
`ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.`
Even if I initialise the tokenizer to be passed to the pipeline as this, I always get the same `Value Error`
```
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased", model_max_length=512, is_split_into_words=True, padding=True, truncation=True)
nlp = pipeline(
"token-classification",
model=model,
tokenizer=tokenizer,
aggregation_strategy="first"
binary_output=True,
ignore_labels=[],
)
```
Also if I "pad" the tokenized input like this
```
text = [
["Hello", "this", "is", "a", "single", "tokenized", "message"],
["another", "tokenized", "message", "[PAD]", "[PAD]", "[PAD]", "[PAD]"],
["short", "message", "[PAD]", "[PAD]", "[PAD]", "[PAD]", "[PAD]"]
]
for x in nlp(text, batch_size=3):
print(x)
```
From a quick read of the pipeline code, it seems there's no way to instruct the tokeniser inside the TokenClassificationPipeline that the input needs to be padded/truncated AND is already tokenised.
| 03-31-2022 15:54:22 | 03-31-2022 15:54:22 | Hi @davidsbatista ,
Sorry but you cannot pass pretokenized text to the pipelines. The error message is wrong.
You need to pass direct sentences to the pipeline.
```python
sentences = [
    "Hello this is a single tokenized message",
    "another tokenized message",
    "short message",
]
for tokenized in nlp(sentences):
print(tokenized)
```
Because pipeline manages the tokenizer itself, it will chunk the sentence as the model expects, and there's no way to force other boundaries by "pretokenizing" sentences.
Why do you want to pass pretokenized sentences ? Was something wrong when attempting to send real sentences ?
Cheers !<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,523 | closed | Add Doc Test for BERT | # What does this PR do?
Add doc tests for BERT, a part of issue #16292
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten, @ydshieh
Documentation: @sgugger | 03-31-2022 15:45:18 | 03-31-2022 15:45:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, @vumichien,
Thank you for this PR. I have currently only had a quick look, and overall it is good 😊.
I saw that you use different checkpoints for the PyTorch & TensorFlow BERT models in some cases.
I understand it very well, because some checkpoints only exist in PyTorch version.
After an internal discussion, although we think it is totally fine, we would prefer to use the same checkpoints for the PyTorch & TensorFlow models. Therefore, I will try to convert some PyTorch checkpoints you used in this PR, and upload them to Hugging Face Hub. Once this is done, we can change the checkpoints in this PR, and update the expected values (which should be just about copying the values from the PyTorch side).
I will keep you updated 😊!<|||||>@ydshieh Thank you very much. I am looking for your update.
Besides that, could you please take a look at `modeling_mobilebert.py`? When I run `make fixup`, it asks me to run `make fix-copies`. If I follow, the model checkpoint, expected_output, expected_loss, etc. will be copied from `bert` to `mobilebert`, which leads to unwanted results.
I think it is due to these `# Copied from` comments:
https://github.com/huggingface/transformers/blob/9de70f213eb234522095cc9af7b2fac53afc2d87/src/transformers/models/mobilebert/modeling_mobilebert.py#L1315-L1318
https://github.com/huggingface/transformers/blob/9de70f213eb234522095cc9af7b2fac53afc2d87/src/transformers/models/mobilebert/modeling_mobilebert.py#L1419-L1422
https://github.com/huggingface/transformers/blob/9de70f213eb234522095cc9af7b2fac53afc2d87/src/transformers/models/mobilebert/modeling_mobilebert.py#L1517-L1520
So what should I do to avoid this problem?
<|||||>> # Copied from transformers.models.bert.modeling_bert.BertForQuestionAnswering with Bert->MobileBert all-casing
Hi, this is very tricky, and I need some discussion with the team.
This `# Copied from` mechanism is used to keep track of the origin of some code blocks, and it's much better to keep as many of them as possible. Thank you for pointing out this issue!
<|||||>@vumichien
I uploaded the TensorFlow checkpoint here
For `TFBertForSequenceClassification`:
"ydshieh/bert-base-uncased-yelp-polarity"
For `TFBertForQuestionAnswering`:
"ydshieh/bert-base-cased-squad2"
Whenever you have time, could you change the corresponding places in `modeling_tf_bert.py` to use these checkpoints?
Ping me if you encounter any difficulty.
Thanks!
<|||||>@ydshieh could you please check again your check point `ydshieh/bert-base-cased-squad2` for `TFBertForQuestionAnswering`?Every time I run, the output is different (maybe the weight of head is still random)<|||||>> @ydshieh could you please check again your check point `ydshieh/bert-base-cased-squad2` for `TFBertForQuestionAnswering`?Every time I run, the output is different (maybe the weight of head is still random)
~~OK, I will take a look today.~~
@vumichien
I checked it, and it turns out that I loaded the PyTorch (QA) checkpoint into a TF model for another task type!
Sorry about this, ~~I will upload the corrected TF checkpoint today~~, and keep you updated!
Uploaded the correct TF checkpoint 🙂 and tested it myself. The result now is always the same (and same as the PT result).<|||||>Hi, @vumichien ,
regarding [fix-copies](https://github.com/huggingface/transformers/pull/16523#issuecomment-1085865707) you mentioned in a previous comment: after some discussion, we think the best way is:
- In `Bert` model file:
- at the beginning part of the model file, set something like
- EXPECTED_OUTPUT_FOR_TOKEN_CLASSIFICATION = ...
- EXPECTED_LOSS_FOR_TOKEN_CLASSIFICATION = ...
- In `add_code_sample_docstrings` for`BertForTokenClassification`, use
- expected_output=EXPECTED_OUTPUT_FOR_TOKEN_CLASSIFICATION
- expected_loss=EXPECTED_LOSS
- In `MobileBert` model file:
- at the beginning part of the model file, set something like
- EXPECTED_OUTPUT_FOR_TOKEN_CLASSIFICATION = ... `(set to empty string if you don't want to work on doctest for MobileBert)`
- EXPECTED_LOSS_FOR_TOKEN_CLASSIFICATION = ... `(set to empty string if you don't want to work on doctest for MobileBert)`
- In `add_code_sample_docstrings` for`MobileBertForTokenClassification`, use
- expected_output=EXPECTED_OUTPUT_FOR_TOKEN_CLASSIFICATION
- expected_loss=EXPECTED_LOSS
And if you don't provide the values for `MobileBert`, just don't add it to `documentation_tests.txt`.
This should avoid the issue coming from `copies`. Let us know if you encounter any difficulty, thanks!
<|||||>Thank you very much @ydshieh. I will update the docs test following your instructions <|||||>Hi @ydshieh
- I have followed your instructions [define-expected-output](https://github.com/huggingface/transformers/pull/16523#issuecomment-1087875387) but the problem isn't solved. Please help me check if I did something wrong.
- In this PR, I think it's better if I do both `bert` and `mobilebert` at the same time but it still requires [fix-copies](https://github.com/huggingface/transformers/pull/16523#issuecomment-1085865707).
- I am still waiting for your update on `The loss is always 0.0 (or nan) because the shape of input_ids is [1, 14]`
with the `QuestionAnswering` task 🙂. It leads to the same problem with `MobileBertForQuestionAnswering` as well.
<|||||>@vumichien
Regarding `fix-copies`, I probably should be more thorough in the previous comment.
One reason for the current PR version not working is:
In `BertForSequenceClassification`: you have
```python
checkpoint="textattack/bert-base-uncased-yelp-polarity",
```
however, in `MobileBertForSequenceClassification`, you have
```python
checkpoint="lordtt13/emo-mobilebert",
```
And this difference will be detected by `make fix-copies`.
In general, these situations could be solved using the same method:
- set `_CHECKPOINT_FOR_SEQUENCE_CLASSIFICATION = ...` at the beginning of the model file
- set `checkpoint=_CHECKPOINT_FOR_SEQUENCE_CLASSIFICATION ` in `xxxForSequenceClassification` model.
The same approach applies to other models like `xxxForQuestionAnswering`.
Could you try it and let me know if you still have problem regarding this, please?
In the meantime, I am going to check the loss 0.0.<|||||>@patrickvonplaten
I think we need to update `PT_QUESTION_ANSWERING_SAMPLE`.
The indices (hard-coded) below
```python
>>> target_start_index, target_end_index = torch.tensor([14]), torch.tensor([15])
```
actually depends on the different tokenizers. See the code snippet below for `Albert` v.s. `Roberta`.
Let me know your thought on this, thanks!
## Code Snippet
```python
from transformers import AlbertTokenizer, AlbertForQuestionAnswering, RobertaTokenizer
import torch
albert_checkpoint = "twmkn9/albert-base-v2-squad2"
roberta_checkpoint = "deepset/roberta-base-squad2"
albert_tokenizer = AlbertTokenizer.from_pretrained(f"{albert_checkpoint}")
roberta_tokenizer = RobertaTokenizer.from_pretrained(f"{roberta_checkpoint}")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
albert_input_ids = albert_tokenizer(question, text, return_tensors="pt").input_ids.numpy().tolist()[0]
roberta_input_ids = roberta_tokenizer(question, text, return_tensors="pt").input_ids.numpy().tolist()[0]
albert_decoded_tokens = albert_tokenizer.convert_ids_to_tokens(albert_input_ids)
roberta_decoded_tokens = roberta_tokenizer.convert_ids_to_tokens(roberta_input_ids)
# Albert
print(f"Albert: tokens = {albert_decoded_tokens}")
print(f"Albert: num_tokens = {len(albert_decoded_tokens)}")
print(f"Albert: position of `_nice`: {albert_decoded_tokens.index('▁nice')}\n")
# Roberta
print(f"Roberta: tokens = {roberta_decoded_tokens}")
print(f"Roberta: num_tokens = {len(roberta_decoded_tokens)}")
print(f"Roberta: position of `Ġnice`: {roberta_decoded_tokens.index('Ġnice')}")
```
### Outputs
```
Albert: tokens = ['[CLS]', '▁who', '▁was', '▁jim', '▁henson', '?', '[SEP]', '▁jim', '▁henson', '▁was', '▁a', '▁nice', '▁puppet', '[SEP]']
Albert: num_tokens = 14
Albert: position of `_nice`: 11
Roberta: tokens = ['<s>', 'Who', 'Ġwas', 'ĠJim', 'ĠH', 'enson', '?', '</s>', '</s>', 'Jim', 'ĠH', 'enson', 'Ġwas', 'Ġa', 'Ġnice', 'Ġpuppet', '</s>']
Roberta: num_tokens = 17
Roberta: position of `Ġnice`: 14
```<|||||>@ydshieh Thank you very much for your clear explanation. Now I understand why we should do it to get over the problem with `fix-copies` 🙏 <|||||>Regarding the issue mentioned [here](https://github.com/huggingface/transformers/pull/16523#issuecomment-1088635272), one solution is to pass `target_start_index` and `target_end_index` to the sample defined in `doc.py` (we need to update the sample though to accept these).
I don't think there is a super good heuristic to determine these targets in the sample directly. Even with these 2 tokenizers, we already have `'▁nice'` v.s. `'Ġnice'`.
Let's wait Patrick's response.
<|||||>Hey @ydshieh and @vumichien,
IMO the best thing we can and should do here is to let the user pass the label idx.<|||||>@vumichien You can ignore the failed test ` run_tests_hub `. I will check the PR later<|||||>@ydshieh Thank you very much<|||||>All good :-) I merge it now, thank you again ❤️ @vumichien <|||||>@ydshieh You, too. Thanks a lot for helping with this PR 🙏 |
transformers | 16,522 | closed | avoid nan loss in masked lm | # What does this PR do?
* Fixes a case where the loss returned from a masked LM head can be NaN when the collator hasn't masked anything (a sketch of one possible guard follows below). This is mainly a risk with low batch sizes and short texts.
* This likely affects quite a lot of models, and may be the primary cause of infrequent nan warnings when tuning models.
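One possible guard, sketched here for illustration (not necessarily the exact patch in this PR):

```python
import torch
from torch.nn import CrossEntropyLoss


def masked_lm_loss(prediction_scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over masked positions; returns a graph-connected zero if nothing was masked."""
    vocab_size = prediction_scores.size(-1)
    if (labels != -100).any():
        # ignore_index defaults to -100, so only masked positions contribute
        return CrossEntropyLoss()(prediction_scores.view(-1, vocab_size), labels.view(-1))
    # all labels are -100: the mean reduction would divide by zero and yield NaN
    return prediction_scores.sum() * 0.0
```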
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Did you write any new necessary tests?
- Not yet, but tested on a small example, and confirmed NaN losses in validation and training are gone.
- [ ] If approved, port to other LM heads
## Who can review?
| 03-31-2022 15:39:43 | 03-31-2022 15:39:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16522). All of your documentation changes will be reflected on that endpoint.<|||||>@LysandreJik please have a look and tell me if this is something you think should be expanded or ignored. Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,521 | closed | Add use_auth to load_datasets for private datasets to PT and TF examples | # What does this PR do?
As per https://github.com/huggingface/transformers/issues/16235, this PR adds the capability to run examples that use the Trainer on private datasets, by passing the `use_auth_token` argument to the `load_dataset` calls.
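For reference, the `datasets` call this enables looks roughly like the following (the dataset name is a placeholder):
```python
from datasets import load_dataset

# "my-org/my-private-dataset" is a placeholder; with `use_auth_token=True`,
# `datasets` sends the token saved by `huggingface-cli login`.
raw_datasets = load_dataset("my-org/my-private-dataset", use_auth_token=True)
```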
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 03-31-2022 15:03:23 | 03-31-2022 15:03:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger sure thing! I had already added for TF and have just added for Flax. Some examples didn't have `use_auth_token` as a model argument so I added that as well.<|||||>Thanks a lot for all your work on this! |
transformers | 16,520 | closed | Token classification pipeline results different with tokenizers==0.11.6 vs tokenizers==0.12.0 | I'm not sure if this is an issue with transformers, an issue with tokenizers or expected behavior. But when running the token classification pipeline with an aggregation_strategy="simple" the results are slightly different with tokenizers==0.12.0.
The following code produces different results (both examples use transformers==4.17.0).
```python
from transformers import pipeline
nlp = pipeline("token-classification")
nlp("Hugging Face Inc. is a company based in New York City", aggregation_strategy="simple")
```
With tokenizers==0.11.6:
```
[{'entity_group': 'ORG', 'score': 0.99305606, 'word': 'Hugging Face Inc', 'start': 0, 'end': 16}, {'entity_group': 'LOC', 'score': 0.9988098, 'word': 'New York City', 'start': 40, 'end': 53}]
```
With tokenizers==0.12.0:
```
[{'entity_group': 'ORG', 'score': 0.99305606, 'word': ['Hu', 'gging', ' Face', ' Inc'], 'start': 0, 'end': 16}, {'entity_group': 'LOC', 'score': 0.9988098, 'word': ['New', ' York', ' City'], 'start': 40, 'end': 53}]
```
| 03-31-2022 14:50:49 | 03-31-2022 14:50:49 | cc @SaulLu @Narsil <|||||>There's a proposed PR here : https://github.com/huggingface/transformers/pull/16537<|||||>Tokenization tests were run before releasing `0.12` but not the pipeline tests :(<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,519 | closed | [research] link to the XTREME-S paper | # What does this PR do?
Link to the XTREME-S paper before the benchmark announcement | 03-31-2022 14:31:16 | 03-31-2022 14:31:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,518 | closed | Enable doc in Spanish | # What does this PR do?
This PR reorganizes the structure of the docs folder to enable multilingual support and adds the building instructions to make the doc in English and Spanish on each PR/build. | 03-31-2022 14:27:04 | 03-31-2022 14:27:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,517 | closed | Use random_attention_mask for TF tests | # What does this PR do?
- Change TF's `random_attention_mask` to match its PT/Flax equivalent.
- Use `random_attention_mask` defined in `test_modeling_tf_common.py` to generate attention mask in TF tests.
- so TF code has the same logic as in PT/Flax tests (regarding this attention mask part in tests)
- avoid large difference between PT/TF outputs. (In particular, `TFGPT2EncoderDecoderModelTest` in [here](https://github.com/huggingface/transformers/issues/16497))
- In the case of `TFBERTEncoderDecoderModelTest` or `TFGPT2EncoderDecoderModelTest`, it is caused by some sequence in a batch which gets all 0s as its attention mask (generated by `ids_tensor`) - this may happen on both the encoder and the decoder (especially after combining with the causal mask).
## More context
Currently, most TF tests still use
```python
input_mask = ids_tensor([self.batch_size, self.seq_length], vocab_size=2)
```
while in PT/Flax tests, they call
```
input_mask = random_attention_mask([self.batch_size, self.seq_length])
```
(defined in the common test file).
In particular, `random_attention_mask` has
```
# make sure that at least one token is attended to for each batch
attn_mask[:, -1] = 1
```
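For illustration, a TF version of the same idea could look roughly like this (a sketch only, not necessarily the exact helper used in the tests):
```python
import tensorflow as tf

def random_attention_mask(shape):
    # random 0/1 mask, similar to ids_tensor(shape, vocab_size=2)
    mask = tf.random.uniform(shape, minval=0, maxval=2, dtype=tf.int32)
    # make sure that at least one token is attended to for each batch:
    # force the last position of every sequence to 1
    return tf.concat([mask[:, :-1], tf.ones_like(mask[:, -1:])], axis=-1)
```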
| 03-31-2022 14:08:39 | 03-31-2022 14:08:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Interesting. It moves the column of 1s for the start to the end and now becomes like a left-padded input. It could help with GPT-2, indeed
(If you are interested in a bit more detail, @gante )
Actually, moving the 1 to the end can cause a problem (when a model uses a causal mask). This is why I needed to update the code in `TFCLIPModelTest`.
In general, the current library has an issue when the final attention mask (after combining the causal mask, if any) received by the attention layer contains a sequence in the batch whose mask is all 0s. One thing involved (though maybe not the only one) is the different fill values used (-1e4, -1e9, -1e30, -inf).
Putting the 1 at the start avoids this situation (when combining the causal mask).
(But I don't want to change the PT/Flax logic in this PR. This should be addressed in a separate PR after discussion.)
Regarding the tests like `TFGPT2EncoderDecoderModelTest`, this PR only helps partially (the encoder part). The decoder part needs extra logic for now (to address the above situation regarding the causal mask)
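As a tiny illustration of why a fully-masked row is problematic (whatever large negative fill value is used): the softmax over an all-masked row degenerates to a uniform distribution instead of attending to nothing.
```python
import tensorflow as tf

# attention scores for one query whose key positions are all masked out
scores = tf.constant([[-1e9, -1e9, -1e9]])
print(tf.nn.softmax(scores, axis=-1))  # [[0.333 0.333 0.333]] -> uniform over masked positions
```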
<|||||>Hi, @Rocketknight1,
Yes, all the points are right -- except
- I am not sure about the statement `and slightly increases the expected number of unmasked tokens in each input.`: I would say that is not the case here, but I might be misunderstanding the sentence.
- `but guarantees that at least one token will have a value of 1`:
  - Yes, but it does not guarantee the same thing for the final attention mask used by the attention layers to compute the softmax - because the final mask might be the one obtained after **combining the causal mask** (for decoder models).
- Some more future PRs to improve these kinds of things.
<|||||>That makes sense! And my comment about "increases the expected number of unmasked tokens" was just an irrelevant observation - the average number of unmasked tokens is very slightly larger since we guarantee that one of them will have value 1. Ignore me! |
transformers | 16,516 | closed | Fix syntax error in generate docstrings | # What does this PR do?
Fix a syntax error in the generate docstring.
Fixes #16515 | 03-31-2022 12:06:01 | 03-31-2022 12:06:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,515 | closed | Typo in documentation. | Under parameters for `GenerationMixin.generate()`
there is no new bullet point for `prefix_allowed_tokens_fn`, and the content for it is under [`diversity_penalty`](https://huggingface.co/docs/transformers/v4.13.0/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate.diversity_penalty).
This typo only exists for versions after the document UI update (>= 4.13.0).
@sgugger | 03-31-2022 10:10:28 | 03-31-2022 10:10:28 | Thanks for flagging, the PR mentioned above will fix this issue. |
transformers | 16,514 | closed | Adding Bigscience 176B parameters model | # What does this PR do?
This PR adds the BigScience 176B-parameter model to transformers.
The model architecture is detailed here: https://bigscience.notion.site/BigScience-176B-Model-Training-ad073ca07cdf479398d5f95d88e218c4
The training of the model can be followed here: https://twitter.com/BigScienceLLM
The model is trained using 3D parallelism with a combination of data (DP), pipeline (PP) and tensor (TP) parallelisms (see an illustration [here](https://twitter.com/BigScienceLLM/status/1506588988278198273?s=20&t=mxyFZQvVTM3YFDaQeHvlHQ)). The training code is a fork of NVIDIA/Microsoft Megatron-DeepSpeed and can be found here: https://github.com/bigscience-workshop/Megatron-DeepSpeed)
For integration in transformers, the following approach is being currently taken:
- focus on using the model in inference only (will impact PP questions - no backward pass, no bubble - may also impact the need for TP - we can also focus on batch-size=1 for now => no DP)
- try to be able to use the model on a rather small workstation (with all caveats but goal is to get the integration to work on less than the 48 GPUs currently used for training)
- be very pragmatic - we can always develop optimized version of the model/code later
# Current status:
This PR is still a non-working draft.
During the integration, a potential bug in the original training code was discovered: the layer norms on the parallel branches of the TP direction were not identical, contrary to expectations. The integration was then paused to investigate the situation and should resume once the issue is solved on the training side.
| 03-31-2022 09:51:00 | 03-31-2022 09:51:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,513 | closed | Onnx export Data2vexAudio ValueError: Model and config inputs doesn't match |
- data2vec: @patrickvonplaten, @anton-l @edugp
The problem arises when using:
* [ ] my own modified scripts: (give details below)
```
from typing import Mapping, OrderedDict
from transformers.onnx import OnnxConfig
from transformers import AutoConfig
from pathlib import Path
from transformers.onnx import export
from transformers import AutoTokenizer, AutoModel
class Data2VecAudioOnnxConfig(OnnxConfig):
@property
def inputs(self):
return OrderedDict(
[
("input_values", {0: "batch", 1: "sequence"}),
("attention_mask", {0: "batch", 1: "sequence"}),
]
)
config = AutoConfig.from_pretrained("facebook/data2vec-audio-base-960h")
onnx_config = Data2VecAudioOnnxConfig(config)
onnx_path = Path("facebook/data2vec-audio-base-960h")
model_ckpt = "facebook/data2vec-audio-base-960h"
base_model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```
errors throws
```
ValueError Traceback (most recent call last)
/var/folders/2t/0w65vdjs2m32w5mmzzgtqrhw0000gn/T/ipykernel_59977/667985886.py in <module>
27 tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
28
---> 29 onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
~/miniconda3/lib/python3.9/site-packages/transformers/onnx/convert.py in export(tokenizer, model, config, opset, output)
255
256 if is_torch_available() and issubclass(type(model), PreTrainedModel):
--> 257 return export_pytorch(tokenizer, model, config, opset, output)
258 elif is_tf_available() and issubclass(type(model), TFPreTrainedModel):
259 return export_tensorflow(tokenizer, model, config, opset, output)
~/miniconda3/lib/python3.9/site-packages/transformers/onnx/convert.py in export_pytorch(tokenizer, model, config, opset, output)
112
113 if not inputs_match:
--> 114 raise ValueError("Model and config inputs doesn't match")
115
116 config.patch_ops()
ValueError: Model and config inputs doesn't match
```
| 03-31-2022 09:47:38 | 03-31-2022 09:47:38 | Wow very cool to see that you guys are already working on Onnx and Data2VecAudio :partying_face:
Pinging @anton-l and @lewtun here :-) <|||||>Hi @xiadingZ this is cool indeed!
I think the error message is coming from the fact that the base `generate_dummy_inputs()` method of `OnnxConfig` doesn't return inputs that match those the model expects. What I suggest is overriding this method to produce the desired inputs - you can check out the configuration files of models like LayoutLM to see what we've done for these type of modalities<|||||>@lewtun thanks very much! I succeed exporting `Data2vecAudio` to `onnx`. But `onnxruntime` can't load this model because of `shape inference`, can you give some suggestion?
code
```
from collections import Mapping, OrderedDict
from transformers.onnx import OnnxConfig
from transformers import AutoConfig,PretrainedConfig, PreTrainedTokenizer, TensorType
from pathlib import Path
from transformers.onnx import export
from transformers import AutoTokenizer, AutoModel
import torch
class Data2VecAudioOnnxConfig(OnnxConfig):
@property
def inputs(self):
return OrderedDict(
[
("input_values", {0: "batch", 1: "sequence"}),
("attention_mask", {0: "batch", 1: "sequence"}),
]
)
def generate_dummy_inputs(
self,
tokenizer: PreTrainedTokenizer,
batch_size: int = -1,
seq_length: int = 151680,
is_pair: bool = False,
framework = None,
):
"""
Generate inputs to provide to the ONNX exporter for the specific framework
Args:
tokenizer: The tokenizer associated with this model configuration
batch_size: The batch size (int) to export the model for (-1 means dynamic axis)
seq_length: The sequence length (int) to export the model for (-1 means dynamic axis)
is_pair: Indicate if the input is a pair (sentence 1, sentence 2)
framework: The framework (optional) the tokenizer will generate tensor for
Returns:
Mapping[str, Tensor] holding the kwargs to provide to the model's forward function
"""
input_dict = super().generate_dummy_inputs(tokenizer, batch_size, seq_length, is_pair, framework)
if not framework == TensorType.PYTORCH:
raise NotImplementedError("Exporting Data2VecAudio to ONNX is currently only supported for PyTorch.")
batch_size, seq_length = input_dict["input_ids"].shape
input_dict["input_values"] = input_dict["input_ids"].to(torch.float)
input_dict['attention_mask'] = input_dict['attention_mask'].to(torch.int)
del input_dict['input_ids']
return input_dict
model_ckpt = "data2vec_model"
base_model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
config = AutoConfig.from_pretrained("data2vec_model")
onnx_config = Data2VecAudioOnnxConfig(config)
onnx_path = Path("model.onnx")
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, 14, onnx_path)
import onnxruntime
ort_session = onnxruntime.InferenceSession("model.onnx",providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])
print("Exported model has been tested with ONNXRuntime, and the result looks good!")
```
errors
```
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
<ipython-input-6-5b62b3b0329e> in <module>
1 import onnxruntime
2
----> 3 ort_session = onnxruntime.InferenceSession("model.onnx",providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])
4 print("Exported model has been tested with ONNXRuntime, and the result looks good!")
/opt/conda/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in __init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
333
334 try:
--> 335 self._create_inference_session(providers, provider_options, disabled_optimizers)
336 except ValueError:
337 if self._enable_fallback:
/opt/conda/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in _create_inference_session(self, providers, provider_options, disabled_optimizers)
377
378 # initialize the C++ InferenceSession
--> 379 sess.initialize_session(providers, provider_options, disabled_optimizers)
380
381 self._sess = sess
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:925 SubGraphCollection_t
onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const
onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: 573 has no shape specified.
Please run shape inference on the onnx model first. Details can be found in
https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-
tensorrt-subgraphs
```<|||||>Hi @xiadingZ this looks like an issue with the TensorRT backend not supporting all operators in the ONNX graph: https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs
To eliminate other causes, could you run the inference session with just `CPUExecutionProvider`?<|||||>yes, you are right @lewtun . inference session can succeed with `'CUDAExecutionProvider', 'CPUExecutionProvider'`.
can you give some suggestion about how to eliminate this error with `TensorrtExecutionProvider `, such as find the unsupported operator and write the converter to tensor-rt<|||||>Hi @xiadingZ my suggestion would be to try feeding your exported ONNX model through this script: https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/symbolic_shape_infer.py
That should help infer the tensor shapes, but please report back to let us know if it works or not. We might have to take a closer look at the problematic ops in the graph to enable support for TensorRT out of the box<|||||>Hi @lewtun. It throws error:
```
Traceback (most recent call last):
File "onnxruntime/onnxruntime/python/tools/symbolic_shape_infer.py", line 2117, in <module>
args.guess_output_rank, args.verbose)
File "onnxruntime/onnxruntime/python/tools/symbolic_shape_infer.py", line 2083, in infer_shapes
raise Exception("Incomplete symbolic shape inference")
Exception: Incomplete symbolic shape inference
```
You can try with this script:
```
from collections import Mapping, OrderedDict
from transformers.onnx import OnnxConfig
from transformers import AutoConfig,PretrainedConfig, PreTrainedTokenizer, TensorType
from pathlib import Path
from transformers.onnx import export
from transformers import AutoTokenizer, AutoModel
import torch
import onnxruntime
from transformers import Wav2Vec2Processor, Data2VecAudioModel, Data2VecAudioConfig
class Data2VecAudioOnnxConfig(OnnxConfig):
@property
def inputs(self):
return OrderedDict(
[
("input_values", {0: "batch", 1: "sequence"}),
("attention_mask", {0: "batch", 1: "sequence"}),
]
)
def generate_dummy_inputs(
self,
tokenizer: PreTrainedTokenizer,
batch_size: int = -1,
seq_length: int = 151680,
is_pair: bool = False,
framework = None,
):
"""
Generate inputs to provide to the ONNX exporter for the specific framework
Args:
tokenizer: The tokenizer associated with this model configuration
batch_size: The batch size (int) to export the model for (-1 means dynamic axis)
seq_length: The sequence length (int) to export the model for (-1 means dynamic axis)
is_pair: Indicate if the input is a pair (sentence 1, sentence 2)
framework: The framework (optional) the tokenizer will generate tensor for
Returns:
Mapping[str, Tensor] holding the kwargs to provide to the model's forward function
"""
input_dict = super().generate_dummy_inputs(tokenizer, batch_size, seq_length, is_pair, framework)
if not framework == TensorType.PYTORCH:
raise NotImplementedError("Exporting Data2VecAudio to ONNX is currently only supported for PyTorch.")
batch_size, seq_length = input_dict["input_ids"].shape
input_dict["input_values"] = input_dict["input_ids"].to(torch.float)
input_dict['attention_mask'] = input_dict['attention_mask'].to(torch.int)
del input_dict['input_ids']
return input_dict
#model_ckpt = "data2vec_model"
model_ckpt = "facebook/data2vec-audio-base-960h"
base_model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
config = AutoConfig.from_pretrained("facebook/data2vec-audio-base-960h")
onnx_config = Data2VecAudioOnnxConfig(config)
onnx_path = Path("model.onnx")
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, 14, onnx_path)
```<|||||>Hello, @lewtun . I try with this command and it succeed, doesn't throw error.
```
python onnxruntime/onnxruntime/python/tools/symbolic_shape_infer.py --input model.onnx --output new_model.onnx --auto_merge
```
but I run onnxruntime infer with `CUDAExecutionProvider`, it throws `Ops error`
```
import soundfile as sf
import onnxruntime
# sample_rate is 16000
singnal, sample_rate = sf.read('test.flac')
ort_session = onnxruntime.InferenceSession("model.onnx",providers=['CUDAExecutionProvider'])
inputs_onnx = {k: v.cpu().detach().numpy() for k, v in inputs.items()}
ort_outs = ort_session.run(None, inputs_onnx)
print("Exported model has been tested with ONNXRuntime, and the result looks good!")
```
error is:
```
2022-04-19 14:31:13.232096464 [E:onnxruntime:, sequential_executor.cc:364 Execute] Non-zero status code returned while running Where node. Name:'Where_389' Status Message: Where_389: condition operand cannot broadcast on dim 1 Condition Shape: {1,292}, X Shape: {1,292,768}, Y Shape: {}
---------------------------------------------------------------------------
Fail Traceback (most recent call last)
Input In [11], in <cell line: 7>()
3 ort_session = onnxruntime.InferenceSession("model.onnx",providers=['CUDAExecutionProvider'])
4 inputs_onnx = {k: v.cpu().detach().numpy() for k, v in inputs.items()}
----> 7 ort_outs = ort_session.run(None, inputs_onnx)
9 # compare ONNX Runtime and PyTorch results
10 np.testing.assert_allclose(to_numpy(outputs[0]), ort_outs[0], rtol=1e-03, atol=1e-05)
File /opt/conda/envs/py39/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:192, in Session.run(self, output_names, input_feed, run_options)
190 output_names = [output.name for output in self._outputs_meta]
191 try:
--> 192 return self._sess.run(output_names, input_feed, run_options)
193 except C.EPFail as err:
194 if self._enable_fallback:
Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Where node. Name:'Where_389' Status Message: Where_389: condition operand cannot broadcast on dim 1 Condition Shape: {1,292}, X Shape: {1,292,768}, Y Shape: {}
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,512 | closed | ONNX exported BART model with seq2seq-lm-with-past feature produces error on the initial run. | ## Environment info
- `transformers` version: 4.17.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.13
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
@lewtun
@patil-suraj
## Information
Model I am using (Bert, XLNet ...): BART (facebook/bart-base)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
I am trying to use BART model extracted to ONNX with feature seq2seq-lm-with-past.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Extract model with `python -m transformers.onnx --model=facebook/bart-base --feature seq2seq-lm-with-past exported2 --opset 12 --atol 0.0001`
2. Run script below
```python
import onnxruntime as rt
import numpy as np
sess_options = rt.SessionOptions()
sess_options.graph_optimization_level = (
rt.GraphOptimizationLevel.ORT_DISABLE_ALL
)
model = rt.InferenceSession("exported2\model.onnx", sess_options=sess_options,)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("npc-engine/bart-base-mse-light")
DTYPE_MAP = {
'tensor(int64)': np.int64,
'tensor(float)': np.float32,
'tensor(double)': np.float64,
'tensor(int32)': np.int32,
}
def create_starter_inputs(prompt: str = "", is_encdec: bool = False):
"""Create starter inputs for the model.
Args:
prompt: Prompt to start generation from.
Returns:
Dict of inputs to the model
"""
model_inputs = model.get_inputs()
tokens = tokenizer.encode(prompt)
inputs = {}
dtypes = {i.name: DTYPE_MAP[i.type] for i in model_inputs}
if is_encdec:
prompt_start = tokens[-1]
inputs['input_ids'] = np.asarray(tokens[:-1], dtype=dtypes['input_ids']).reshape([1, -1])
inputs['decoder_input_ids'] = np.asarray([prompt_start, prompt_start], dtype=dtypes['decoder_input_ids']).reshape([1, 2])
inputs['attention_mask'] = np.ones_like(inputs['input_ids'], dtype=dtypes['attention_mask'])
inputs['decoder_attention_mask'] = np.ones_like(inputs['decoder_input_ids'], dtype=dtypes['decoder_attention_mask'])
inputs['decoder_attention_mask'][0][-1] = 0
else:
inputs['input_ids'] = np.asarray(tokens, dtype=dtypes['input_ids']).reshape([1, -1])
inputs['attention_mask'] = np.ones_like(inputs['input_ids'], dtype=dtypes['attention_mask'])
shape_dict = {
"batch": 1,
"past_encoder_sequence": 0,
"past_decoder_sequence": 0,
"past_sequence + sequence": 0,
}
for i in model_inputs:
if "past_key_values" in i.name:
shape_tuple = [shape_dict.get(dim, dim) for dim in i.shape]
inputs[i.name] = np.empty(shape_tuple, dtype=dtypes[i.name])
return inputs
inp = create_starter_inputs("Hello", True)
model.run(None, inp)
```
This produces an error
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_945' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:26 onnxruntime::ReshapeHelper::ReshapeHelper i < input_shape.NumDimensions() was false. The dimension with value zero exceeds the dimension size of the input tensor.
```
GPT-2 exported model with causal-lm-with-past feature works with the same function (with is_encdec = False).
## Expected behavior
It should produce output from the ONNX model.
Also, the model expects `decoder_input_ids` of shape (batch, 2), but it should be (batch, 1).
| 03-31-2022 08:35:02 | 03-31-2022 08:35:02 | Fixing model `decoder_input_ids` expected shape from (batch, 2) to (batch, 1) doesn't help with this error.<|||||>Changing the shape of PKVs and masks changes error.
Here is the snippet to modify input shapes:
```python
BATCH = 1
PAST_SEQ_LEN = 2
model_inputs = model.get_inputs()
dtypes = {i.name: DTYPE_MAP[i.type] for i in model_inputs}
inp['attention_mask'] = np.ones([BATCH , PAST_SEQ_LEN + 4], dtype=dtypes['attention_mask'])
inp['decoder_attention_mask'] = np.ones([BATCH, PAST_SEQ_LEN + 1], dtype=dtypes['decoder_attention_mask'])
shape_dict = {
"batch": BATCH ,
"past_encoder_sequence": PAST_SEQ_LEN ,
"past_decoder_sequence": PAST_SEQ_LEN ,
"past_sequence + sequence": 0,
}
for i in model_inputs:
if "past_key_values" in i.name:
shape_tuple = [shape_dict.get(dim, dim) for dim in i.shape]
inp[i.name] = np.empty(shape_tuple, dtype=dtypes[i.name])
```
Run it after input dict creation and before model execution
1.
```python
BATCH = 1
PAST_SEQ_LEN = 2
```
produces error
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:'Add_1766' Status Message: D:\a\_work\1\s\onnxruntime\core/providers/cpu/math/element_wise_ops.h:503 onnxruntime::BroadcastIterator::Init axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 2 by 6
```
2.
```python
BATCH = 1
PAST_SEQ_LEN = 1
```
produces error
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_1776' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,16,1,5}, requested shape:{16,1,1}
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,511 | closed | decoder | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
| 03-31-2022 08:25:54 | 03-31-2022 08:25:54 | |
transformers | 16,510 | closed | CANINE model gets different logits for different batch sizes | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@NielsRogge
## Information
Model I am using: CANINE
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: Simple script that is passing some input to the model and gets logits.
The tasks I am working on is: NA, there is no task, I'm just trying to get logits from the model.
## To reproduce
Steps to reproduce the behavior:
1. Create CANINE tokenizer.
2. Create CANINE model for sequence classification with 2 labels.
3. Pass the same input to the model with multiple batch sizes.
4. For the same example, you will get different logits.
```python
import torch
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('google/canine-c')
model = AutoModelForSequenceClassification.from_pretrained('google/canine-c', num_labels=2)
text = ['a', 'ab', 'abc', 'abcd', 'abcde', 'abcdef']
encoded = tokenizer(text, padding=True, return_tensors='pt')
model.eval()
with torch.no_grad():
print(model(**encoded))
# Output logits:
# "a": [0.1325, 0.1801]
# "ab": [0.1764, 0.2585]
# "abc": [0.0522, 0.0851]
# "abcd": [0.0701, 0.1577]
# "abcde": [0.1030, 0.0847]
# "abcdef": [0.0565, 0.1216]
encoded = tokenizer(text[:3], padding=True, return_tensors='pt')
model.eval()
with torch.no_grad():
print(model(**encoded))
# Output logits:
# "a": [0.1325, 0.1801]
# "ab": [0.1764, 0.2585]
# "abc": [0.1694, 0.2958] -> This is different from the first ([0.0522, 0.0851]) output.
encoded = tokenizer(text[1:4], padding=True, return_tensors='pt')
model.eval()
with torch.no_grad():
print(model(**encoded))
# Output logits:
# "ab": [0.1764, 0.2585]
# "abc": [0.1694, 0.2958] -> This is different from the first ([0.0522, 0.0851]) output.
# "abcd": [0.0552, 0.2409] -> This is different from the first ([0.0701, 0.1577]) output.
```
## Expected behavior
I expect the logits to be the same regardless of the batch size passed to the model.
| 03-31-2022 06:59:13 | 03-31-2022 06:59:13 | Note: I tested the same code with `microsoft/mdeberta-v3-base` model and I got the same output every time.<|||||>Hi @NielsRogge, could you please take a look ?.?<|||||>Hi,
Thanks for raising this issue. It probably has to do with the fact that CANINE internally downsamples the sequence length first (from characters to what the authors call "molecules"), before forwarding the input through the Transformer encoder. This isn't the case for regular Transformer encoders like BERT or mDeBERTa.
So when you provide:
```
text = ['a', 'ab', 'abc', 'abcd', 'abcde', 'abcdef']
```
to `CanineTokenizer`, it will pad them all up to the length of 'abcdef'. As also the [CLS] and [PAD] tokens are added, each sequence will be padded up to 6 + 2 = 8 tokens (6 because there are 6 characters in 'abcdef').
However, when only taking:
```
text = ['a', 'ab', 'abc']
```
, then everything will be padded up to 3 + 2 = 5 tokens.
The CANINE model has the `downsampling_rate` attribute in its config, which is set to 4. Hence, when forwarding the 8 tokens through the model, the 8 tokens will get downsampled to a sequence of 8/4 = 2 tokens, which are forward through the Transformer encoder. Hence the output of `CanineModel` will have a sequence length of 2.
However, when providing 5 tokens to the model, these will get downsampled to 5 // 2 = 2 tokens as well, but in a different way (due to the hashing technique, of which the details can be found in [this class](https://github.com/huggingface/transformers/blob/60d27b1f152c181705191765661967fef3016cef/src/transformers/models/canine/modeling_canine.py#L298)). This causes the result to be different.<|||||>I have two questions here:
1. What is the need of the attention mask if it will not discard the padding?
2. If I set the down sampling rate to 1, I will get the same output each time, right?
Thanks for your reply!<|||||>I just realized that if I want to set the down sampling rate to 1, I need to retrain the model, as the checkpoint will not be able to load. This is a problem actually (I mean the down sampling rate), because it affects the accuracy of the model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,509 | closed | Tensorflow language modeling example doesn't work with TPU in Colab | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: TPU with Colab
### Who can help
@Rocketknight1 @gante
## Information
Model I am using (Bert, XLNet ...): distilbert-base-cased
The problem arises when using:
* [x] the official example scripts: https://github.com/huggingface/transformers/raw/v4.17.0/examples/tensorflow/language-modeling/run_mlm.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] the official example dataset: wikitext
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Open new Colab notebook session with TPU accelerator. (The example script is able to run normally on CPU in Colab following the same steps below.)
2. Run example language modeling script that matches installed transformer version with
```
!pip install transformers datasets
!wget https://github.com/huggingface/transformers/raw/v4.17.0/examples/tensorflow/language-modeling/run_mlm.py
!python run_mlm.py \
--model_name_or_path distilbert-base-cased \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1
```
4. Model training fails with error message:
```
Traceback (most recent call last):
File "run_mlm.py", line 562, in <module>
main()
File "run_mlm.py", line 537, in main
callbacks=[SavePretrainedCallback(output_dir=training_args.output_dir)],
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1191, in _numpy
raise core._status_to_exception(e) from None # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InternalError: 6 root error(s) found.
(0) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
```
5. Full log:
```
2022-03-31 03:54:06.522866: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
WARNING:__main__:We are training on TPU - forcing pad_to_max_length
Downloading builder script: 8.48kB [00:00, 5.75MB/s]
Downloading metadata: 6.84kB [00:00, 4.88MB/s]
Downloading and preparing dataset wikitext/wikitext-103-raw-v1 (download: 183.09 MiB, generated: 523.53 MiB, post-processed: Unknown size, total: 706.63 MiB) to /root/.cache/huggingface/datasets/wikitext/wikitext-103-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126...
Downloading data: 100% 192M/192M [00:04<00:00, 44.6MB/s]
Dataset wikitext downloaded and prepared to /root/.cache/huggingface/datasets/wikitext/wikitext-103-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126. Subsequent calls will reuse this data.
100% 3/3 [00:00<00:00, 195.25it/s]
https://huggingface.co/distilbert-base-cased/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpsvndtibf
Downloading: 100% 411/411 [00:00<00:00, 354kB/s]
storing https://huggingface.co/distilbert-base-cased/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
creating metadata file for /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
Model config DistilBertConfig {
"_name_or_path": "distilbert-base-cased",
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"output_past": true,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.17.0",
"vocab_size": 28996
}
https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer_config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpq4ymtf69
Downloading: 100% 29.0/29.0 [00:00<00:00, 18.1kB/s]
storing https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer_config.json in cache at /root/.cache/huggingface/transformers/81e970e5e6ec68be12da0f8f3b2f2469c78d579282299a2ea65b4b7441719107.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f
creating metadata file for /root/.cache/huggingface/transformers/81e970e5e6ec68be12da0f8f3b2f2469c78d579282299a2ea65b4b7441719107.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f
loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
Model config DistilBertConfig {
"_name_or_path": "distilbert-base-cased",
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"output_past": true,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.17.0",
"vocab_size": 28996
}
https://huggingface.co/distilbert-base-cased/resolve/main/vocab.txt not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmps9iefq03
Downloading: 100% 208k/208k [00:00<00:00, 1.82MB/s]
storing https://huggingface.co/distilbert-base-cased/resolve/main/vocab.txt in cache at /root/.cache/huggingface/transformers/ba377304984dc63e3ede0e23a938bbbf04d5c3835b66d5bb48343aecca188429.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
creating metadata file for /root/.cache/huggingface/transformers/ba377304984dc63e3ede0e23a938bbbf04d5c3835b66d5bb48343aecca188429.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp5rvetabd
Downloading: 100% 426k/426k [00:00<00:00, 3.12MB/s]
storing https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer.json in cache at /root/.cache/huggingface/transformers/acb5c2138c1f8c84f074b86dafce3631667fccd6efcb1a7ea1320cf75c386a36.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
creating metadata file for /root/.cache/huggingface/transformers/acb5c2138c1f8c84f074b86dafce3631667fccd6efcb1a7ea1320cf75c386a36.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
loading file https://huggingface.co/distilbert-base-cased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/ba377304984dc63e3ede0e23a938bbbf04d5c3835b66d5bb48343aecca188429.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
loading file https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/acb5c2138c1f8c84f074b86dafce3631667fccd6efcb1a7ea1320cf75c386a36.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
loading file https://huggingface.co/distilbert-base-cased/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/distilbert-base-cased/resolve/main/special_tokens_map.json from cache at None
loading file https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/81e970e5e6ec68be12da0f8f3b2f2469c78d579282299a2ea65b4b7441719107.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f
loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
Model config DistilBertConfig {
"_name_or_path": "distilbert-base-cased",
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"output_past": true,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.17.0",
"vocab_size": 28996
}
Running tokenizer on every text in dataset: 60% 3/5 [00:00<00:00, 5.09ba/s]Token indices sequence length is longer than the specified maximum sequence length for this model (546 > 512). Running this sequence through the model will result in indexing errors
Running tokenizer on every text in dataset: 100% 5/5 [00:00<00:00, 6.13ba/s]
Running tokenizer on every text in dataset: 100% 1802/1802 [05:23<00:00, 5.57ba/s]
Running tokenizer on every text in dataset: 100% 4/4 [00:00<00:00, 6.25ba/s]
Grouping texts in chunks of 512: 100% 5/5 [00:00<00:00, 8.99ba/s]
Grouping texts in chunks of 512: 100% 1802/1802 [03:46<00:00, 7.95ba/s]
Grouping texts in chunks of 512: 100% 4/4 [00:00<00:00, 7.70ba/s]
INFO:__main__:Validation file not found: using 5% of the dataset as validation as provided in data_args
INFO:__main__:Sample 167621 of the training set: {'input_ids': [23782, 1118, 170, 10268, 1210, 1553, 1768, 2273, 1118, 26835, 1389, 1107, 1103, 1509, 5251, 119, 102, 101, 102, 101, 134, 134, 134, 1305, 1264, 134, 134, 134, 102, 101, 102, 101, 1153, 1189, 1123, 1569, 1264, 1963, 1107, 1384, 1165, 1131, 3042, 1107, 1103, 146, 2924, 26447, 26298, 2348, 119, 1153, 1108, 2700, 1106, 4248, 1754, 1120, 1103, 1371, 3396, 3854, 2348, 1107, 1803, 117, 1141, 1104, 1565, 2139, 1150, 1307, 1111, 1103, 4317, 2883, 4553, 6838, 1107, 1103, 160, 2249, 2924, 13360, 119, 1130, 1351, 1333, 117, 1131, 1307, 1107, 170, 1210, 137, 118, 137, 1342, 2774, 1326, 1222, 1860, 119, 1130, 1333, 117, 1131, 1108, 170, 1420, 1104, 1103, 1264, 1115, 1307, 1107, 1103, 13586, 1635, 119, 1153, 2533, 1754, 1120, 1103, 1333, 1291, 2708, 1187, 1123, 1264, 1845, 2223, 119, 102, 101, 102, 101, 134, 134, 134, 134, 16541, 134, 134, 134, 134, 102, 101, 102, 101, 1153, 1108, 1226, 1104, 1103, 4394, 3086, 137, 118, 137, 2183, 1754, 1535, 112, 188, 1569, 14980, 3163, 1264, 117, 1227, 1112, 1103, 144, 18498, 1733, 117, 1120, 1103, 1369, 2659, 16541, 119, 1430, 1264, 2378, 1803, 4389, 137, 118, 137, 3862, 1107, 6957, 1147, 3086, 119, 1153, 1163, 1104, 1123, 1264, 112, 188, 1369, 2099, 117, 107, 1284, 1589, 1487, 1112, 170, 1264, 1541, 1218, 1105, 1412, 3086, 1110, 170, 4755, 1106, 170, 1974, 1104, 1662, 1250, 1105, 13314, 119, 107, 102, 101, 1130, 1357, 1349, 117, 1131, 1108, 1417, 1112, 1226, 1104, 1103, 2682, 1569, 4322, 1115, 1156, 4845, 1120, 1103, 19099, 6045, 2348, 1111, 1103, 1368, 2659, 16541, 1107, 1498, 119, 1153, 1108, 1103, 3495, 1104, 1103, 144, 18498, 1733, 1120, 1103, 1368, 2659, 16541, 119, 1130, 1103, 2284, 3086, 1342, 1222, 1860, 117, 1131, 1307, 1492, 131, 5507, 1904, 119, 1430, 1264, 1575, 3140, 137, 118, 137, 4650, 117, 1133, 2829, 170, 2878, 3086, 119, 1153, 2297, 122, 1553, 1105, 1125, 1300, 11174, 1107, 1103, 1342, 119, 102, 101, 102, 101, 134, 134, 2825, 7745, 1158, 134, 134, 102, 101, 102, 101, 1109, 144, 18498, 1733, 2604, 1106, 7044, 1111, 1103, 1446, 2659, 16541, 1107, 5470, 1260, 11502, 119, 26835, 1389, 1261, 1146, 21291, 1158, 117, 8692, 1118, 10906, 1513, 1318, 1279, 117, 1150, 2533, 1754, 1120, 1103, 1924, 2659, 2932, 1107, 7120, 119, 1556, 1123, 1302, 21551, 6439, 1264, 137, 118, 137, 16195, 1121, 12556, 12805, 5326, 2822, 117, 4548, 117, 1131, 1281, 2284, 1107, 1103, 23994, 13362, 159, 11964, 2260, 1306, 1105, 1103, 23994, 13362, 159, 1545, 6087, 1306, 4278, 1107, 1120, 1103, 4191, 2271, 159, 1161, 112, 170, 1291, 13995, 1105, 1998, 15782, 1116, 2708, 1120, 2161, 14812, 22839, 1113, 1103, 13122, 3331, 119, 102, 101, 102, 101, 102, 101, 134, 141, 28200, 11936, 12253, 25021, 8127, 2145, 14967, 134, 102, 101, 102, 101, 141, 28200, 11936, 12253, 25021, 8127, 2145, 14967, 1110, 170, 2865, 3695, 1388, 1120, 144, 1197, 17176, 2511, 1233, 17945, 19610, 1107, 9773, 117, 4323, 119, 1135, 2923, 1104, 1126, 8246, 19681, 1709, 6158, 117, 170, 1526, 1402, 1105, 1126, 9287, 2854, 23444, 783, 144, 1197, 17176, 2511, 8897, 1424], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'special_tokens_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 29184 of the training set: {'input_ids': [1104, 1300, 1904, 1105, 7423, 3071, 119, 1109, 1461, 112, 188, 4017, 1127, 1637, 1118, 13921, 138, 1197, 19977, 1161, 1105, 1103, 1390, 1108, 2766, 1118, 6242, 13428, 117, 1150, 1145, 1589, 1114, 138, 1197, 19977, 1161, 1113, 1117, 8281, 2362, 1312, 117, 17533, 8005, 119, 1109, 1461, 1108, 1666, 1118, 2499, 22104, 117, 4317, 6049, 1105, 13428, 1114, 2509, 1494, 1121, 138, 1197, 19977, 1161, 119, 6049, 1145, 1307, 1103, 3651, 1105, 6659, 7789, 1105, 1103, 2753, 132, 22104, 8630, 1103, 7505, 1105, 6316, 5349, 117, 1105, 3895, 8782, 1116, 1307, 1103, 3267, 119, 107, 154, 6592, 12640, 107, 1110, 1982, 1118, 138, 1197, 19977, 1161, 1114, 2509, 3582, 2172, 1121, 6242, 13428, 119, 1109, 1461, 1108, 1802, 1206, 1421, 8522, 1107, 4916, 131, 1109, 157, 12635, 7043, 117, 20984, 1116, 111, 14473, 1116, 117, 13784, 3982, 1324, 6125, 117, 1109, 15375, 9977, 140, 10587, 3464, 1105, 15375, 6935, 132, 1105, 1107, 8125, 21596, 2528, 5406, 1107, 2470, 1392, 119, 107, 154, 6592, 12640, 107, 1108, 3216, 1107, 6523, 4419, 4157, 5406, 1107, 1203, 1365, 1392, 1118, 10829, 144, 21536, 24493, 1182, 117, 1105, 20881, 1118, 20868, 1186, 11637, 2879, 1120, 1103, 8028, 1953, 5406, 1113, 1115, 1331, 119, 138, 1197, 19977, 1161, 6454, 1115, 107, 112, 154, 6592, 12640, 112, 1110, 1103, 1362, 1149, 1104, 1103, 2487, 1105, 1103, 3315, 1434, 1118, 9655, 119, 1135, 112, 188, 1103, 4438, 1106, 4835, 1103, 3507, 1137, 1106, 9353, 24921, 1112, 1126, 6171, 1104, 1185, 19760, 9037, 119, 112, 154, 6592, 12640, 112, 1110, 170, 1642, 1114, 1103, 10275, 1104, 1103, 7127, 117, 1110, 1103, 27836, 1104, 1343, 1150, 1322, 1146, 9207, 2041, 119, 107, 102, 101, 102, 101, 134, 134, 19697, 134, 134, 102, 101, 102, 101, 102, 101, 134, 134, 134, 1953, 1888, 134, 134, 134, 102, 101, 102, 101, 1109, 1390, 1888, 1111, 107, 154, 6592, 12640, 107, 1108, 5819, 1107, 5976, 6554, 117, 7263, 119, 1135, 1108, 2002, 1118, 3274, 12381, 117, 1150, 1145, 2002, 1103, 1390, 1888, 1111, 138, 1197, 19977, 1161, 112, 188, 1763, 1423, 107, 12556, 3174, 2572, 107, 117, 1121, 1117, 1312, 17533, 8005, 119, 1130, 1103, 1888, 117, 138, 1197, 19977, 1161, 1110, 1562, 3179, 1103, 4324, 1104, 1103, 1331, 117, 1112, 1218, 1112, 14210, 1104, 1103, 1331, 112, 188, 2275, 1105, 4204, 2041, 117, 1229, 4241, 1103, 4017, 1104, 1103, 1461, 119, 4112, 3265, 19810, 22429, 1103, 1888, 1112, 170, 107, 23160, 1104, 8468, 18886, 2093, 1105, 20203, 1104, 1103, 11835, 1179, 2483, 1107, 1155, 1157, 6228, 12887, 119, 107, 1249, 1104, 130, 1382, 1368, 117, 1103, 1888, 1144, 1680, 122, 137, 119, 137, 124, 1550, 4696, 1113, 7673, 119, 102, 101, 102, 101, 134, 134, 11444, 7276, 1158, 134, 134, 102, 101, 102, 101, 6082, 9133, 102, 101, 107, 154, 6592, 1424, 107, 782, 125, 131, 1479, 102, 101, 102, 101, 134, 134, 25085, 134, 134, 102, 101, 102, 101, 102, 101, 134, 134, 19821, 134, 134, 102, 101, 102, 101, 5055, 1179, 1121, 1103, 1312, 21757, 119, 102, 101, 102, 101, 134, 134, 17443, 1607, 134, 134, 102, 101, 102, 101], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'special_tokens_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]}.
INFO:__main__:Sample 6556 of the training set: {'input_ids': [101, 134, 134, 14789, 3420, 1988, 113, 10428, 3823, 782, 1765, 3823, 114, 134, 134, 102, 101, 102, 101, 1130, 170, 1965, 1227, 1112, 1103, 14789, 8931, 117, 2264, 17004, 21305, 21915, 2446, 1149, 170, 4644, 1104, 5851, 1104, 1103, 2264, 1764, 119, 1130, 10428, 3823, 117, 1155, 4037, 117, 8334, 1104, 1147, 6968, 1137, 1934, 1705, 117, 1127, 1189, 7408, 1111, 3990, 1154, 1103, 2264, 2306, 119, 1188, 1815, 4698, 3673, 1105, 4803, 170, 18258, 1965, 1115, 1125, 1151, 2898, 1111, 3944, 117, 1104, 9305, 2400, 5420, 1111, 1764, 1555, 119, 1109, 7762, 1206, 1144, 19756, 1182, 117, 185, 4854, 6617, 6633, 1105, 189, 3464, 2047, 1182, 117, 1134, 1125, 1640, 1561, 20611, 117, 1108, 3184, 2856, 117, 1105, 1103, 3420, 1988, 3113, 6404, 1104, 1927, 10405, 1108, 1687, 119, 8890, 3113, 6404, 1824, 170, 16358, 28008, 2049, 1104, 2302, 6404, 119, 1636, 3420, 1988, 5927, 1127, 3795, 1121, 7888, 4482, 132, 1118, 1142, 1159, 117, 2264, 1137, 2911, 9709, 1125, 1151, 2918, 1193, 3631, 1166, 1277, 1104, 2890, 2413, 1105, 140, 15630, 1233, 18351, 144, 18318, 119, 3935, 1200, 7888, 6404, 117, 1216, 1112, 1103, 1396, 21998, 1116, 1105, 174, 18276, 3052, 117, 1127, 2125, 1118, 1664, 137, 118, 137, 7888, 24544, 26502, 1115, 1180, 8296, 1104, 2880, 22222, 119, 4187, 1106, 1103, 6256, 1104, 1103, 7888, 3420, 5266, 1154, 170, 2049, 1104, 2302, 6404, 3352, 112, 188, 9099, 18520, 1113, 13817, 7915, 15708, 1116, 1111, 1619, 119, 1249, 170, 12394, 14176, 117, 3420, 5266, 1127, 1593, 1579, 4977, 1118, 1126, 4463, 1137, 3407, 1295, 1104, 9310, 13817, 2830, 117, 1134, 1127, 3795, 1121, 1103, 1664, 137, 118, 137, 4037, 1104, 1103, 2813, 112, 188, 6835, 119, 1448, 1227, 5856, 1104, 3420, 5266, 1217, 1824, 1121, 1664, 137, 118, 137, 7888, 7112, 1219, 1142, 1669, 1108, 1103, 3420, 1988, 1115, 1108, 2120, 1107, 1103, 3199, 1104, 26376, 10691, 119, 102, 101, 1258, 21915, 117, 1103, 3420, 5266, 1127, 3795, 3494, 1121, 8676, 4037, 1897, 1190, 4037, 14255, 1116, 13590, 1174, 1111, 4019, 119, 19340, 1338, 1977, 1105, 1127, 3134, 1136, 1121, 4037, 1104, 1103, 1331, 1104, 3352, 2111, 1133, 1121, 1103, 3376, 11408, 1105, 2964, 4281, 4058, 1223, 2264, 1654, 119, 21614, 1199, 1263, 137, 118, 137, 1858, 1764, 8799, 1127, 1705, 1174, 1112, 11461, 117, 1152, 1127, 25346, 1118, 9112, 1114, 2609, 1764, 2541, 1150, 1127, 1107, 2327, 1555, 3229, 1178, 1111, 170, 1374, 7827, 119, 1109, 3420, 5266, 1104, 1103, 1523, 2250, 1915, 117, 6199, 1103, 3420, 5266, 1104, 1103, 1224, 2813, 117, 8941, 2264, 1107, 4247, 117, 1780, 1199, 1353, 1295, 1104, 4252, 137, 118, 137, 13817, 2830, 1127, 1930, 4572, 119, 1109, 2306, 112, 188, 2299, 137, 118, 137, 1634, 3099, 1105, 12392, 1127, 1253, 3795, 7097, 1121, 1103, 2264, 26076, 119, 102, 101, 5472, 2206, 1107, 1103, 2250, 117, 3420, 1988, 5927, 1127, 1185, 2039, 2935, 1113, 170, 13286, 3142, 1106, 3244, 1147, 1657, 119, 3743, 117, 1152, 1460, 2530, 2653, 117, 1105, 1127, 4071, 1118, 1103, 1352, 1113, 170, 4275, 137, 118, 137, 1858, 3142, 119, 1249, 170, 9547, 117, 1764, 4019, 1310, 1106], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'special_tokens_mask': [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
https://huggingface.co/distilbert-base-cased/resolve/main/tf_model.h5 not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp_n559bms
Downloading: 100% 338M/338M [00:12<00:00, 29.1MB/s]
storing https://huggingface.co/distilbert-base-cased/resolve/main/tf_model.h5 in cache at /root/.cache/huggingface/transformers/fe773335fbb46b412a9093627b6c3235a69c55bad3bd1deee40813cd0a8d0a82.33c483181ffc4c7cbdd0b733245bcc9b479f14f3b2e892f635fe03f4f3a41495.h5
creating metadata file for /root/.cache/huggingface/transformers/fe773335fbb46b412a9093627b6c3235a69c55bad3bd1deee40813cd0a8d0a82.33c483181ffc4c7cbdd0b733245bcc9b479f14f3b2e892f635fe03f4f3a41495.h5
loading weights file https://huggingface.co/distilbert-base-cased/resolve/main/tf_model.h5 from cache at /root/.cache/huggingface/transformers/fe773335fbb46b412a9093627b6c3235a69c55bad3bd1deee40813cd0a8d0a82.33c483181ffc4c7cbdd0b733245bcc9b479f14f3b2e892f635fe03f4f3a41495.h5
2022-03-31 04:04:45.813436: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
Some layers from the model checkpoint at distilbert-base-cased were not used when initializing TFDistilBertForMaskedLM: ['activation_13']
- This IS expected if you are initializing TFDistilBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFDistilBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFDistilBertForMaskedLM were initialized from the model checkpoint at distilbert-base-cased.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertForMaskedLM for predictions without further training.
No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! Please ensure your labels are passed as keys in the input dict so that they are accessible to the model during the forward pass. To disable this behaviour, please pass a loss argument, or explicitly pass loss=None if you do not want your model to compute a loss.
INFO:__main__:***** Running training *****
INFO:__main__: Num examples = 223820
INFO:__main__: Num Epochs = 3.0
INFO:__main__: Instantaneous batch size per device = 8
INFO:__main__: Total train batch size = 64
Epoch 1/3
Traceback (most recent call last):
File "run_mlm.py", line 562, in <module>
main()
File "run_mlm.py", line 537, in main
callbacks=[SavePretrainedCallback(output_dir=training_args.output_dir)],
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1191, in _numpy
raise core._status_to_exception(e) from None # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InternalError: 6 root error(s) found.
(0) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1648699546.241659886","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3124,"referenced_errors":[{"created":"@1648699546.241659165","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":163,"grpc_status":14}]}
[[{{node StatefulPartitionedCall}}]]
[[MultiDeviceIteratorGetNextFromShard]]
Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[strided_slice_9/_270]]
(1) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1648699546.241659886","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3124,"referenced_errors":[{"created":"@1648699546.241659165","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":163,"grpc_status":14}]}
[[{{node StatefulPartitionedCall}}]]
[[MultiDeviceIteratorGetNextFromShard]]
Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[strided_slice_24/_294]]
(2) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1648699546.241659886","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3124,"referenced_errors":[{"created":"@1648699546.241659165","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":163,"grpc_status":14}]}
[[{{node StatefulPartitionedCall}}]]
[[MultiDeviceIteratorGetNextFromShard]]
Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[cond/pivot_t/_4/_91]]
(3) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1648699546.241659886","description":"F ... [truncated]
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/context.py", line 2611, in async_wait
context().sync_executors()
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/context.py", line 694, in sync_executors
pywrap_tfe.TFE_ContextSyncExecutors(self._context_handle)
tensorflow.python.framework.errors_impl.InternalError: 6 root error(s) found.
(0) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1648699546.241659886","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3124,"referenced_errors":[{"created":"@1648699546.241659165","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":163,"grpc_status":14}]}
[[{{node StatefulPartitionedCall}}]]
[[MultiDeviceIteratorGetNextFromShard]]
Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[strided_slice_9/_270]]
(1) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1648699546.241659886","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3124,"referenced_errors":[{"created":"@1648699546.241659165","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":163,"grpc_status":14}]}
[[{{node StatefulPartitionedCall}}]]
[[MultiDeviceIteratorGetNextFromShard]]
Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[strided_slice_24/_294]]
(2) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1648699546.241659886","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3124,"referenced_errors":[{"created":"@1648699546.241659165","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":163,"grpc_status":14}]}
[[{{node StatefulPartitionedCall}}]]
[[MultiDeviceIteratorGetNextFromShard]]
Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[cond/pivot_t/_4/_91]]
(3) INTERNAL: {{function_node __inference_train_function_33804}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1648699546.241659886","description":"F ... [truncated]
2022-03-31 04:05:46.781395: W ./tensorflow/core/distributed_runtime/eager/destroy_tensor_handle_node.h:57] Ignoring an error encountered when deleting remote tensors handles: INVALID_ARGUMENT: Unable to find the relevant tensor remote_handle: Op ID: 16279, Output num: 0
Additional GRPC error information from remote target /job:worker/replica:0/task:0:
:{"created":"@1648699546.777954132","description":"Error received from peer ipv4:10.44.170.98:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Unable to find the relevant tensor remote_handle: Op ID: 16279, Output num: 0","grpc_status":3}
```
## Expected behavior
Model could start training.
| 03-31-2022 06:20:13 | 03-31-2022 06:20:13 | Hi, we've noted this issue! We'll aim to investigate this next week - making sure we have stable and well-tested TPU support for TF, including for `generate()`, is on our near-term roadmap. I'll keep you updated!<|||||>Hi @ryanzlu . We've investigated the issue. This is caused by our `to_tf_dataset()` method streaming data from the underlying dataset in a way that does not work when connecting to remote TPU instances (which is what happens in Colab). We've tested and the script you're using works fine on a "TPU VM" instance on GCS, which allows a direct connection to a TPU.
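In case it helps to see what "streaming" means here, a rough workaround sketch (my own idea rather than an officially supported path; `train_dataset`, the batch size, and skipping the on-the-fly MLM masking collator are all assumptions) is to materialize the tokenized split in memory and build the `tf.data.Dataset` by hand, so nothing has to be streamed back from the Arrow dataset at train time:
```python
import tensorflow as tf

# Pull the padded/grouped features into host memory as NumPy arrays.
features = train_dataset.with_format("numpy")[:]

# Build an in-memory input pipeline instead of calling dataset.to_tf_dataset(...).
tf_train = (
    tf.data.Dataset.from_tensor_slices(dict(features))
    .shuffle(len(train_dataset))
    .batch(64, drop_remainder=True)
    .prefetch(tf.data.AUTOTUNE)
)
# tf_train can then be handed to model.fit(...) inside the TPU strategy scope as usual.
```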
Our expectation (or at least our hope) is that Google will move to TPU VMs as the main method for accessing TPUs, as the remote TPU node approach is very restrictive. As such, we're probably not going to implement specific methods for remote TPU nodes, as they may become obsolete. We may look into changing our example/notebook scripts to avoid errors on Colab, though, but in the meantime you could try either using a Colab GPU instance or a paid TPU VM instance on GCS.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,508 | closed | [Typo][Example] Fixed a typo in `run_qa_no_trainer.py` | # What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [discussion PR link](https://github.com/huggingface/transformers/pull/11510#issuecomment-1083227744)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @stas00 | 03-31-2022 05:01:21 | 03-31-2022 05:01:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,507 | closed | Add Doc Test GPT-J | # What does this PR do?
Fixes the broken doc tests for GPT-J
Part of the documentation sprint work.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github [issue] (https://github.com/huggingface/transformers/issues/16292)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ydshieh
@sgugger
| 03-31-2022 03:20:30 | 03-31-2022 03:20:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ArEnSc
Thank you for working on GPT-J.
Maybe you can just put any expected value as empty string. Once other parts are done,
please ping me and I will try to get and put the expected values :)<|||||>> @ArEnSc
>
> Thank you for working on GPT-J. Maybe you can just put any expected value as empty string. Once other parts are done, please ping me and I will try to get and put the expected values :)
yep please do and let me know and we can close this =)<|||||>Hi, @ArEnSc
After discussing with the team, we found that there are no real checkpoints for a finetuned GPT-J model on downstream tasks. (There is one for text seq. classification, but it is for Korean.)
If you still want to work on this GPT-J doctest, the best we could do for now is to use a tiny model (that is created for testing purpose)
https://huggingface.co/hf-internal-testing/tiny-random-gptj
With this one, there won't be any OOM issue. Let me know if you want to continue, and if so, don't hesitate if you have any issue using this tiny model checkpoint.
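To make that concrete, loading it is quick and cheap, something along these lines (the outputs will of course be meaningless, since the weights are random):
```python
from transformers import AutoTokenizer, GPTJForCausalLM

checkpoint = "hf-internal-testing/tiny-random-gptj"
# Assuming the tiny checkpoint ships its own tokenizer files.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTJForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)
```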
Thanks!<|||||>> Hi, @ArEnSc
>
> After discussing with the team, we found that there is no real checkpoints for finetuned GPT-J model on downstream tasks. (There is one for text seq. classification, but it is for Korean language).
>
> If you still want to work on this GPT-J doctest, the best we could do for now is to use a tiny model (that is created for testing purpose)
>
> https://huggingface.co/hf-internal-testing/tiny-random-gptj
>
> With this one, there won't be any OOM issue. Let me know if you want to continue, and if so, don't hesitate if you have any issue using this tiny model checkpoint.
>
> Thanks!
Will do! I'll do this after work today.<|||||>Hi, @ArEnSc
Thank you for the effort.
I need to investigate first why it is non-deterministic, and see if there is a way to fix this.
We strongly prefer not to disable doctests for some parts in a model.
(Otherwise, our team needs to discuss it to justify it and make the decision.)
I will take a look in this soon!<|||||>Hi, @ArEnSc
I uploaded the checkpoints
```
"ydshieh/tiny-random-gptj-for-sequence-classification"
"ydshieh/tiny-random-gptj-for-question-answering"
```
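In case it helps, the question-answering one could be wired into the doc example roughly like this (a sketch only; the expected values for the doctest would still need to be filled in from a real run):
```python
import torch
from transformers import AutoTokenizer, GPTJForQuestionAnswering

checkpoint = "ydshieh/tiny-random-gptj-for-question-answering"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTJForQuestionAnswering.from_pretrained(checkpoint)

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

answer_start = outputs.start_logits.argmax()
answer_end = outputs.end_logits.argmax()
```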
I tested with `tiny-random-gptj-for-question-answering` and the results are now deterministic. Could you use them please, and let me know if they work well :-)<|||||>@ydshieh I think we should be good to go here let me know =)
edit (# limitations under the License..) <-- ill fix this in a bit. Had to trigger CI some how<|||||>@ArEnSc Would you mind to remove the comments with (very) long error message - just easier to load this page :-) Thanks!<|||||>@ydshieh looks like it good to go!<|||||>@ydshieh I think this is done after CI passes =)<|||||>@ydshieh we are good to merge! =)<|||||>> @ydshieh we are good to merge! =)
Yes! Merged now.
Thanks a lot for working on this doctest for GPT-J, @ArEnSc 🚀 🎉 |
transformers | 16,506 | closed | Type hints added to Speech to Text | @Rocketknight1 Hints of types | 03-31-2022 01:12:23 | 03-31-2022 01:12:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for this! I made one quick change and I'll merge as soon as tests are green. |
transformers | 16,505 | closed | Type hints added for TFMobileBert | @Rocketknight1 hints(specifically of the type variety) added | 03-31-2022 00:06:00 | 03-31-2022 00:06:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,504 | closed | Script to convert huggingface models to deepspeed/megatron checkpoints | Do we have a script to convert the huggingface pretrained models on the server to deepspeed/megatron style model checkpoints? | 03-31-2022 00:00:38 | 03-31-2022 00:00:38 | We don't at the moment. You can probably try to reverse the existing conversion scripts.
@tjruwase (Deepspeed) has been working on doing various conversions between Meg-LM, Meg-DS and HF Transformers:
https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/239
so perhaps it'd fit there the best, but it's slow going, as we keep asking Tunji to work on other things every time we try to go back to working on the checkpoints.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi there.
Just wondering whether there are any updates on the conversion from HF models to DS models.
Thanks! |
transformers | 16,503 | closed | Trainer: Support scheduler that reduce LR on loss plateau. | # 🚀 Feature request
The HuggingFace Trainer currently only supports learning rate schedulers where the learning rate follows a fixed trajectory. Schedulers that adapt the learning rate to the state of the training procedure are (as far as I know) not supported.
As an example, take [`ReduceLROnPlateau`](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html) from `torch.optim.lr_scheduler`. This scheduler reduces the learning rate when a given metric is stuck on a plateau. Its `step` method requires an input: typically the validation metric, which is used to decide whether or not to decrease the learning rate.
This behaviour could be supported by giving the scheduler access to the training/validation metrics. This would most likely involve changing [this piece of code](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/trainer.py#L1468).
I believe that [`TrainerState`](https://huggingface.co/docs/transformers/v4.17.0/en/main_classes/callback#transformers.TrainerState) holds the training/validation metrics in its `log_history` attribute. If the scheduler would have access to `TrainerState`, it would be easy to create a 'stateful' scheduler.
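For what it's worth, this can already be approximated with a callback. A rough sketch (my own workaround, not an official API; the `"eval_loss"` key and the patience value are assumptions):
```python
import torch
from transformers import TrainerCallback

class ReduceLROnPlateauCallback(TrainerCallback):
    def __init__(self, optimizer, patience=2):
        self.scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=patience)

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        # Step the plateau scheduler with the latest validation metric.
        if metrics and "eval_loss" in metrics:
            self.scheduler.step(metrics["eval_loss"])

# trainer.add_callback(ReduceLROnPlateauCallback(optimizer))
```
The catch is that the `Trainer`'s own per-step scheduler still runs in parallel and has to be neutralized (e.g. by passing a constant schedule through the `optimizers` argument), which is exactly the kind of friction native support would remove.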
## Motivation
This change would allow users to easily use more complex learning rate schedulers together with the `Trainer` class.
| 03-30-2022 19:36:05 | 03-30-2022 19:36:05 | Might be of interest to @sgugger <|||||>This is not a high-priority feature on our list, as no Transformers pretraining or fine-tuning is making use of such schedulers in the literature. If you want to make a PR with this, I'm happy to have a look though.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,502 | closed | Batch tensor creation error when finetuning gpt2 | Python: 3.7.6
Transformers: 4.17.0
Datasets: 2.0.0
Tokenizers: 0.11.6
Pytorch: 1.7.0 (no gpu)
OS: Pop!_OS 21.10
Not using GPU
Model: GPT2 for text generation (@patil-suraj, @patrickvonplaten, @LysandreJik, @narsil)
I have the following code for finetuning gpt2:
```python
import pandas as pd
import datasets
from transformers import GPT2Tokenizer, DataCollatorForLanguageModeling, GPT2LMHeadModel, TrainingArguments, Trainer
import numpy as np
ppl_metric = datasets.load_metric('perplexity')
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return ppl_metric.compute(predictions=predictions, references=labels)
sample_set = pd.read_csv('./data.csv', encoding='ISO-8859-1')
sample_ds = datasets.Dataset.from_pandas(sample_set['cleaned_spacy_stopped'].to_frame())
sample_ds = sample_ds.train_test_split(test_size=0.1)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token
def tokenize_data(examples):
return tokenizer([" ".join(x) for x in examples['cleaned_spacy_stopped']], padding=True)
tokenized_ds = sample_ds.map(tokenize_data,
                             batched=True,
                             num_proc=4,
                             remove_columns=sample_ds['train'].column_names)
block_size = 256
def group_texts(examples):
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
lm_dataset = tokenized_ds.map(group_texts,
                              batched=True,
                              num_proc=4,
                              remove_columns=tokenized_ds['train'].column_names)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
model = GPT2LMHeadModel.from_pretrained('gpt2')
training_args = TrainingArguments(
output_dir='./models',
evaluation_strategy='epoch',
report_to='wandb'
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=lm_dataset['train'],
eval_dataset=lm_dataset['test'],
compute_metrics=compute_metrics,
data_collator=data_collator
)
trainer.train()
```
and I get the following error:
```python
/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
FutureWarning,
***** Running training *****
Num examples = 384
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 144
Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
2022-03-24 17:42:58.679514: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2022-03-24 17:42:58.679545: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
wandb: Tracking run with wandb version 0.12.11
wandb: W&B syncing is set to `offline` in this directory.
wandb: Run `wandb online` or set WANDB_MODE=online to enable cloud syncing.
3%|████▎ | 5/144 [00:54<26:12, 11.31s/it]Traceback (most recent call last):
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 708, in convert_to_tensors
tensor = as_tensor(value)
ValueError: expected sequence of length 256 at dim 1 (got 65)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_finetuning.py", line 98, in <module>
mt.time_func(trainer.train, print_str='train.train()')
File "/home/aclifton/gpt2_dm/method_timer.py", line 9, in wrapper_timer
value, str_to_print = func(*args, **kwargs)
File "/home/aclifton/gpt2_dm/method_timer.py", line 26, in time_func
output = f(*args, **kwargs)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1374, in train
for step, inputs in enumerate(epoch_iterator):
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py", line 41, in __call__
return self.torch_call(features)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py", line 729, in torch_call
batch = self.tokenizer.pad(examples, return_tensors="pt", pad_to_multiple_of=self.pad_to_multiple_of)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2862, in pad
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 213, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 725, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
wandb: Waiting for W&B process to finish... (failed 1).
wandb:
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/aclifton/gpt2_dm/wandb/offline-run-20220324_174257-k8vydnze
wandb: Find logs at: ./wandb/offline-run-20220324_174257-k8vydnze/logs
```
I tried adding `truncation=True` and got the same thing. I was also originally following the documentation [here](https://huggingface.co/docs/transformers/tasks/language_modeling) for dynamic padding using the `DataCollatorForLanguageModeling` and get the same error.
Any thoughts about what I might be doing wrong? Thanks in advance! I’d be interested in using dynamic padding if possible.
| 03-30-2022 19:34:06 | 03-30-2022 19:34:06 | Any ideas?<|||||>Cannot tell you the full detail since it's an involved example but it seems the root error is:
```
ValueError: expected sequence of length 256 at dim 1 (got 65)
```
That looks to me as if the `tensor` is a pure python object with lists of different sizes, so you should probably try to investigate in this way.
When the lengths are different they cannot be cast to tensors; you need to pad them with `pad_token`.
There are many ways to do that in the library.
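For instance, one option that mirrors what the official language-modeling examples do (dropping the ragged tail instead of padding it) is to truncate `total_length` in `group_texts` so every block is exactly `block_size` tokens:
```python
def group_texts(examples):
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # Drop the last partial block so every example has exactly block_size tokens.
    total_length = (total_length // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```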
One way to investigate is to put yourself where the error occurs (/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 708, i) and include something like `import ipdb; ipdb.set_trace()` so you can investigate what is going on
May I also suggest using https://discuss.huggingface.co/ for such questions ?
Cheers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,501 | closed | [examples] max samples can't be bigger than the len of dataset | Starting with `datasets==1.18.4` an exception is raised when `ds.select(myrange)` is called and `myrange` includes indices larger than the length of the dataset. This impacts all our examples, e.g.:
```
stderr: Traceback (most recent call last):
stderr: File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/translation/run_translation.py", line 624, in <module>
stderr: main()
stderr: File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/translation/run_translation.py", line 436, in main
stderr: train_dataset = train_dataset.select(range(data_args.max_train_samples))
stderr: File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 486, in wrapper
stderr: out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
stderr: File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/fingerprint.py", line 458, in wrapper
stderr: out = func(self, *args, **kwargs)
stderr: File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2601, in select
stderr: _check_valid_indices_value(int(max(indices)), size=size)
stderr: File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 573, in _check_valid_indices_value
stderr: raise IndexError(
stderr: IndexError: Invalid value 15 in indices iterable. All values must be within range [-11, 10].
```
This PR is trying to fix the issue across all pytorch examples with:
```
find examples -type f -name "*.py" -exec perl -pi -e 's|^(\s+)(eval_dataset = eval_dataset.select.range.data_args.max_eval_samples..)|$1max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples)\n$1eval_dataset = eval_dataset.select(range(max_eval_samples))|' {} \;
find examples -type f -name "*.py" -exec perl -pi -e 's|^(\s+)(eval_examples = eval_examples.select.range.data_args.max_eval_samples..)|$1max_eval_samples = min(len(eval_examples), data_args.max_eval_samples)\n$1eval_examples = eval_examples.select(range(max_eval_samples))|' {} \;
find examples -type f -name "*.py" -exec perl -pi -e 's|^(\s+)(predict_dataset = predict_dataset.select.range.data_args.max_predict_samples..)|$1max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples)\n$1predict_dataset = predict_dataset.select(range(max_predict_samples))|' {} \;
find examples -type f -name "*.py" -exec perl -pi -e 's|^(\s+)(test_dataset = test_dataset.select.range.data_args.max_eval_samples..)|$1max_eval_samples = min(len(test_dataset), data_args.max_eval_samples)\n$1test_dataset = test_dataset.select(range(max_eval_samples))|' {} \;
find examples -type f -name "*.py" -exec perl -pi -e 's|^(\s+)(train_dataset = train_dataset.select.range.data_args.max_train_samples..)|$1max_train_samples = min(len(train_dataset), data_args.max_train_samples)\n$1train_dataset = train_dataset.select(range(max_train_samples))|' {} \;
```
I may have missed some cases, but this should cover most of it.
This PR only adjusts the pytorch examples.
@sgugger | 03-30-2022 18:03:53 | 03-30-2022 18:03:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>sure, replayed for tf/flax as well.<|||||>Very nice of you, thanks a lot!
You caught some research projects in the change, but it's more defensive, so fine to merge. |
transformers | 16,500 | closed | added type hints to xglm pytorch | # What does this PR do?
I added type annotations for xglm (PT) as described in [https://github.com/huggingface/transformers/issues/16059]
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1 @patrickvonplaten @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 03-30-2022 17:40:52 | 03-30-2022 17:40:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,499 | closed | TF: Finalize `unpack_inputs`-related changes | # What does this PR do?
Closes https://github.com/huggingface/transformers/issues/16051
Please read this before diving into the changes :) This PR finalizes the changes related to the `unpack_inputs` decorator and is slightly more complex than the other PRs.
Changes:
1. Removes `**kwargs` from most `call` methods in our TF models:
1. This argument was not documented and, after adding the decorator, not being used;
2. It was used before exclusively as an input to `input_processing`, to handle some special cases (which are now handled inside the decorator);
3. The exception is the `encoder_decoder` models (see below);
4. **Removing it implies that there are no more hidden parameters being passed to the models**. Somewhat expected, a few tests were passing unused parameters, and had to be corrected. I've added comments throughout the PR to elaborate here.
2. Replaces the use of `input_processing` by the decorator in the `encoder_decoder` models:
1. This was not a 1:1 change, like in the other models -- `input_processing` was being used before the `encoder` and the `decoder` were called, which was redundant (the `encoder`/`decoder` now have the decorator, which also calls the function);
2. However, it was also being used for its side effects, i.e. to set some variables (like `use_cache`), which is equivalent to adding the decorator on the `encoder_decoder` model;
    3. Because these `encoder_decoder` models *must* use kwargs, as the `encoder`/`decoder` might have a myriad of arguments, the decorator was updated so as to allow random kwargs on models that expect them. This brings us back to 1. -- no other models have kwargs now.
3. Icing on the cake -- `input_processing` is now only used in the decorator, so I made the function protected :) This means we can start modernizing it without the fear of it being used in other places.
| 03-30-2022 16:48:13 | 03-30-2022 16:48:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,498 | closed | Add ONNX export for BeiT | # What does this PR do?
Based on the fantastic work by @lewtun in https://github.com/huggingface/transformers/pull/15658, I added ONNX export support for the Microsoft BeiT model.
# Usage
```
import requests
import numpy as np
from PIL import Image
from onnxruntime import InferenceSession
from transformers import AutoConfig, AutoFeatureExtractor
# Export BeiT checkpoint with image classification head
model_ckpt = "microsoft/beit-base-patch16-224"
!python -m transformers.onnx --model={model_ckpt} --feature=image-classification onnx/
# Download an image of two cute cats - naturally ;-)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Instantiate config and feature extractor
config = AutoConfig.from_pretrained(model_ckpt)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_ckpt)
inputs = feature_extractor(image, return_tensors="np")
# Create ONNX Runtime session
session = InferenceSession("onnx/model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(["logits"], dict(inputs))
predicted_class_idx = np.argmax(outputs[0])
# Returns Predicted class: Egyptian cat
print("Predicted class:", config.id2label[predicted_class_idx])
```
| 03-30-2022 16:30:32 | 03-30-2022 16:30:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR, super cool! |
transformers | 16,497 | open | [TODO] Investigate equivalence tests | **(add a lot of assignees just to make you informed and kept updated in the future. Don't hesitate to remove yourself if you think it's irrelevant)**
Currently the PT/TF/Flax equivalence tests use `1e-5` as the tolerance for the absolute differences of outputs.
We see that these tests fail with a non-negligible (although not carefully quantified) frequency.
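For context, the comparison these tests perform boils down to something like the following (a simplified sketch of the shared check, not the exact test code):
```python
import numpy as np

def check_equivalence(pt_output, tf_output, tol=1e-5):
    # The test fails when the largest element-wise absolute difference
    # between the PT and TF/Flax outputs exceeds the tolerance.
    diff = np.amax(np.abs(np.asarray(pt_output) - np.asarray(tf_output)))
    assert diff <= tol, f"Difference {diff} > tolerance {tol}"
```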
Create this page to track a list of models to investigate.
- **FlaxWav2Vec2ModelTest** (2.2888184e-05 > 1e-5)
- https://app.circleci.com/pipelines/github/huggingface/transformers/37363/workflows/a4b06424-0ba8-4fbc-9054-6ff52fbf8145/jobs/411654
- **TFGPT2EncoderDecoderModelTest** (0.001009281724691391 > 1e-3)
- https://app.circleci.com/pipelines/github/huggingface/transformers/37358/workflows/43c12161-33d8-4df5-ba3c-3e62a4507ee7/jobs/411579
- This also happens to **TFBERTEncoderDecoderModelTest**
- This is caused by some sequence in a batch getting all 0s as its attention mask (generated by `ids_tensor`); this may happen on both the encoder and the decoder (especially after combining with the causal mask).
- For **TFBERTEncoderDecoderModelTest**, the difference is smaller than *TFGPT2EncoderDecoderModelTest* (by a magnitude of 5x~10x) -> this is because the last hidden states in GPT2 are taken after the layer norm (not the case for BERT).
- If we look at the cross attention diff between PT/TF, it is clear that we have the same issue (both on the order of `1e-3`).
- The encoder attention diff between PT/TF is in the magnitude of `5e-8`: ~~**not very sure why this doesn't get much larger**~~.
- This is because PT/TF (at least in BERT) has different `encoder_extended_attention_mask`: `1e-4` vs `1e-9`.
- **TFViTMAEModelTest** (1.013279e-05 > 1e-5)
- https://app.circleci.com/pipelines/github/huggingface/transformers/37319/workflows/5adfba7a-d12b-4e1e-9a7a-e33c7d5fd6ee/jobs/411002 | 03-30-2022 15:50:36 | 03-30-2022 15:50:36 | Another one to add to this list: `tests/funnel/test_modeling_funnel.py::FunnelModelTest::test_pt_tf_model_equivalence`. I've been getting a failure in this one every other day -- example: https://app.circleci.com/pipelines/github/huggingface/transformers/38007/workflows/2a98b7b1-5ad0-4b80-a702-1887c620193f/jobs/421265<|||||>> Another one to add to this list: `tests/funnel/test_modeling_funnel.py::FunnelModelTest::test_pt_tf_model_equivalence`. I've been getting a failure in this one every other day -- example: https://app.circleci.com/pipelines/github/huggingface/transformers/38007/workflows/2a98b7b1-5ad0-4b80-a702-1887c620193f/jobs/421265
Thanks. @stas00 also reported this. I will take a look~<|||||>> Another one to add to this list: `tests/funnel/test_modeling_funnel.py::FunnelModelTest::test_pt_tf_model_equivalence`. I've been getting a failure in this one every other day -- example: https://app.circleci.com/pipelines/github/huggingface/transformers/38007/workflows/2a98b7b1-5ad0-4b80-a702-1887c620193f/jobs/421265
(just for the record) Among `500` runs:
- 34 runs have `FunnelForMaskedLM.output.logits` at around `1e-5` ~ `2e-5`: so ~ `6.8%` chance of failure 😢
- 66 runs at around `9e-6`
- 38 runs at around `8e-6`
(so > 25% to get close to `1e-5`)<|||||>@ydshieh I believe you can add the `WIP` label to stop the bot :)<|||||>>
I am afraid I will completely forget this issue. But if this bothers you guys, OK for me. Thanks for the tip, I didn't know about it. |
transformers | 16,496 | closed | Support reduce_bucket_size="auto" for deepspeed stages <3 | # What does this PR do?
Fixes #16495
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00
| 03-30-2022 15:45:57 | 03-30-2022 15:45:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,495 | closed | `Trainer` does not support deepspeed `reduce_bucket_size="auto"` for DS levels <3 | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.6
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
This issue is about the `deepspeed` integration, so @stas00 is the most suitable person.
## Information
DeepSpeed allows users to configure the `reduce_bucket_size` (see https://www.deepspeed.ai/docs/config-json/). `transformers` supports setting `reduce_bucket_size="auto"` for DeepSpeed level 3 (see the many examples at https://huggingface.co/docs/transformers/main/en/main_classes/deepspeed#zero3-config).
However, there is no reason why `transformers` would support `"auto"` only for level 3. At the moment, setting `reduce_bucket_size="auto"` in a level 2 (or lower) DS configuration leads to a RuntimeError as `deepspeed` expects integers.
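For illustration, a minimal stage-2 configuration fragment of the kind described above (everything other than `reduce_bucket_size` is a placeholder; this is a sketch, not a recommended config):
```python
# Hypothetical stage-2 DeepSpeed config fragment; only "reduce_bucket_size": "auto" matters here.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "zero_optimization": {
        "stage": 2,
        "reduce_bucket_size": "auto",  # currently only filled in by the Trainer for stage 3
    },
}
# The dict (or an equivalent JSON file) can then be passed via TrainingArguments(deepspeed=ds_config, ...)
```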
## To reproduce
Steps to reproduce the behavior:
1. Create a `deepspeed` configuration with `"zero_optimization"`'s `"stage"` parameter set to less than `3` and `reduce_bucket_size="auto"`
2. Runs using the `Trainer` class do not replace the `"auto"` value and `deepspeed` throws an error.
## Expected behavior
Support `"auto"` for all `deepspeed` levels.
<!-- A clear and concise description of what you would expect to happen. -->
| 03-30-2022 15:42:04 | 03-30-2022 15:42:04 | |
transformers | 16,494 | closed | add code samples for TF speech models | # What does this PR do?
Add:
- TF_SPEECH_BASE_MODEL_SAMPLE
- TF_SPEECH_CTC_SAMPLE
I tested them manually with TFWav2Vec2 (not through doctest).
## Remark:
The following samples are not translated to TF, since I couldn't find models like `TFWav2Vec2ForSequenceClassification`.
- PT_SPEECH_SEQ_CLASS_SAMPLE,
- PT_SPEECH_FRAME_CLASS_SAMPLE,
- PT_SPEECH_XVECTOR_SAMPLE, | 03-30-2022 15:16:50 | 03-30-2022 15:16:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,493 | closed | Add length to PreTrainedTokenizer train_new_from_iterator | # What does this PR do?
Adds a `length` argument so that `PreTrainedTokenizerFast.train_new_from_iterator` can report better progress when training from an iterator.
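A rough usage sketch, assuming the new `length` argument (the corpus and vocab size below are placeholders):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")  # placeholder corpus
old_tokenizer = AutoTokenizer.from_pretrained("gpt2")

def batch_iterator(batch_size=1000):
    for i in range(0, len(dataset), batch_size):
        yield dataset[i : i + batch_size]["text"]

# `length` is assumed to be the total number of sequences, so the underlying
# tokenizers trainer can show a meaningful progress bar.
new_tokenizer = old_tokenizer.train_new_from_iterator(
    batch_iterator(), vocab_size=52000, length=len(dataset)
)
```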
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. *No*
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger | 03-30-2022 15:16:48 | 03-30-2022 15:16:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,492 | closed | Add support for exporting GPT-J to ONNX-TRT | This change enables exporting GPT-J to ONNX without any `Sequence*` ops, which are currently unsupported in the TensorRT ONNX parser. It works by using a sequence of `view`, `repeat`, `view` instead of `repeat_interleave`.
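For illustration, a small self-contained example of the `view`/`repeat`/`view` trick described above (not the actual GPT-J modeling change):
```python
import torch

x = torch.tensor([1, 2, 3])
repeats = 2

a = torch.repeat_interleave(x, repeats)        # may be exported with Sequence* ops (per the description above)
b = x.view(-1, 1).repeat(1, repeats).view(-1)  # same result, ONNX/TRT-friendly

assert torch.equal(a, b)  # tensor([1, 1, 2, 2, 3, 3])
```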
# What does this PR do?
Fixes #15640
## Who can review?
@patil-suraj | 03-30-2022 14:24:20 | 03-30-2022 14:24:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Added a function to handle the repeat, commenting on each step. |
transformers | 16,491 | closed | TF: unpack inputs on Convbert, GPTJ, LED, and templates | # What does this PR do?
This PR implements the `@unpack_inputs` decorator on Convbert, GPTJ, LED, and templates, as per https://github.com/huggingface/transformers/issues/16051.
The diff is big, but the changes are all trivial:
1. Replace `input_processing()` calls by the `@unpack_inputs` decorator
2. Replace `inputs["foo"]` by `foo` (thank god regex exists 🙏 )
3. (SLOW tests were run locally, all passing)
Only one PR remains to close the issue linked above -- implement the change in the encoder_decoder architectures. I've decided to create a separate PR for that one as the changes are not as trivial, and will require a review from a core maintainer. | 03-30-2022 11:01:44 | 03-30-2022 11:01:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,490 | closed | make tuple annotation more specific to avoid failures during symbolic_trace | This PR fixes the failure that appeared during `symbolic_trace` due to the missing type argument on the `Tuple` annotation.
Before this PR, if we symbolically trace the BERT model like:
```
from transformers import BertTokenizer, BertModel
from transformers.utils.fx import symbolic_trace
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
symbolic_trace_model = symbolic_trace(model, input_names=["input_ids", "attention_mask", "token_type_ids"])
```
We would get:
```
exec(compile(src, key, 'exec'), globals)
File "<eval_with_key>.0", line 4
def forward(self, input_ids : typing_Union[torch.Tensor,NoneType], attention_mask : typing_Union[torch.Tensor,NoneType], token_type_ids : typing_Union[torch.Tensor,NoneType]) -> typing_Union[typing_Tuple[],transformers_modeling_outputs_BaseModelOutputWithPoolingAndCrossAttentions]:
^
SyntaxError: invalid syntax
```
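The fix, roughly, is to make the bare `Tuple` in the return annotation concrete, so that the signature torch.fx re-emits stays valid Python (an illustrative sketch, not the exact diff):
```python
from typing import Tuple, Union

import torch
from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions

# before (problematic): a bare `Tuple` gets re-emitted as `typing_Tuple[]`, which is a SyntaxError
# def forward(...) -> Union[Tuple, BaseModelOutputWithPoolingAndCrossAttentions]: ...

# after (sketch): parameterize the tuple type
def forward_sketch() -> Union[Tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]:
    ...
```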
After this pr, we could successfully symbolic trace the bert model. | 03-30-2022 07:30:30 | 03-30-2022 07:30:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@chenbohua3 We're happy to merge this, please let us know whenever you're ready!<|||||>@Rocketknight1 Thank you very much. I'm also happy to merge this pr. Should I do anything else? Since this is my first pr to transformers repo:)<|||||>@chenbohua3 No, this looks good! We just like to check that contributors aren't planning to add anything else before we merge. Thank you for a great PR! |
transformers | 16,489 | closed | segformer variants | Dear @NielsRogge,
Thanks for re-implementing the `segformer`. I have two questions:
1. The default setting is `segformer-b0`. If I want to use `segformer-b3`, I only need to modify the two parameters: `depths` and `hidden_sizes`, right?
2. I want to use the pre-trained model for small images (256x256). How should I use the `SegformerModel`?
Any comments would be highly appreciated:)
Best regards,
Jun
| 03-30-2022 02:28:46 | 03-30-2022 02:28:46 | Hi,
Thanks for your interest in SegFormer! If you want to use the pre-trained model, you can load any checkpoint from the [hub](https://huggingface.co/models?other=segformer), like so:
```
from transformers import SegformerForSemanticSegmentation
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b3", num_labels=10)
```
This will load only the pre-trained encoder (Mix Transformer or mit), b3-sized, with a randomly initialized head.
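For question 1, if you prefer to configure the size explicitly instead of loading a checkpoint, here is a rough sketch (the b3-style `depths`/`hidden_sizes` values are assumptions and should be double-checked against the conversion script linked further down):
```python
from transformers import SegformerConfig, SegformerForSemanticSegmentation

# Randomly initialized, roughly b3-sized model; verify the exact values before relying on them.
config = SegformerConfig(depths=[3, 4, 18, 3], hidden_sizes=[64, 128, 320, 512], num_labels=10)
model = SegformerForSemanticSegmentation(config)
```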
For more details on how to fine-tune SegFormer on your custom data, check out the following resources:
* notebook: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SegFormer/Fine_tune_SegFormer_on_custom_dataset.ipynb
* blog post (with corresponding notebook): https://huggingface.co/blog/fine-tune-segformer. This one leverages the [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer).
Regarding the differences between the sizes, [here](https://github.com/huggingface/transformers/blob/277d49a590b6745ec82460eea3f33a825a89051c/src/transformers/models/segformer/convert_segformer_original_to_pytorch.py#L158-L178) you can see the differences.<|||||>Hi @NielsRogge ,
Thanks for your guidance very much:) |
transformers | 16,488 | closed | TF GPT-J Type hints and TF decorator | @Rocketknight1 | 03-30-2022 00:00:59 | 03-30-2022 00:00:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks great, thank you! |
transformers | 16,487 | closed | Do not initialize `torch.distributed` process group if one is already initialized | # What does this PR do?
If a `torch.distributed` process group is already initialized, `TrainingArguments` will not initialize one by itself. This allows the user to create their own custom process groups if needed.
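Conceptually, the guard looks like the following sketch (illustrative, not the exact Trainer diff):
```python
import torch.distributed as dist

# Only create a default process group if the user has not already done so.
# (Run under torchrun/torch.distributed.launch so the usual env vars are set.)
if not dist.is_initialized():
    dist.init_process_group(backend="nccl")
```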
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-29-2022 21:55:28 | 03-29-2022 21:55:28 | cc @sgugger <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,486 | closed | MarianMT: doubling batch_size has no effect on time taken | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.4.0-97-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): 2.6.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): MarianMTModel, MarianTokenizer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [Y] my own modified scripts: (give details below)
**Problem: Doubling the batch size does not improve the time taken to do backtranslation with MarianMTModel at all.**
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [Y] my own task or dataset: (give details below)
I am trying to do back translation for a given set of tweets (my own collection) using MarianMTModel and MarianTokenizer. With `batch_size=128`, it shows 1 hour at 4 s/it on an A6000 GPU, whereas with `batch_size=256`, it still shows 1 hour at 7.7-8 s/it. FYI, I do see the number of steps halved in the `tqdm` bar.
## To reproduce
Steps to reproduce the behavior:
I am using the below script:
```
import torch
from tqdm import tqdm
from transformers import MarianMTModel, MarianTokenizer

# df1 is assumed to be a pandas DataFrame with a "Tweet" column (loaded elsewhere)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# English to Romance languages
target_model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
target_tokenizer = MarianTokenizer.from_pretrained(target_model_name)
target_model = MarianMTModel.from_pretrained(target_model_name).cuda()
# Romance languages to English
en_model_name = 'Helsinki-NLP/opus-mt-ROMANCE-en'
en_tokenizer = MarianTokenizer.from_pretrained(en_model_name)
en_model = MarianMTModel.from_pretrained(en_model_name).cuda()
def translate(texts, model, tokenizer, language="fr", num_beams=1):
template = lambda text: f"{text}" if language == "en" else f">>{language}<< {text}"
src_texts = [template(text) for text in texts]
encoded = tokenizer.prepare_seq2seq_batch(src_texts, return_tensors='pt').to(device)
translated = model.generate(**encoded, do_sample=True, max_length=256, top_k=0, num_beams=1, temperature=0.7)
translated_texts = tokenizer.batch_decode(translated, skip_special_tokens=True)
return translated_texts
def back_translate(texts, source_lang="en", target_lang="fr", num_beams=1):
fr_texts = translate(texts, target_model, target_tokenizer, language=target_lang, num_beams=num_beams)
back_translated_texts = translate(fr_texts, en_model, en_tokenizer, language=source_lang, num_beams=num_beams)
return back_translated_texts
batch_size = 256 # 128
for i in tqdm(range(0, len(df1), batch_size)):
rows = df1[i:i+batch_size]
aug_text = back_translate(rows['Tweet'].tolist(), source_lang="en", target_lang="fr",num_beams=1)
```
## Expected behavior
With double the batch_size, time should reduce to half.
| 03-29-2022 21:25:32 | 03-29-2022 21:25:32 | @patil-suraj Any updates on this issue?<|||||>Hey @kgarg8,
Why should doubling the batch size reduce the time by half exactly? Time measurements strongly depend on the hardware that is being used. What hardware do you use here?<|||||>Well, my GPU can support double the batch size. So, I can process double the batch size in one go. I am using an A6000 (49GB).<|||||>Doubling the batch size should indeed reduce the time, but it won't be exactly half. The GPU requires more time when processing larger batches.
It's difficult to double check your scripts since `df1` is not defined. Could you maybe make a **short**, **reproducible** code snippet that we could copy-paste into a Python shell to run it?
Thanks!<|||||>**Minimal reproducible code:**
```
import transformers, torch, pandas as pd
from tqdm import tqdm
from transformers import MarianMTModel, MarianTokenizer
batch_size = 256 # 128
filename = 'sample_50k.csv'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
target_model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
target_tokenizer = MarianTokenizer.from_pretrained(target_model_name)
target_model = MarianMTModel.from_pretrained(target_model_name).cuda()
en_model_name = 'Helsinki-NLP/opus-mt-ROMANCE-en'
en_tokenizer = MarianTokenizer.from_pretrained(en_model_name)
en_model = MarianMTModel.from_pretrained(en_model_name).cuda()
def translate(texts, model, tokenizer, language="fr"):
template = lambda text: f"{text}" if language == "en" else f">>{language}<< {text}"
src_texts = [template(text) for text in texts]
encoded = tokenizer.prepare_seq2seq_batch(src_texts, return_tensors='pt').to(device)
translated = model.generate(**encoded, do_sample=True, max_length=256, top_k=0, num_beams=1, temperature=0.7)
translated_texts = tokenizer.batch_decode(translated, skip_special_tokens=True)
return translated_texts
def back_translate(texts, source_lang="en", target_lang="fr"):
fr_texts = translate(texts, target_model, target_tokenizer, language=target_lang)
back_translated_texts = translate(fr_texts, en_model, en_tokenizer, language=source_lang)
return back_translated_texts
df1 = pd.read_csv(filename, usecols=[0, 1], encoding='ISO-8859-1')
df1.columns = ['Tweet', 'Target']
for i in tqdm(range(0, len(df1), batch_size)):
rows = df1[i:i+batch_size]
aug_text = back_translate(rows['Tweet'].tolist(), source_lang="en", target_lang="fr")
```
**Data:**
Download `sample_50k.csv` from [temp link](https://www.dropbox.com/t/OFlemA3vZHhO9vJl).
**Observations:**
`batch_size=256`

`batch_size=128`

So, with double the batch size, each iteration becomes much slower (compare 4.37 s/it vs. 7.18 s/it) and the overall time is almost the same (24 mins vs. 28 mins).
With smaller batch sizes, by contrast, doubling the batch size saves around one-third of the time (90 secs vs. 130 secs).
`batch_size=32`

`batch_size=16`

In my use case, I have more than a million samples. With higher batch sizes like 256-1024, I practically don't save much time compared to a batch_size of 128. I guess this tradeoff has to be judged based on multiple trials, and I am wondering if this applies to all Hugging Face models.<|||||>From the code snippet, it looks like the GPU is under-utilised here, because while the text is being encoded, the GPU is just sitting idle. Maybe try to use a `Dataset`/`DataLoader` here to prepare the batches in the background while the model is translating other batches, so you can take advantage of async execution.
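A rough sketch of that suggestion (the model name matches the snippet above; `texts` and `num_workers` are placeholders):
```python
from torch.utils.data import DataLoader
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).cuda()

texts = [">>fr<< Hello, how are you?"] * 1024  # placeholder data

def collate(batch):
    # tokenization happens in background workers while the GPU generates
    return tokenizer(batch, return_tensors="pt", padding=True, truncation=True)

loader = DataLoader(texts, batch_size=128, collate_fn=collate, num_workers=2)
for enc in loader:
    enc = {k: v.cuda() for k, v in enc.items()}
    translated = model.generate(**enc, max_length=256)
```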
You could also try `pipeline` batching: https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,485 | closed | Add global_attention_mask to gen_kwargs in Seq2SeqTrainer.prediction_step | # What does this PR do?
Certain Seq2Seq models (e.g. [LED](https://huggingface.co/allenai/led-base-16384)-based models such as [PRIMERA](https://huggingface.co/allenai/PRIMERA)) need to pass the `global_attention_mask` to `model.generate()` so that global attention is computed for particular tokens when decoding. This does not currently happen in `Seq2SeqTrainer`, but can easily be added by looking for `global_attention_mask` in the provided inputs and adding it to `gen_kwargs`, much the same way as the regular `attention_mask` is currently handled. This PR does exactly that.
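For context, a minimal sketch of what passing the mask to `generate()` looks like for LED (checkpoint from the link above; the input text is a placeholder):
```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("A long document ...", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the first token

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=64,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```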
__Other changes__
- Fixed a small typo in one of the comments in `transformers/src/transformers/trainer_seq2seq.py`.
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @patrickvonplaten
| 03-29-2022 20:34:37 | 03-29-2022 20:34:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Not expert enough in `generate` to review this and @patrickvonplaten is on vacation, so waiting for @patil-suraj review :-)
Thanks a lot for your PR!<|||||>Hi @JohnGiorgi and @patrickvonplaten,
When using `model.generate(...)`, the model still doesn't receive the `global_attention_mask` for me. I think it would also be appropriate to change the `LEDForConditionalGeneration.prepare_inputs_for_generation(...)` method ([here](https://github.com/huggingface/transformers/blob/ae6a7a763be45d5b4fadaf32c1e09f3bb03408b5/src/transformers/models/led/modeling_led.py#L2419)) by adding support for the `global_attention_mask`, with something like:
```
def prepare_inputs_for_generation(
self,
decoder_input_ids,
past=None,
attention_mask=None,
global_attention_mask=None, ### ADDED ###
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
use_cache=None,
encoder_outputs=None,
**kwargs,
):
# cut decoder_input_ids if past is used
if past is not None:
decoder_input_ids = decoder_input_ids[:, -1:]
return {
"input_ids": None, # encoder_outputs is defined. input_ids not needed
"encoder_outputs": encoder_outputs,
"past_key_values": past,
"decoder_input_ids": decoder_input_ids,
"attention_mask": attention_mask,
"global_attention_mask": global_attention_mask, ### ADDED ###
"head_mask": head_mask,
"decoder_head_mask": decoder_head_mask,
"cross_attn_head_mask": cross_attn_head_mask,
"use_cache": use_cache, # change this to avoid caching (presumably for debugging)
}
```
Just in case this is correct, should I open a new pull request for this? Thanks
<|||||>Good point @caesar-one ! Yes, it would be nice if you could open a new PR for this |
transformers | 16,484 | closed | Cannot save TFDebertaV2ForSequenceClassification as SavedModel via saved_model | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@LysandreJik
Models:
- DeBERTa-v2:
## Information
Model I am using (Bert, XLNet ...): `kamalkraj/deberta-v2-xlarge`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load model via `TFDebertaV2ForSequenceClassification`
2. Use `saved_model=True` to save as TensorFlow SavedModel
Reference: https://huggingface.co/docs/transformers/model_doc/deberta-v2#transformers.TFDebertaV2ForSequenceClassification
```python
from transformers import DebertaV2Tokenizer, TFDebertaV2ForSequenceClassification
import tensorflow as tf
tokenizer = DebertaV2Tokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
model = TFDebertaV2ForSequenceClassification.from_pretrained("kamalkraj/deberta-v2-xlarge")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
outputs = model(inputs)
loss = outputs.loss
logits = outputs.logits
model.save_pretrained("kamalkraj/deberta-v2-xlarge", saved_model=True)
````
```sh
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError Traceback (most recent call last)
[<ipython-input-4-7b1af514d387>](https://localhost:8080/#) in <module>()
----> 1 model.save_pretrained("kamalkraj/deberta-v2-xlarge", saved_model=True)
3 frames
[/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py](https://localhost:8080/#) in save_pretrained(self, save_directory, saved_model, version, push_to_hub, **kwargs)
1375 if saved_model:
1376 saved_model_dir = os.path.join(save_directory, "saved_model", str(version))
-> 1377 self.save(saved_model_dir, include_optimizer=False, signatures=self.serving)
1378 logger.info(f"Saved model created in {saved_model_dir}")
1379
[/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
[/usr/lib/python3.7/contextlib.py](https://localhost:8080/#) in __exit__(self, type, value, traceback)
117 if type is None:
118 try:
--> 119 next(self.gen)
120 except StopIteration:
121 return False
[/usr/local/lib/python3.7/dist-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py](https://localhost:8080/#) in call(self, inputs, training)
141
142 def call(self, inputs: tf.Tensor, training: tf.Tensor = False):
--> 143 if training and self.drop_prob > 0:
144 return TFDebertaV2XDropout(inputs, self.drop_prob)
145 return inputs
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It is expected that `TFDebertaV2ForSequenceClassification` models can be saved as a TensorFlow SavedModel, similarly to `TFDebertaV2Model` models. | 03-29-2022 19:47:20 | 03-29-2022 19:47:20 | I've reproduced this issue - will discuss with the team what we can do to generally support SavedModel saving.<|||||>Hi @maziyarpanahi ! I've talked this over with the team and although we offer `SavedModel` support for saving, it doesn't work with all models and we're not sure how possible it'll be to update all of them in the near future.
Can we ask what your use case for `SavedModel` is, compared to just `save_pretrained` or `save_weights`? There may be another approach.<|||||>Hi @Rocketknight1
The use case is to serve the fine-tuned (or an already uploaded) model in TensorFlow. The SavedModel format is the only way to avoid going from PyTorch to onnx-tf and then to TensorFlow.
There are some architectures that don't have any TF support, which I understand, and for those I normally either wait or go through ONNX to TF. However, DebertaV2 supports `saved_model` for the fill-mask and ForTokenClassification models already. So I really thought this could be a bug if it only fails in DebertaV2ForSequenceClassification.<|||||>After some investigation, the cause is the different `Dropout` being used. In the `TokenClassification` model, standard Keras `Dropout` is used. In the `SequenceClassification` model, `StableDropout` is used. This change is present in the original PyTorch models too, although I'm not sure why.
I don't think this is a bug with an easy fix, unfortunately - I'm not the model author so I don't want to change the Dropout type. However, you could probably make a local fork of `transformers` and swap the `StableDropout` for `Dropout`, which would allow you to save the model as `SavedModel`. I'll talk to the other team members and see what they think!<|||||>Thanks @Rocketknight1
This is a great help! I will make that change and try to fine-tune a base model on IMDB to see whether I can save it as a SavedModel and also share the stats just in case for quality control. <|||||>Hi @Rocketknight1
For future reference, I have replaced StableDropout with Dropout, and the issue with saving as a SavedModel was resolved. Also, evaluation of 3-4 models trained on IMDB showed no difference between StableDropout and Dropout, so there is no performance tradeoff.
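For reference, a rough sketch of the swap (done here via a monkey-patch rather than a full fork; the class name is an assumption to verify against `modeling_tf_deberta_v2.py`, and the checkpoint name is taken from the snippet above):
```python
import tensorflow as tf
from transformers import TFDebertaV2ForSequenceClassification
from transformers.models.deberta_v2 import modeling_tf_deberta_v2 as tf_deberta_v2

# Swap the stable dropout for standard Keras Dropout before building the model
# (class name is an assumption; double-check it in modeling_tf_deberta_v2.py).
tf_deberta_v2.TFDebertaV2StableDropout = tf.keras.layers.Dropout

model = TFDebertaV2ForSequenceClassification.from_pretrained("kamalkraj/deberta-v2-xlarge")
model.save_pretrained("deberta-v2-xlarge-seqcls", saved_model=True)
```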
I can prepare a PR if you have decided to use Keras Dropout inside `TFDebertaV2ForSequenceClassification`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,483 | closed | Raise diff tolerance value for TFViTMAEModelTest | # What does this PR do?
For now, change the tol from `1e-5` to `2e-5` in `TFViTMAEModelTest.test_pt_tf_model_equivalence` to avoid being flaky. | 03-29-2022 19:26:31 | 03-29-2022 19:26:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,482 | closed | Training MarianTokenizer with sentencepiece | I tried to run:
```
import sentencepiece as spm
import io
from transformers import MarianTokenizer
import json
model_en = io.BytesIO()
spm.SentencePieceTrainer.train(input='texts_en.txt',
vocab_size=125036,
max_sentence_length=512,
model_writer=model_en)
with open('target.spm', 'w') as fp:
json.dump(model_en.getvalue(), fp)
MarianTokenizer(vocab=125036,
source_spm="target.spm",
target_spm="target.spm") # for example
```
But got error:
```
/opt/conda/lib/python3.7/site-packages/transformers/models/marian/tokenization_marian.py in __init__(self, vocab, source_spm, target_spm, source_lang, target_lang, unk_token, eos_token, pad_token, model_max_length, sp_model_kwargs, **kwargs)
154 )
155 assert Path(source_spm).exists(), f"cannot find spm source {source_spm}"
--> 156 self.encoder = load_json(vocab)
157 if self.unk_token not in self.encoder:
158 raise KeyError("<unk> token must be in vocab")
/opt/conda/lib/python3.7/site-packages/transformers/models/marian/tokenization_marian.py in load_json(path)
365
366 def load_json(path: str) -> Union[Dict, List]:
--> 367 with open(path, "r") as f:
368 return json.load(f)
OSError: [Errno 9] Bad file descriptor
```
transformers version - 4.16.2
kaggle cpu env
Can someone help me? | 03-29-2022 19:20:19 | 03-29-2022 19:20:19 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Lednik7 Have you resolved this problem?
I wanted to train a new Marian tokenizer for a new language and am having lots of problems creating the tokenizer.
Can you please share the script you used to create the tokenizer, or explain how the source and target spm files, and also the vocab files, can be created? |
transformers | 16,481 | closed | Nit: MCSCOCO -> MSCOCO | # What does this PR do?
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-29-2022 19:09:47 | 03-29-2022 19:09:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,480 | closed | run_clm.py crashes server. | I was running the run_clm.py script with deepspeed for finetuning gpt2-small on 300 GB of data. I had provided 5 GPUs for the process, but as soon as the preprocessing of the data was completed, the server crashed and I had to get it restarted. Can you point me to where we need to make changes to the script to support finetuning with such large datasets? | 03-29-2022 17:37:54 | 03-29-2022 17:37:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,479 | closed | Embedding size mismatch when hyperparameter search | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux
- Python version: 3.9.4
- PyTorch version (GPU?): 1.10.2
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
- Model: @LysandreJik
- Trainer: @sgugger
- Ray/raytune: @richardliaw, @amogkam, @suquark
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
My task is relation classification, and I referred to these scripts:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py
https://github.com/ray-project/ray/blob/65d72dbd9148b725761f733559e3c5c72f15da9a/python/ray/tune/examples/pbt_transformers/pbt_transformers.py#L12
## To reproduce
Steps to reproduce the behavior:
1. Load a pre-trained model.
2. Add custom (special) tokens for the task.
3. Optimize the model hyper-parameters using the Ray tune method.
4. Embedding size mismatch occurs as follows.
I've added two special tokens (e.g., [e], [/e]), and I got this error.
```diff
- RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
- size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 768]) from checkpoint,
- the shape in current model is torch.Size([30524, 768]).
```
```python
tokenizer = AutoTokenizer.from_pretrained(
tokenizer_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=True,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
do_lower_case=do_lower_case,
)
# Add the special tokens. E.g., [e], [/e]
special_tokens = list(map(lambda x: x.lower(), dataset_special_tokens[dataset_name]))
tokenizer.add_tokens(special_tokens)
def get_model():
model = BertForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
# this option ignores the size mismatch, but the model performance significantly dropped!!
#ignore_mismatched_sizes=True,
)
# Resize input token embeddings matrix of the model since new tokens have been added.
# this funct is used if the number of tokens in tokenizer is different from config.vocab_size.
model.resize_token_embeddings(len(tokenizer))
return model
# Initialize the Trainer
trainer = Trainer(
model_init=get_model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
pbt_scheduler = PopulationBasedTraining(
metric="eval_f1",
mode="max",
hyperparam_mutations={
"weight_decay": [0.0, 0.01],
"warmup_ratio": [0.0, 0.1],
"learning_rate": [1e-5, 2e-5, 3e-5, 4e-5, 5e-5],
"per_device_train_batch_size": [8, 16],
"per_device_eval_batch_size": [8, 16],
"seed": tune.uniform(1,20000),
"num_train_epochs": tune.choice([2, 5, 10]),
}
)
tune_config = {
"per_device_train_batch_size": 32,
"per_device_eval_batch_size": 32,
"num_train_epochs": tune.choice([2, 3, 4, 5]),
}
def compute_objective(metrics):
return metrics["eval_f1"]
trainer.hyperparameter_search(
hp_space=lambda _: tune_config,
compute_objective=compute_objective,
direction="maximize",
backend="ray",
n_trials=10,
scheduler=pbt_scheduler,
keep_checkpoints_num=1,
checkpoint_score_attr="training_iteration",
resources_per_trial={"cpu": 40, "gpu": 2},
)
```
## Expected behavior
I think the error occurs due to the tokens newly added to the model. Although I resized the model, the issue hasn't been resolved. When I tried the following option, the error doesn't occur, but the model's performance significantly dropped.
```python
BertForSequenceClassification.from_pretrained(
ignore_mismatched_sizes=True,
...
```
When I initialize the trainer with `model=model`, it works fine. But when the trainer is initialized with `model_init=get_model`, which is required for hyperparameter search, the problem occurs. Can anyone help with this issue?
<!-- A clear and concise description of what you would expect to happen. -->
| 03-29-2022 16:17:18 | 03-29-2022 16:17:18 | You can't use a pretrained model with a different vocab size without the `ignore_mismatched_sizes=True` option, as the weight shapes don't match. If you remove the line `tokenizer=tokenizer`, you should be able to load the pretrained model, then resize its embeddings for your added tokens.
But in general, it's best not to add tokens if you want to use a pretrained model.<|||||>Hi @sgugger,
Thanks for the comments. You mean the `tokenizer=tokenizer` in the Trainer, right? I put `tokenizer=tokenizer` in `BertForSequenceClassification.from_pretrained` as a kwarg just for debugging purposes. I removed it in the code above to avoid confusion. I removed the line `tokenizer=tokenizer` in the Trainer, but I still can't load the pretrained model. The error comes from the function _load_state_dict_into_model() saying "size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 768]) from checkpoint, the shape in current model is torch.Size([30524, 768])". I tried to add new tokens and resize the model after Trainer initialization as follows, but I still got the error. To resolve the error, when/where should I add new tokens and resize the model?
```python
tokenizer = AutoTokenizer.from_pretrained(
tokenizer_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=True,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
do_lower_case=do_lower_case,
)
def get_model():
model = BertForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
# this option ignores the size mismatch, but the model performance significantly dropped!!
#ignore_mismatched_sizes=True,
)
return model
# Initialize the Trainer
trainer = Trainer(
model_init=get_model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
# Add the special tokens. E.g., [e], [/e]
special_tokens = list(map(lambda x: x.lower(), dataset_special_tokens[dataset_name]))
trainer.tokenizer.add_tokens(special_tokens)
# Resize input token embeddings matrix of the model since new tokens have been added.
# this funct is used if the number of tokens in tokenizer is different from config.vocab_size.
trainer.model.resize_token_embeddings(len(trainer.tokenizer))
```
<|||||>This means the config you are passing does not have the same vocab size as the pretrained model you are trying to load (you did not provide it). You should leave the vocab size of the config at the default value of the checkpoint and only resize the model token embeddings once you have loaded the model properly.<|||||>Hi @sgugger, many thanks for your help! Yes, the vocab size in the config file was the cause. When the trainer reinitializes the model, the embedding size mismatch occurs because the vocab size in the config of the current model is different from that of the pretrained model, since new tokens have been added. So, I changed the code to reload the pretrained model's config prior to loading the model, like this.
```python
def get_model():
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
num_labels=num_labels,
finetuning_task=data_args.task_name,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
model = BertForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
return model
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,478 | closed | Fix example test and test_fetcher for examples | # What does this PR do?
#16475 introduced a failure in the example tests that went undetected because the test_fetcher had the wrong path for the example test. This PR fixes both. | 03-29-2022 16:11:19 | 03-29-2022 16:11:19 | Confirmed this fixes the oversight in `test_fetcher` and fixes the actual example since the example tests actually ran (and passed) here. Since this is a failure on master merging without review but happy to address comments in a follow-up PR!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,477 | closed | Add TF vision model code samples | # What does this PR do?
Add:
- TF_VISION_BASE_MODEL_SAMPLE
- TF_VISION_SEQ_CLASS_SAMPLE
I tested them manually (not through doctest).
(I will add code samples for TF speech models in another PR) | 03-29-2022 14:48:04 | 03-29-2022 14:48:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,476 | closed | Moved find_pruneable_heads_and_indices and no_init_weight to pytorch_… | …utils from modeling_utils
# What does this PR do?
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-29-2022 14:40:57 | 03-29-2022 14:40:57 | |
transformers | 16,475 | closed | [MNLI example] Prevent overwriting matched with mismatched metrics | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/16474.
The fix appends `_mm` to the metrics for the `mnli-mm` evaluation dataset to prevent overwriting the eval metrics from the `mnli` dataset, and writes them together in `eval_results.json` and `all_results.json`. The combined MNLI metrics look like this:
```json
{
"eval_accuracy": 0.36067244014263883,
"eval_accuracy_mm": 0.35506509357200977,
"eval_loss": 1.158889889717102,
"eval_loss_mm": 1.1670204401016235,
"eval_runtime": 16.4691,
"eval_runtime_mm": 15.9496,
"eval_samples": 9815,
"eval_samples_mm": 9832,
"eval_samples_per_second": 595.964,
"eval_samples_per_second_mm": 616.44,
"eval_steps_per_second": 18.641,
"eval_steps_per_second_mm": 19.311
}
```
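Conceptually, the renaming works like this (an illustrative sketch with placeholder numbers, not the exact diff):
```python
# Suffix the mismatched-split metrics with "_mm" before merging them with the matched-split ones.
matched = {"eval_accuracy": 0.361, "eval_loss": 1.159}     # placeholder values
mismatched = {"eval_accuracy": 0.355, "eval_loss": 1.167}  # placeholder values

combined = dict(matched)
combined.update({f"{key}_mm": value for key, value in mismatched.items()})
print(combined)  # {'eval_accuracy': 0.361, 'eval_loss': 1.159, 'eval_accuracy_mm': 0.355, 'eval_loss_mm': 1.167}
```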
@sgugger, @patil-suraj | 03-29-2022 10:43:28 | 03-29-2022 10:43:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The CI failure is unrelated to this PR (actually investigating it right now), so we can merge safely :-) |
transformers | 16,474 | closed | MNLI metrics overwritten | ## Description
When running the https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py example for the MNLI dataset, eval metrics calculated for the `validation_mismatched` dataset are overwriting the previously calculated metrics for the standard validation dataset.
## To reproduce
Simply run the default example script for text-classification with `TASK_NAME=mnli`.
## Expected behavior
When evaluation starts, the first one is done on the standard validation MNLI dataset and the metrics are saved to `eval_results.json` and `all_results.json` as:
```json
{
"epoch": 3.0,
"eval_accuracy": 0.8501830756712775,
"eval_loss": 0.4692825376987457,
"eval_runtime": 15.8047,
"eval_samples": 9832,
"eval_samples_per_second": 622.092,
"eval_steps_per_second": 77.761
}
```
After that, evaluation with `validation_mismatched` happens and unfortunately it's using the same keys as the previous one (`eval_accuracy`, `eval_loss`...), which causes overwriting.
In the ideal case, it would be useful to have both metrics written in these json files.
@sgugger, @patil-suraj | 03-29-2022 10:36:29 | 03-29-2022 10:36:29 | |
transformers | 16,473 | closed | Add TAPEX | # What does this PR do?
Remember [TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas), the table QA model by Google AI? Microsoft has now released TAPEX, a seq2seq model that outperforms TAPAS and is actually much simpler: table QA is just treated as a seq2seq problem.
As the weights can be directly loaded into a BART model, this PR only implements `TapexTokenizer`, which can be used to prepare tables and corresponding texts for the model.
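For illustration, the intended usage looks roughly like the sketch below (the checkpoint name and the toy table are assumptions for demonstration, not part of this PR):

```python
import pandas as pd
from transformers import TapexTokenizer, BartForConditionalGeneration

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base")

table = pd.DataFrame.from_dict({"year": [1896, 2012], "city": ["athens", "london"]})
query = "In which year did london host the Olympic Games?"

# The tokenizer flattens the table and the question into a single sequence for the seq2seq model.
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```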
This PR also adds 3 scripts that showcase how to fine-tune TAPEX on 3 important benchmarks: WikiSQL and WTQ for table question answering and TabFact for table fact verification.
Kudos to @SivilTaram (the original author) for improving my initial `TapexTokenizer` implementation, as well as adding the 3 fine-tuning scripts. | 03-29-2022 10:06:02 | 03-29-2022 10:06:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>`make fixup` is complaining:
```
examples/research_projects/tapex/run_wikitablequestions_with_tapex.py:50:1: F401 'wikisql_utils._TYPE_CONVERTER' imported but unused
examples/research_projects/tapex/run_wikitablequestions_with_tapex.py:50:1: F401 'wikisql_utils.retrieve_wikisql_query_answer_tapas' imported but unused
```
However, these functions are used in the script, so I can't remove these imports.<|||||>@NielsRogge Thanks for your huge effort! I personally think these two warnings are correct since these two imports are only used in `run_wikisql_with_tapex.py` instead of `run_wikitablequestions_with_tapex.py` (the hint message). I think we can remove them.
> examples/research_projects/tapex/run_wikitablequestions_with_tapex.py:50:1: F401 'wikisql_utils._TYPE_CONVERTER' imported but unused
examples/research_projects/tapex/run_wikitablequestions_with_tapex.py:50:1: F401 'wikisql_utils.retrieve_wikisql_query_answer_tapas' imported but unused
<|||||>@sgugger I've addressed all comments.<|||||>All models on the hub do work with the Auto API, can you elaborate? TAPEX is also added to `configuration_auto.py` and `modeling_auto.py`. |
transformers | 16,472 | closed | Fix blenderbot conversion script | # What does this PR do?
Fixes blenderbot conversion script. | 03-29-2022 09:31:38 | 03-29-2022 09:31:38 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16472). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,471 | closed | RuntimeError: Empty or `None` reference sentence found. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.18.0.dev0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.7.11
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@patrickvonplaten , @patil-suraj
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Marian
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
And here is a example of my dataset
> {"translation": {"Cls": "谁司票拟?", "Mdn": "谁起草的这个命令?"}}
{"translation": {"Cls": "百司章奏,置急足驰白乃下。", "Mdn": "百官的奏章,要用快马才能赶上。"}}
I organize the data as I'm told in readme. Although in readme it says that it should be jsonl file, I find that jsonl file won't work and just rename it as json works. It works well for several thousands steps.
## To reproduce
Steps to reproduce the behavior:
1. run this command `python examples/pytorch/translation/run_translation.py --model_name_or_path "Helsinki-NLP/opus-mt-zh-en" --do_train --do_eval --source_lang Cls --target_lang Mdn --source_prefix "translate Classical to Modern: " --train_file examples\pytorch\translation\train.json --validation_file examples\pytorch\translation\dev.json --test_file examples\pytorch\translation\test.json --output_dir D:/Gare-translation/Helsinki-NLP --per_device_train_batch_size=32 --per_device_eval_batch_size=16 --overwrite_output_dir --predict_with_generate --num_train_epochs=200 --save_total_limit=200 --save_steps=10000 --load_best_model_at_end True --evaluation_strategy "steps"`
2. finish first training epoch and run first evaluation
3. pops up error
> INFO|trainer.py:2412] 2022-03-29 15:36:18,174 >> ***** Running Evaluation *****
[INFO|trainer.py:2414] 2022-03-29 15:36:18,174 >> Num examples = 8438
[INFO|trainer.py:2417] 2022-03-29 15:36:18,174 >> Batch size = 4
Traceback (most recent call last):██████████████████████████████████████████████████| 2110/2110 [08:51<00:00, 3.68it/s]
File "examples/pytorch/translation/run_translation.py", line 624, in <module>
main()
File "examples/pytorch/translation/run_translation.py", line 541, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\transformers\trainer.py", line 1493, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\transformers\trainer.py", line 1620, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\transformers\trainer_seq2seq.py", line 70, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\transformers\trainer.py", line 2287, in evaluate
metric_key_prefix=metric_key_prefix,
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\transformers\trainer.py", line 2528, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "examples/pytorch/translation/run_translation.py", line 515, in compute_metrics
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\datasets\metric.py", line 430, in compute
output = self._compute(**inputs, **compute_kwargs)
File "C:\Users\Gare\.cache\huggingface\modules\datasets_modules\metrics\sacrebleu\daba8f731596c6a1a68d61f20220697f68c420a55e2096b4eea8e3ffdc406d96\sacrebleu.py", line 130, in _compute
**(dict(tokenize=tokenize) if tokenize else {}),
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\sacrebleu\compat.py", line 35, in corpus_bleu
return metric.corpus_score(hypotheses, references)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\sacrebleu\metrics\base.py", line 421, in corpus_score
stats = self._extract_corpus_statistics(hypotheses, references)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\sacrebleu\metrics\base.py", line 366, in _extract_corpus_statistics
ref_cache = self._cache_references(references)
File "C:\ProgramData\Anaconda3\envs\gare\lib\site-packages\sacrebleu\metrics\base.py", line 333, in _cache_references
raise RuntimeError("Empty or `None` reference sentence found.")
RuntimeError: Empty or `None` reference sentence found.
0%| | 500/3418600 [10:49<1234:06:48, 1.30s/it]
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Any advice that helps with it would be appreciated.
<!-- A clear and concise description of what you would expect to happen. -->
| 03-29-2022 08:10:45 | 03-29-2022 08:10:45 | Hi @Gare-Ng ,
> I organize the data as I'm told in readme. Although in readme it says that it should be jsonl file, I find that jsonl file won't work and just rename it as json works
The file format is expected to be `jsonl` but the scripts expect the file extension to be `.json`.
I think the reason for this error is that there seems to be an example which has an empty string as the translation/target. The example scripts are kept simple and easy to adapt, so they don't do any extra pre-processing to detect and remove empty examples. You should inspect the dataset, remove such problematic examples, and run the scripts. <|||||>Thank you @patil-suraj, and I did write a small script to check whether there are such empty strings.
> which has an empty string as the translation/target.
But it turns out that there are no such problematic examples in my train, dev, and test files. I trained this model without modifying the json files this morning and it worked very well. I vaguely remember this error popping up when I decided to switch to a different model and train; at first I thought it was related to the new model, but the error remained when I changed the model back. I still get this error now, and I'm certain I didn't change the json files. My little script is here just in case.
```
import jsonlines
jsonl_name=r'C:\Users\Gare\PycharmProjects\Gare\transformers\examples\pytorch\translation\test.json'
f=jsonlines.open(jsonl_name, "r")
lines=1
errors=[]
for i in f:
    if len(i['translation']['Mdn']) == 0 or len(i['translation']['Cls']) == 0:
        errors.append(lines)
    lines += 1
if len(errors) == 0:
    print('No Error')
else:
    print(errors)
```
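If it helps, here is a slightly stricter filter (an illustrative sketch, not part of the example scripts; the file paths are assumptions) that also drops whitespace-only strings and writes a cleaned copy of the data:

```python
import json

# Illustrative clean-up pass: keep only pairs where both sides contain
# non-whitespace text, and write the result to a new jsonl-formatted file.
src_path = "examples/pytorch/translation/train.json"        # assumed input path
dst_path = "examples/pytorch/translation/train_clean.json"  # assumed output path

kept, dropped = 0, 0
with open(src_path, encoding="utf-8") as fin, open(dst_path, "w", encoding="utf-8") as fout:
    for line in fin:
        if not line.strip():
            dropped += 1
            continue
        pair = json.loads(line)["translation"]
        if pair.get("Cls", "").strip() and pair.get("Mdn", "").strip():
            fout.write(json.dumps({"translation": pair}, ensure_ascii=False) + "\n")
            kept += 1
        else:
            dropped += 1
print(f"kept {kept} examples, dropped {dropped}")
```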
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,470 | closed | Flax LM scripts doesn't scale-up on TPU Pod. | ## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-5.8.0-1035-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.3.4
- JaxLib version: 0.3.2
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- JAX/Flax: @patil-suraj
## Information
Model I am using (Bert, XLNet ...): T5 - Flax
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details below)
The T5 MLM script, as well as the other Flax masked-language-modeling scripts, works fine on a single TPU node. However, when running the script on a TPU pod, it doesn't scale up correctly.
Assume a per-device batch size of 32 on a single TPU v4-8 node, which gives a global batch size of 128. When we scale up to a TPU v4-64 pod, a per-device batch size of 32 no longer works and gives OOM; it has to be reduced to 4, which means the global batch size stays at 128.
Furthermore, the training speed remains the same between a single node and a pod.
## To reproduce
Steps to reproduce the behavior:
1. Run any example on https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling using a single tpu node.
2. Re-run it again using a tpu pod.
## Expected behavior
We expect the script to scale up correctly on a TPU pod.
| 03-29-2022 07:29:03 | 03-29-2022 07:29:03 | So, I think we have figured out the problem with flax team :
https://github.com/google/flax/discussions/2017
The main problem is that the examples send the global batch for all hosts and nodes to the `shard` function.
In the examples, it should instead shard only the local per-host batch.<|||||>I believe something like this should work:
replace:
`model_inputs = shard(model_inputs.data)`
with:
```
loca_host_model_inputs = {}
for key, value in model_inputs.data.items():
    loca_host_model_inputs[key] = np.split(model_inputs.data[key], jax.process_count(), axis=0)[jax.process_index()]
model_inputs = shard(loca_host_model_inputs)
```
<|||||>A more simplified version:
loca_host_model_inputs = {key:np.split(model_inputs.data[key], jax.process_count(), axis=0)[jax.process_index()] for key, value in model_inputs.data.items()}<|||||>If you think it makes sense, I will create a pull request to fix this issue.<|||||>I think @patil-suraj @borisdayma are working quite a bit with TPU Pods sharding at the moment<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,469 | closed | missing trainer import | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-29-2022 04:11:08 | 03-29-2022 04:11:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,468 | closed | after pip install -e . pip list does not show the transformers package | Hi, I try to install transformers from source and find that `pip list` does not show the package even though there is no error during installation.
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
The installation output:
Obtaining file:///workspace/tmp/transformers
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (21.3)
Requirement already satisfied: requests in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (2.27.1)
Requirement already satisfied: tokenizers!=0.11.3,>=0.11.1 in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (0.11.6)
Requirement already satisfied: huggingface-hub<1.0,>=0.1.0 in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (0.4.0)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (4.63.1)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (2021.11.2)
Requirement already satisfied: filelock in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (3.6.0)
Requirement already satisfied: sacremoses in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (0.0.49)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (1.22.3)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.8/dist-packages (from transformers==4.18.0.dev0) (6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.8/dist-packages (from huggingface-hub<1.0,>=0.1.0->transformers==4.18.0.dev0) (4.1.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging>=20.0->transformers==4.18.0.dev0) (3.0.7)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests->transformers==4.18.0.dev0) (3.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.8/dist-packages (from requests->transformers==4.18.0.dev0) (2021.10.8)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.8/dist-packages (from requests->transformers==4.18.0.dev0) (1.26.9)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.8/dist-packages (from requests->transformers==4.18.0.dev0) (2.0.12)
Requirement already satisfied: joblib in /usr/local/lib/python3.8/dist-packages (from sacremoses->transformers==4.18.0.dev0) (1.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.8/dist-packages (from sacremoses->transformers==4.18.0.dev0) (1.16.0)
Requirement already satisfied: click in /usr/local/lib/python3.8/dist-packages (from sacremoses->transformers==4.18.0.dev0) (7.1.2)
Installing collected packages: transformers
Running setup.py develop for transformers
Successfully installed transformers
Part of the `pip list` output:
thinc 8.0.15
tifffile 2022.3.25
tinydb 4.7.0
tokenizers 0.11.6
torch 1.7.1+cu101
torchvision 0.8.2+cu101
tornado 5.1.1
tqdm 4.63.1
traitlets 5.1.1
typer 0.4.0
typing_extensions 4.1.1
Unidecode 1.1.1
urllib3 1.26.9 | 03-29-2022 03:31:18 | 03-29-2022 03:31:18 | solved from: https://github.com/huggingface/transformers/issues/16468 |
transformers | 16,467 | closed | fix wrong variable name | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-29-2022 02:57:57 | 03-29-2022 02:57:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,466 | open | TEAMS: Training ELECTRA Augmented with Multi-word Selection | Hi,
this [ACL paper](https://arxiv.org/abs/2106.00139) from last year (2021) proposed an ELECTRA extension:
> Pre-trained text encoders such as BERT and its variants have recently achieved state-of-the-art performances on many NLP tasks. While being effective, these pre-training methods typically demand massive computation resources. To accelerate pre-training, ELECTRA trains a discriminator that predicts whether each input token is replaced by a generator. However, this new task, as a binary classification, is less semantically informative. In this study, we present a new text encoder pre-training method that improves ELECTRA based on multi-task learning. Specifically, we train the discriminator to simultaneously detect replaced tokens and select original tokens from candidate sets. We further develop two techniques to effectively combine all pre-training tasks: (1) using attention-based networks for task-specific heads, and (2) sharing bottom layers of the generator and the discriminator. Extensive experiments on GLUE and SQuAD datasets demonstrate both the effectiveness and the efficiency of our proposed method.
Implementation is available in the TensorFlow models repository: https://github.com/tensorflow/models/tree/master/official/projects/teams
I would like to work on that, to see if it can easily be added (e.g. only writing a model conversion script).
Unfortunately, no model weights do exist at the moment. So I would like to pre-train a model and check if conversion can be done without changing the current ELECTRA implementation too much.
This issue tracks the integration into Transformers :hugs:
| 03-28-2022 22:46:26 | 03-28-2022 22:46:26 | Issue that tracks generation of pre-training data: https://github.com/tensorflow/models/issues/10567 |
transformers | 16,465 | closed | TF: properly handle kwargs in encoder_decoder architectures | # What does this PR do?
Fixes #16400
Our `input_processing` function is very strict and raises an exception if unexpected kwargs are passed. In the issue linked above, it was found that an exception was being raised and, upon further inspection, we can see that some arguments were being passed in the wrong place.
There were no tests for it (i.e. no `encoder_decoder` calls with `kwargs`), so it slipped under the cracks. This PR also adds tests for it. The updated tests were all failing before the corresponding fix. | 03-28-2022 22:12:51 | 03-28-2022 22:12:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,464 | closed | Error when training LayoutLMv2 model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.12
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script? Yes
- Using distributed or parallel set-up in script? No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@NielsRogge
## Information
The model I am using is LayoutLMv2. I'm using a pre-trained model from `LayoutLMv2ForTokenClassification` and the `Trainer` for training. To process the input data I'm using a data loader (a minimal sketch follows the note below) that:
- reads the image file and the annotations word ➙ label;
- encode the inputs using the `LayoutLMv2Processor`;
- returns the encoding
> **_Note_**: The process above was strongly based on [this notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD_using_HuggingFace_Trainer.ipynb#scrollTo=bEkkxXBcm9yU)
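A minimal sketch of such a dataset class (illustrative only: the checkpoint name, the example dictionary layout, and `max_length` are assumptions; the feature extractor is built with `apply_ocr=False` so that the words and boxes from the annotations are used instead of OCR):

```python
import torch
from PIL import Image
from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2Tokenizer, LayoutLMv2Processor

feature_extractor = LayoutLMv2FeatureExtractor(apply_ocr=False)
tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(feature_extractor, tokenizer)


class TokenClassificationDataset(torch.utils.data.Dataset):
    def __init__(self, examples, label2id):
        # each example: {"image_path": ..., "words": [...], "boxes": [...], "labels": [...]}
        self.examples = examples
        self.label2id = label2id

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        example = self.examples[idx]
        image = Image.open(example["image_path"]).convert("RGB")
        word_labels = [self.label2id[label] for label in example["labels"]]
        # boxes are expected to be normalized to the 0-1000 range
        encoding = processor(
            image,
            example["words"],
            boxes=example["boxes"],
            word_labels=word_labels,
            truncation=True,
            padding="max_length",
            max_length=512,
            return_tensors="pt",
        )
        # remove the batch dimension added by return_tensors="pt"
        return {k: v.squeeze(0) for k, v in encoding.items()}
```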
The problem arises randomly during the training phase. Therefore, the training is interrupted, and the following traceback is shown in the terminal:
```
Traceback (most recent call last):
File "train.py", line 77, in <module>
main()
File "train.py", line 70, in main
trainer.train()
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/transformers/trainer.py", line 1280, in train
tr_loss += self.training_step(model, inputs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in training_step
loss = self.compute_loss(model, inputs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/transformers/trainer.py", line 1807, in compute_loss
outputs = model(**inputs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 1171, in forward
return_dict=return_dict,
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 894, in forward
position_ids=visual_position_ids,
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 755, in _calc_img_embeddings
visual_embeddings = self.visual_proj(self.visual(image))
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 583, in forward
features = self.backbone(images_input)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/detectron2/modeling/backbone/fpn.py", line 126, in forward
bottom_up_features = self.bottom_up(x)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/detectron2/modeling/backbone/resnet.py", line 449, in forward
x = stage(x)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/container.py", line 119, in forward
input = module(input)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/detectron2/modeling/backbone/resnet.py", line 201, in forward
out = self.conv3(out)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/dtw2/anaconda3/envs/layoutlmv2/lib/python3.7/site-packages/detectron2/layers/wrappers.py", line 107, in forward
x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups
RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR
```
Initially, I thought this was happening because of some problem related to the data in the current training batch; however, when training singly with the "problematic" data, no problem was raised.
Regarding the "problematic" input data, I've checked the image metadata and content, and they seem to be fine. Following this, I noticed that by removing the "problematic" data from the dataset, the training was able to surpass the previous breaking point until randomly breaking with another batch.
After analyzing the traceback, I decided to write my own `training_step` or `compute_loss` methods to replace the default ones in the `Trainer`, since the problem was occurring in them. The objective was to handle the error by advancing to the next available batch or returning the previous loss without computing the backward propagation for the current loss; however, this wasn't possible since the error was persistent, thus, stopping the training process.
Any tips on how to fix/handle this?
| 03-28-2022 21:51:32 | 03-28-2022 21:51:32 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@GustavoStahl I had a similar issue and I found that some of my bboxes had values outside of the expected `0` to `1000` range. In my case they were negative.
I found this by running in CPU mode.
I would recommend the [`normalize_bbox` function in the docs](https://huggingface.co/docs/transformers/main/en/model_doc/layoutlmv2) be updated to handle this.
Maybe something like
```python3
def normalize_bbox(bbox, width, height):
    bbox = [
        int(1000 * (bbox[0] / width)),
        int(1000 * (bbox[1] / height)),
        int(1000 * (bbox[2] / width)),
        int(1000 * (bbox[3] / height)),
    ]
    # handle edge case of coord < 0 or coord > 1000
    return [
        min(max(0, coord), 1000)
        for coord in bbox
    ]
```
or
```python3
def normalize_bbox(bbox, width, height):
    bbox = [
        int(1000 * (bbox[0] / width)),
        int(1000 * (bbox[1] / height)),
        int(1000 * (bbox[2] / width)),
        int(1000 * (bbox[3] / height)),
    ]
    assert all(0 <= coord <= 1000 for coord in bbox)
    return bbox
``` |
transformers | 16,463 | closed | Export longformer to ONNX | Hi :) I need to convert a pre-trained pytorch model to tensorflow 2. I'm trying to follow these instructions https://huggingface.co/docs/transformers/serialization#exporting-a-model-to-onnx (export to onnx first).
But I'm having this error:
KeyError: "longformer is not supported yet. Only ['albert', 'bart', 'mbart', 'bert', 'ibert', 'camembert', 'distilbert', 'marian', 'm2m-100', 'roberta', 't5', 'xlm-roberta', 'gpt2', 'gpt-neo', 'layoutlm', 'electra'] are supported. If you want to support longformer please propose a PR or open up an issue."
Is it possible to add support to longformer, please? Thanks
| 03-28-2022 20:38:37 | 03-28-2022 20:38:37 | > Is it possible to add support to longformer, please? Thanks
Hey @anasilvasd we have created a group recently that aims to add ONNX support for all models on the hub. Maybe you could help us by doing the implementation yourself?
Here is the link of the issue relating all the models that need help #16308
And here is the HF org if you want to join us: [ONNXConfig for all](https://huggingface.co/OWG)
You will find help on implementation by reading issue post and looking at others implementation linked in the issue below. Have a great week-end<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,462 | closed | add doctests to TF ViT | # What does this PR do?
Add doctests for the TF version of ViT
Fixes # (issue)
#16292
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| 03-28-2022 19:59:49 | 03-28-2022 19:59:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, @johko
Thank you for this PR! Really appreciated.
I just realized that we have [PT_VISION_BASE_MODEL_SAMPLE](https://github.com/huggingface/transformers/blob/c85547af2b69f9082bcd7bac97092b1d162f3fdc/src/transformers/utils/doc.py#L541), but not `TF_VISION_BASE_MODEL_SAMPLE`.
Ideally, we would like to re-use code sample in [doc.py](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/doc.py). I will need to discuss with the team to make a decision.
I will come back to you once we have a decision. Sorry for the inconvenience!<|||||>Hi, @johko
After the discussion with the team, we think it would be really better for us to add the following in `doc.py`
```
TF_VISION_BASE_MODEL_SAMPLE
TF_VISION_SEQ_CLASS_SAMPLE
```
(as already done in PyTorch side).
I will open a PR to add these, and keep you updated when that PR is merged. Thank you.<|||||>@johko
```
TF_VISION_BASE_MODEL_SAMPLE
TF_VISION_SEQ_CLASS_SAMPLE
```
are added in this PR (and already merged to `main`) #16477
Once you pull the upstream's `main` in your local clone's `main` (or `master`), and rebase or merge it into your working branch, the process is just a matter of using **@add_code_sample_docstrings** together with **expected_output**, **checkpoint**, etc. You can take this as a reference:
https://github.com/huggingface/transformers/pull/16363/files#diff-5707805d290617078f996faf1138de197fa813f78c0aa5ea497e73b5228f1103
Regarding the checkpoint to use (an illustrative usage sample follows the list):
- TFViTModel
"google/vit-base-patch16-224-in21k"
- TFViTForImageClassification
"google/vit-base-patch16-224"
I manually checked the code sample. Let me know if you encounter any issue.<|||||>@ydshieh
Thanks for the explanation, I'll look into it and implement the changes within the next days<|||||>@ydshieh
Ah, sorry I messed up the history now, I'm still getting used to the VS Code Git plugin.
But I added the `add_code_sample_docstrings` decorator to `modeling_tf_vit.py` now<|||||>@johko
Thank you for the update!
Regarding the commit history, could you try to fix it. I never need to deal with this situation so far, but here are a few threads I could find (potentially) relevant:
- https://stackoverflow.com/questions/44000096/github-pull-request-displaying-too-many-file-changes
- https://stackoverflow.com/questions/56708751/github-pull-requests-showing-more-and-more-old-merges
Please note that these links are merely an attempt to fix the current situation, I could not guarantee that the mentioned methods would work well and without any problem.
In any case, since the **real** change is very small, you can always update the master/main branch of your local clone, and create a new branch + add the changes there, then open a new PR.
Thank you for your understanding!<|||||>Thank you, I'll look into it and try to fix it. In the end, as you mentioned I might always just make a new branch from my local master<|||||>@ydshieh I'll close this PR and create a new one with a clean history. It seems VS Code used _git sync_ and this messes up the history after a rebase<|||||>No problem, @johko . Thank you for the effort! |
transformers | 16,461 | closed | Add inputs vector to calculate metric method | # What does this PR do?
This is a PR suggestion for including the inputs in the EvalPrediction object to perform metrics calculation that depends on inputs. For example, simplification metrics such as SARI not only use the predictions and references but also the inputs for the score calculation.
The proposed implementation will enable the Trainer to work with the metrics class. However, the compute_metrics method should be implemented locally (in the metrics file, for example), since the original method still receives predictions and references.
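For illustration, a locally implemented `compute_metrics` that consumes the inputs might look like the sketch below (the checkpoint, tokenizer, and SARI metric object are assumptions taken from a typical seq2seq setup, not code added by this PR):

```python
import numpy as np
from datasets import load_metric
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # placeholder checkpoint
sari_metric = load_metric("sari")


def compute_metrics(eval_preds):
    preds, labels, inputs = eval_preds.predictions, eval_preds.label_ids, eval_preds.inputs
    # Replace -100 (ignored positions) before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    decoded_inputs = tokenizer.batch_decode(inputs, skip_special_tokens=True)
    # SARI compares each prediction against both the source sentence and the references.
    return sari_metric.compute(
        sources=decoded_inputs,
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
```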
Supports #15966
## Who can review?
@sgugger
| 03-28-2022 19:40:09 | 03-28-2022 19:40:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Unfortunately, we can't change the `EvalPredictions` namedtuple structure like this as it would be a massive breaking change. Users would suddenly have to unpack it with three objects instead of two, so every `compute_metrics` function out there in the wild would suddenly fail.<|||||>Ok, spent a bit of time on this and to enable an `EvalPredictions` that work with/without inputs, the class should be replaced by the following one:
```py
class EvalPrediction:
    """
    Evaluation output (always contains labels), to be used to compute metrics.

    Parameters:
        predictions (`np.ndarray`): Predictions of the model.
        label_ids (`np.ndarray`): Targets to be matched.
        inputs (`np.ndarray`, *optional*): Inputs of the model.
    """

    def __init__(
        self,
        predictions: Union[np.ndarray, Tuple[np.ndarray]],
        label_ids: Union[np.ndarray, Tuple[np.ndarray]],
        inputs: Optional[Union[np.ndarray, Tuple[np.ndarray]]] = None,
    ):
        self.predictions = predictions
        self.label_ids = label_ids
        self.inputs = inputs

    def __iter__(self):
        if self.inputs is not None:
            return iter((self.predictions, self.label_ids, self.inputs))
        else:
            return iter((self.predictions, self.label_ids))

    def __getitem__(self, idx):
        if idx < 0 or idx > 2:
            raise IndexError("tuple index out of range")
        if idx == 2 and self.inputs is None:
            raise IndexError("tuple index out of range")
        if idx == 0:
            return self.predictions
        elif idx == 1:
            return self.label_ids
        elif idx == 2:
            return self.inputs
```
Then we should add a flag so the user can choose whether or not they want the inputs included for metrics or not (default `False` so that there is no change). With those two things, we can enable your use case while maintaining backward compatibility.
Can you include them in your PR?<|||||>Hi @sgugger ,
Thanks for adding this change. I've tested this code with the summarization examples from pytorch and I'm afraid this change will break other people's code. The problem is that since the inputs are passed in the trainer by default (based on my PR):
**trainer.py**
```
self.compute_metrics(EvalPrediction(inputs=all_inputs, predictions=all_preds, label_ids=all_labels))
```
The inputs in `EvalPrediction` will never be `None`, and then, it will fail when returning the predictions in `compute_metrics`:
**run_summarization.py**
```
def compute_metrics(eval_preds):
    preds, labels = eval_preds
```
```
File "./transformers/examples/pytorch/summarization/run_summarization.py", line 575, in compute_metrics
preds, labels = eval_preds
ValueError: too many values to unpack (expected 2)
```
You can see in the debugger that all 3 (inputs, preds and labels) are coming:
<img width="934" alt="image" src="https://user-images.githubusercontent.com/6901031/161257829-5c146d32-2154-4151-bc7a-ae75b5470267.png">
In my case, I have my own implementation for compute_metrics, however, for other users it will fail by default. When you suggest the flag, where is it located? In the `class Trainer` and then an `if` before each call of `EvalPrediction`?
This is my testing case:
```
./transformers/examples/pytorch/summarization/run_summarization.py
--model_name_or_path
t5-small
--do_train
--do_eval
--train_file test.json
--validation_file test.json
--source_prefix
"summarize: "
--output_dir
/tmp/tst-summarization
--overwrite_output_dir
--per_device_train_batch_size=4
--per_device_eval_batch_size=4
--predict_with_generate
```
**test.json**
```
{"text": "I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder", "summary": "I'm sitting in a room where I'm waiting for something to happen"}
{"text": "I see trees so green, red roses too. I see them bloom for me and you. And I think to myself what a wonderful world. I see skies so blue and clouds so white. The bright blessed day, the dark sacred night. And I think to myself what a wonderful world.", "summary": "I'm a gardener and I'm a big fan of flowers."}
{"text": "Christmas time is here. Happiness and cheer. Fun for all that children call. Their favorite time of the year. Snowflakes in the air. Carols everywhere. Olden times and ancient rhymes. Of love and dreams to share", "summary": "It's that time of year again."}
```
Thanks,
Laura<|||||>> The problem is that since the inputs are passed in the trainer by default (based on my PR):
As I said above, this needs to be controlled by a flag (for instance `include_inputs_for_metrics`) in `TrainingArguments` which would be `False` by default to avoid any breaking change.<|||||>Thanks for the clarification, that sounds better. I've submitted an additional commit in the PR. Is the first time I update a PR, let me know if I've missed something. <|||||>> src/transformers/trainer.py
I thought about the OOM issue, but should I add:
`if self.include_inputs_for_metrics:`
Everywhere in the code? I can't think of a more elegant way :)
And for the new changes, is it ok just another commit in the pull request just like I did the last time? Just checking, so I don't do a mess :P<|||||>> should I add `if self.include_inputs_for_metrics:` Everywhere in the code? I can't think of a more elegant way :)
You should make sure that the inputs are left as None (like the labels/losses) when the flag is False, and only set them when the flag is True. I've left a comment to show you where.
> And for the new changes, is it ok just another commit in the pull request just like I did the last time? Just checking, so I don't do a mess :P
That's completely ok, we will squash everything when merging.<|||||>Changes done, let me know any further feedback :)<|||||>I've added the requested changes. Also, I've run `make fixup`, so there are also some automatic indentation changes as well. <|||||>Thanks! There is one last issue with a docstring badly formatted. Could you run `make style` on your branch?<|||||>Sure, no problem. I'm having issues with the dependencies to run `make style`, could you please let me know what's the full command for installing all the suite?
`pip install .[quality] `
This is my log:
```
% make style
black examples tests src utils
Skipping .ipynb files as Jupyter dependencies are not installed.
You can fix this by running ``pip install black[jupyter]``
All done! ✨ 🍰 ✨
1519 files left unchanged.
isort examples tests src utils
WARNING: Unable to parse file examples due to [Errno 21] Is a directory: './transformers/examples'
WARNING: Unable to parse file tests due to [Errno 21] Is a directory: './transformers/tests'
WARNING: Unable to parse file src due to [Errno 21] Is a directory: './transformers/src'
WARNING: Unable to parse file utils due to [Errno 21] Is a directory: './transformers/utils'
/Library/Developer/CommandLineTools/usr/bin/make autogenerate_code
running deps_table_update
updating src/transformers/dependency_versions_table.py
/Library/Developer/CommandLineTools/usr/bin/make extra_style_checks
python utils/custom_init_isort.py
doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source
make[1]: doc-builder: No such file or directory
make[1]: *** [extra_style_checks] Error 1
make: *** [style] Error 2
```<|||||>It looks like your branch is not up to par with the main branch for the setup. Can you manually install `pip install hf-doc-builder`? It should be in the quality extras but maybe you don't have it because it was added recently-ish.<|||||>Now is working :) Thanks for that one. I've committed the updated file.
```
% make style
black examples tests src utils
Skipping .ipynb files as Jupyter dependencies are not installed.
You can fix this by running ``pip install black[jupyter]``
All done! ✨ 🍰 ✨
1519 files left unchanged.
isort examples tests src utils
/Library/Developer/CommandLineTools/usr/bin/make autogenerate_code
running deps_table_update
updating src/transformers/dependency_versions_table.py
/Library/Developer/CommandLineTools/usr/bin/make extra_style_checks
python utils/custom_init_isort.py
doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source
Overwriting content of src/transformers/training_args.py.
Cleaned 1 files
```<|||||>Yes, it's all good now. Thanks again for your contribution!<|||||>Awesome! :) Thanks for leading this effort, good job :D |
transformers | 16,460 | closed | Remove duplicate mLuke | This PR removes a duplicate [mLuke](https://huggingface.co/docs/transformers/main/en/model_doc/mluke) from the model API docs. | 03-28-2022 19:16:20 | 03-28-2022 19:16:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,459 | closed | doctest for BART and IBERT | # What does this PR do?
Adds BART and IBERT to the doc tests. | 03-28-2022 18:47:52 | 03-28-2022 18:47:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16459). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @ydshieh , I would need your help to move forward in my case <|||||>Hi, @abdouaziz
Sorry for the late response. The Bart model is in fact already done by our internal developer.
Could you revert your change on the Bart model, please?
Regarding iBERT, I will review later. Don't hesitate to drop a comment if you get any problem.
Apologies for this situation 🙏<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,458 | closed | How to improve accuracy (Model on TrOCR) | I use the TrOCR model to generate the LaTeX sequence of a math expression from a handwritten math expression image,
but the results are unsatisfactory; below are the eval loss, the train loss, and the repository: https://github.com/win5923/TrOCR-Handwritten-Mathematical-Expression-Recognition




I used CROHME 2014 to test this model, but it performs worse than the models shown in the images above.
| 03-28-2022 18:34:30 | 03-28-2022 18:34:30 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Not exactly, but I think your pretrained model's tokenizer is not optimized for your math LaTeX, so
you should use a tokenizer trained on LaTeX.
Find one, or collect LaTeX data and train a new tokenizer using SentencePiece (a minimal sketch follows).
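A minimal sketch of that last step (the corpus file name and vocabulary size are assumptions for illustration):

```python
import sentencepiece as spm

# Train a SentencePiece model on a plain-text file of LaTeX sequences,
# one expression per line.
spm.SentencePieceTrainer.train(
    input="latex_corpus.txt",       # assumed corpus file
    model_prefix="latex_tokenizer",
    vocab_size=4000,                # assumed vocabulary size
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="latex_tokenizer.model")
print(sp.encode(r"\frac{a}{b} + \sqrt{x^{2}}", out_type=str))
```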
|
transformers | 16,457 | closed | [Benchmark] | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| 03-28-2022 18:04:51 | 03-28-2022 18:04:51 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,456 | closed | Deberta gives completely wrong and random output, tested on multiple machines and multiple versions of transformers | I do not understand why you would add this model here, write lengthy documentation, and let it not work at all.
**Its prediction is: The capital of France is plunge**
I have tried on multiple machines and multiple versions of transformers, and the results are just random.
from transformers import pipeline
unmasker = pipeline('fill-mask', model='deberta-base')
the_out = unmasker("The capital of France is [MASK].")
print("the_out",the_out)
As you can see, the DeBERTa results are completely wrong; there seems to be some big error in porting it to transformers.
the_out [{'score': 0.001861382625065744, 'token': 18929, 'token_str': 'ABC', 'sequence': 'The capital of France isABC.'}, {'score': 0.0012871784856542945, 'token': 15804, 'token_str': ' plunge', 'sequence': 'The capital of France is plunge.'}, {'score': 0.001228992477990687, 'token': 47366, 'token_str': 'amaru', 'sequence': 'The capital of France isamaru.'}, {'score': 0.0010126306442543864, 'token': 46703, 'token_str': 'bians', 'sequence': 'The capital of France isbians.'}, {'score': 0.0008897537481971085, 'token': 43107, 'token_str': 'insured', 'sequence': 'The capital of France isinsured.'}]
```
from transformers import pipeline

unmasker = pipeline('fill-mask', model='bert-base-uncased')
the_out = unmasker("The capital of France is [MASK].")
print("the_out", the_out)
```
The BERT result is good:
the_out [{'score': 0.41678911447525024, 'token': 3000, 'token_str': 'paris', 'sequence': 'the capital of france is paris.'}, {'score': 0.07141649723052979, 'token': 22479, 'token_str': 'lille', 'sequence': 'the capital of france is lille.'}, {'score': 0.06339272856712341, 'token': 10241, 'token_str': 'lyon', 'sequence': 'the capital of france is lyon.'}, {'score': 0.04444753751158714, 'token': 16766, 'token_str': 'marseille', 'sequence': 'the capital of france is marseille.'}, {'score': 0.030297178775072098, 'token': 7562, 'token_str': 'tours', 'sequence': 'the capital of france is tours.'}]
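For reference, a minimal sketch (assuming the `microsoft/deberta-base` checkpoint id on the Hub) to check whether the masked-LM head weights are actually part of a checkpoint; `from_pretrained` logs any weights that had to be newly initialized:
```
from transformers import AutoModelForMaskedLM

# If the fill-mask head shows up in the "newly initialized" warning,
# its predictions will be random.
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-base")
```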
@LysandreJik
| 03-28-2022 17:56:12 | 03-28-2022 17:56:12 | I don't think the pre-trained language model weights are available for deberta-base, so the output looks completely random. See here for more info: https://github.com/huggingface/transformers/issues/15216<|||||>I see, but it does not work for base-v3 either. I will check other models and see.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,455 | closed | T5base loss function | Hi!
I am now using the pretrained T5-base model for translation. For the loss function, it seems that you did not get rid of the loss
for the `<padding>` token (i.e., you did not use an ignore index in the cross-entropy loss).
I used `model(x_id, x_attn, y_id, y_attn).loss` to get the loss from the pre-trained model. The value seems to be the same as the cross-entropy loss computed without ignoring padding, as shown below.
```
import torch

# `temp` is the output of model(x_id, x_attn, y_id, y_attn)
# model loss
logits = temp.logits
model_loss = temp.loss
print('model_loss', model_loss)

# my loss, **without ignoring padding**
_criterion = torch.nn.CrossEntropyLoss(reduction='none')
loss_seq = _criterion(logits.view(-1, logits.shape[-1]), target_ids.view(-1)).view(batch_size, -1)
loss_vec = torch.mean(loss_seq, -1).squeeze()
print('mylossvecmean', loss_vec.mean())
```
Btw, is there any example for fine-tuning T5 for translation? I failed to achieve the BLEU score they reported in the paper.
Thanks! | 03-28-2022 17:02:03 | 03-28-2022 17:02:03 | Hey @KevinZhoutianyi,
Sorry, I don't fully understand what exactly the problem is. Could you add a reproducible code snippet that shows the problem?
Regarding fine-tuning T5 for translation, there are some examples on the forum: https://discuss.huggingface.co/ I believe.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I have also noticed that, and still have no idea why the implemented loss function of T5 (and BART) does not set "ignore_index" ( [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)). I think the PAD token should be ignored when computing the loss.<|||||>Update: I guess I have found the reason. The implementation uses the default setting of the "nn.CrossEntropyLoss" function, where "ignore_index" is set to -100. So, when computing the loss, you need to map the pad token positions in the labels to -100, as shown below:
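A minimal sketch of that conversion (here `tokenizer`, `target_texts`, `input_ids` and `attention_mask` are placeholders, not variables from this thread):
```
labels = tokenizer(target_texts, padding=True, return_tensors="pt").input_ids
# Positions set to -100 are ignored by the default CrossEntropyLoss inside the model.
labels[labels == tokenizer.pad_token_id] = -100
loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
```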
See: [https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model)
|
transformers | 16,454 | closed | Fix some TF GPT-J CI testings | # What does this PR do?
Fix some TF GPT-J CI tests (scheduled):
- `test_mixed_precision`: require some casting
- `test_saved_model_creation` and `test_saved_model_creation_extended`: require `shape_list` instead of `shape` (see the sketch after this list)
- `test_model_from_pretrained`: skip for now, otherwise GPU OOM
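(Not the actual GPT-J diff, just a minimal sketch of the `shape_list` pattern, with a toy `split_heads` helper assumed for illustration:)
```
import tensorflow as tf
from transformers.modeling_tf_utils import shape_list

def split_heads(x: tf.Tensor, num_heads: int) -> tf.Tensor:
    # In graph mode / SavedModel export, static `x.shape` entries can be None;
    # shape_list falls back to tf.shape for the dynamic dimensions.
    batch, seq_len, hidden = shape_list(x)
    return tf.reshape(x, (batch, seq_len, num_heads, -1))
```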
With the changes in this PR, only the following test fails: `test_gptj_sample_max_time`. For example, in
https://github.com/huggingface/transformers/blob/c85547af2b69f9082bcd7bac97092b1d162f3fdc/tests/gptj/test_modeling_tf_gptj.py#L413
PT gives a quite short generated sequence (say, 19 tokens), while TF gives a sequence of length 256, which takes much more time and therefore fails the test.
I feel this remaining issue is better addressed in another PR. | 03-28-2022 16:48:55 | 03-28-2022 16:48:55 | I need to check why
```
@unittest.skipIf(len(tf.config.list_physical_devices("GPU")) > 0, "skip testing on GPU for now to avoid GPU OOM.")
```
causes problems in other tests (torch, pipeline, etc.).
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Regarding the `max_time`/generate test -- when I first reviewed the GPT-J PR, I had little understanding of the current state of TF generate. Now I can tell that this test makes no sense :D Contrarily to PT's generate, TF generate has no time-based stopping criteria, so it is natural that the test fails. I'd remove it.<|||||>> Regarding the `max_time`/generate test -- when I first reviewed the GPT-J PR, I had little understanding of the current state of TF generate. Now I can tell that this test makes no sense :D Contrarily to PT's generate, TF generate has no time-based stopping criteria, so it is natural that the test fails. I'd remove it.
OK, thank you for the feedback.
But just curious (off-topic): `TF generate has no time-based stopping criteria` --> do we plan to support this in the future ??
(not very sure, but I remembered before TF can generate short sequences too. And if TF can't stop earlier, it looks like a quite big drawback ..? Anyway, we shouldn't discuss this generation thing in this PR.)<|||||>> But just curious (off-topic): `TF generate has no time-based stopping criteria` --> do we plan to support this in the future ??
>
> (not very sure, but I remembered before TF can generate short sequences too. And if TF can't stop earlier, it looks like a quite big drawback ..? Anyway, we shouldn't discuss this generation thing in this PR.)
The plan we have for the refactoring does not mention extras like the stopping criteria, so I can only tell that it probably won't happen in the next 2-3 months :) We can generate short sentences with TF if we pass the `max_length` argument, where `generate()` generates up to `max_length` tokens, for example:
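A minimal sketch of that usage (the checkpoint and prompt are just placeholders):
```
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="tf")
# No time-based stopping criterion in TF: the output length is bounded by max_length.
output_ids = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
|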