repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 15,746 | closed | Add model specific output classes to PoolFormer model docs | # What does this PR do?
This PR adds two model-specific output classes (defined in `modeling_poolformer.py`) to the PoolFormer docs: `PoolFormerModelOutput` and `PoolFormerClassifierOutput`.
## Who can review?
@NielsRogge | 02-21-2022 04:55:37 | 02-21-2022 04:55:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,745 | closed | BertForSequenceClassification defines criterion at every forward pass | ## Environment info
I don't think any environment info will be helpful for this issue.
### Who can help
@LysandreJik, can you look into the code?
I just want anyone to have a look here https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1579
`loss_fct = CrossEntropyLoss()`
Inside the forward function of `BertForSequenceClassification`, an instance of `CrossEntropyLoss` is created.
This means that at every forward pass through the model we create a new instance of `CrossEntropyLoss`.
Isn't this a performance bug? | 02-20-2022 21:14:07 | 02-20-2022 21:14:07 | I'm planning to post a pull request if someone from the official contributors confirms that this is a bug,
because I'm still not sure whether this is a bug or my understanding is unclear.<|||||>Ok, I found a quick fix for this myself:
if we don't want to create a `CrossEntropyLoss` instance at every forward step, we can avoid passing labels to the forward function and then calculate the loss as usual from the `logits` in the model's output (a minimal sketch follows below). |
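A minimal sketch of the workaround described above (the checkpoint, inputs, and labels here are just placeholders):
```python
import torch
from torch.nn import CrossEntropyLoss
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Build the criterion once, outside the training loop
loss_fct = CrossEntropyLoss()

inputs = tokenizer("Hello, world!", return_tensors="pt")
labels = torch.tensor([1])

# Do NOT pass `labels`, so the model skips its internal loss computation
outputs = model(**inputs)
loss = loss_fct(outputs.logits.view(-1, model.config.num_labels), labels.view(-1))
```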
transformers | 15,744 | closed | Problems loading csv dataset in examples `run_summarization.py` | Hi,
the following lines in `run_summarization.py` have some issues.
https://github.com/huggingface/transformers/blob/2c2a31ffbcfe03339b1721348781aac4fc05bc5e/examples/pytorch/summarization/run_summarization.py#L344-L346
According to the datasets documentation a csv dataset is loaded by providing `data_files`. See https://huggingface.co/docs/datasets/loading.html#csv
This is not done here. AFAIK the code is not able to load a local dataset at all.
Furthermore, the dataset is always automatically loaded as a "train" dataset (which is the default).
So it is not possible to specify a validation or test dataset. | 02-20-2022 20:37:04 | 02-20-2022 20:37:04 | Hi @PhilipMay
This part of the code https://github.com/huggingface/transformers/blob/a63bd3675f3fa1a6154c8bf1d085c66eaea67e56/examples/pytorch/summarization/run_summarization.py#L347-L358
handles loading `csv` or `json` files. It also handles the train, validation, and test splits. Have you tried passing the `train_file` and `validation_file` options to the script?<|||||>Yes. That is right. I confused the two if branches. Sorry. |
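For reference, a minimal sketch of how local csv files are loaded with explicit splits via `datasets` (the file names here are placeholders; the script builds this mapping from the `--train_file`/`--validation_file`/`--test_file` options):
```python
from datasets import load_dataset

data_files = {
    "train": "train.csv",
    "validation": "validation.csv",
    "test": "test.csv",
}
# Each entry becomes its own split instead of everything landing in "train"
raw_datasets = load_dataset("csv", data_files=data_files)
```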
transformers | 15,743 | closed | changed documentation for Trainer.predict() | # Fix a documentation error in Trainer.predict
Fixes # (issue)
## Before submitting
- [*] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [*] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-20-2022 18:30:10 | 02-20-2022 18:30:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15743). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,742 | closed | Documentation error in Trainer.predict when output_hidden_states is True | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0.dev0
- Platform: macOS-11.4-x86_64-i386-64bit
- Python version: 3.9.8
- PyTorch version (GPU?): 1.10.2 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): sentence-transformers/all-mpnet-base-v2
The problem arises when using:
* [*] my own modified scripts: (give details below)
The tasks I am working on is:
* [*] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the script below, which is a standard Trainer loop, except that the model has `output_hidden_states=True`.
```
import numpy as np
import transformers
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification,
DataCollatorWithPadding,
TrainingArguments,
Trainer,
)
from datasets import load_dataset
dataset_name = "banking77"
data = load_dataset(dataset_name)
names = data["train"].features["label"].names
model_name = "sentence-transformers/all-mpnet-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
model_name, output_hidden_states=True, num_labels=len(names)
)
def preprocess_function(examples):
return tokenizer(examples["text"], truncation=True)
tokenized_data = data.map(preprocess_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
training_args = TrainingArguments(
output_dir="./results",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=5,
weight_decay=0.01,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_data["train"],
eval_dataset=tokenized_data["test"],
tokenizer=tokenizer,
data_collator=data_collator,
)
# slow on CPU, but never mind
trainer.train()
#
predictions = trainer.predict(tokenized_data["test"])
assert isinstance(predictions, transformers.trainer_utils.PredictionOutput)
assert isinstance(predictions.predictions, tuple)
assert isinstance(predictions.label_ids, np.ndarray)
assert isinstance(predictions.metrics, dict)
logits = predictions.predictions[0]
assert isinstance(logits, np.ndarray)
hidden_states = predictions.predictions[1]
assert isinstance(hidden_states, tuple)
```
## Expected behavior
The script runs without incident and shows that when `output_hidden_states=True`, `predictions.predictions` is a `tuple`, not an `ndarray`. The documentation of `Trainer.predict` currently says the result will be an ndarray, but it isn't, so the documentation of `Trainer.predict`
should finish with:
```
Returns: *NamedTuple* A namedtuple with the following keys:
- predictions (Union[`np.ndarray`,`Tuple`]): The predictions on `test_dataset`. Will be np.ndarray if
`output_hidden_states` is `False`, or 2-element tuple if `output_hidden_states` is `True`.
- label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some).
- metrics (`Dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained
labels).
```
| 02-20-2022 18:18:24 | 02-20-2022 18:18:24 | pull request #15743 provides the suggested documentation change.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,741 | closed | Fix undoing preprocessing step in summarization example | # What does this PR do?
Before, the input and target were re-read from the raw examples after the none-check preprocessing step, which undid that step and made it redundant. This was an error and is fixed in this commit.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-20-2022 14:33:03 | 02-20-2022 14:33:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,740 | closed | Fix minor comment typos | Fixes to comment typos | 02-20-2022 14:26:17 | 02-20-2022 14:26:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,739 | closed | Add compatibility for Postponed Evaluation of Annotations (PEP 563) | Hello,
The code says that it will add compatibility for Postponed Evaluation of Annotations ([PEP 563](https://www.python.org/dev/peps/pep-0563/)) when Python 3.9 is released (which already happened on 2020.10.5). Is there any plan to complete this?
https://github.com/huggingface/transformers/blob/2c2a31ffbcfe03339b1721348781aac4fc05bc5e/src/transformers/hf_argparser.py#L85-L90 | 02-20-2022 09:20:15 | 02-20-2022 09:20:15 | Hey! We don't have the bandwidth to do it right now, but we'd welcome contributions! Let me tag this as a first good issue, and let me know if you're interested in taking a stab at it!<|||||>I'm glad to help with that, maybe it'll take some time. I've never contributed here, so I'll try to follow CONTRIBUTING.md, post progress here, and submit a PR later; any feedback telling me whether I'm doing it right would be great. <|||||>According to [discussion here](https://bugs.python.org/issue39442) and the solution provided by [Pydantic](https://pydantic-docs.helpmanual.io/usage/postponed_annotations/), we may just call [typing.get_type_hints](https://docs.python.org/3.9/library/typing.html#typing.get_type_hints) on a dataclass to get the type of a field instead of relying on `field.type`.
Also, the `typing` module is still under development and thus changes notably across different versions of Python. Since Python 3.6 reached its end-of-life last year (https://endoflife.date/python), dropping support for Python 3.6 would be reasonable and would make this implementation much easier as well. There seems to be no plan for this (see also #15720).<|||||>I understand; as mentioned in the thread you linked, we're unlikely to drop support for Python 3.6 just yet, but we'll definitely keep in mind that this particular issue would be solved in a simpler manner were we to drop support for it.<|||||>I'm glad to help with that, maybe it'll take some time. Like [sparkanime.com](https://sparkanime.com/)<|||||>@rahimt420 Hello, thanks for the offer! However, I've already submitted #15795 to resolve this issue. It would be great if you could leave any suggestions there!
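A minimal sketch of the `typing.get_type_hints` approach (the dataclass and fields here are placeholders, not the actual `HfArgumentParser` code):
```python
from __future__ import annotations  # PEP 563: annotations are stored as strings

import dataclasses
import typing
from typing import Optional


@dataclasses.dataclass
class ExampleArgs:
    learning_rate: float = 5e-5
    output_dir: Optional[str] = None


# Under PEP 563, field.type is only the string "float" / "Optional[str]" ...
for field in dataclasses.fields(ExampleArgs):
    print(field.name, repr(field.type))

# ... while get_type_hints resolves those strings back into real type objects
print(typing.get_type_hints(ExampleArgs))
```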
transformers | 15,738 | closed | Can I train an XLM model from scratch with transformers? | How can I pretrain XLM with transformers? Any example? Please. | 02-20-2022 05:02:11 | 02-20-2022 05:02:11 | Hey @thomas-li-sjtu, you definitely can. I recommend that you check out [this HF blog post](https://huggingface.co/blog/how-to-train). Although the article doesn't use XLM, the process should roughly look the same.
Also, a gentle reminder to use the [discussion forum](https://discuss.huggingface.co) for inquiries. GitHub Issues is used for bug reports and feature requests!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,737 | closed | [WIP] Integrate OSLO for tensor parallelism support | # What does this PR do?
This PR integrates [OSLO](https://github.com/tunib-ai/oslo) in a similar fashion to DeepSpeed. OSLO enables tensor parallelism and kernel fusion, among others.
## To-do
- [ ] Tests
- [ ] Adding OSLO as dependency
- [ ] Review/discuss model save logic
## Who can review?
| 02-20-2022 02:26:30 | 02-20-2022 02:26:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15737). All of your documentation changes will be reflected on that endpoint.<|||||>To-do's for self-documentation:
- [ ] Revisit `should_save`
- [ ] Test with DDP and FairScale
- [ ] Think about how to toggle `merge_checkpoint`
- [ ] Debug DeepSpeed (model param size 1)
- [ ] Use `oslo_initialized` attribute to check if OSLO was initialized MPU<|||||>Unstale<|||||>Closing due to changes in OSLO's API as discussed internally. This PR will be superseded by a future PR once OSLO 3 is released and stable. |
transformers | 15,736 | closed | `XGLMForCausalLM` does not compute `position_ids` correctly when using `inputs_embeds` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu102 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj
## Information
Model I am using (Bert, XLNet ...): XGLM
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Try inference on an XGLM model when using `inputs_embeds` as the input method to the `forward` function of `XGLMForCausalLM`, i.e. run this code:
```python
# Patch create_position_ids_from_inputs_embeds so that we can see the
# position IDs it outputs
if "PATCHED" not in globals():
from transformers.models.xglm.modeling_xglm import XGLMSinusoidalPositionalEmbedding
original_create_position_ids = XGLMSinusoidalPositionalEmbedding.create_position_ids_from_inputs_embeds
def create_position_ids(*args, **kwargs):
position_ids = original_create_position_ids(*args, **kwargs)
print(position_ids)
return position_ids
XGLMSinusoidalPositionalEmbedding.create_position_ids_from_inputs_embeds = create_position_ids
PATCHED = True
# Try inference on an XGLM model
from transformers import XGLMTokenizer, XGLMForCausalLM
class CustomXGLMForCausalLM(XGLMForCausalLM):
def forward(self, *args, **kwargs):
if "input_ids" in kwargs and "inputs_embeds" not in kwargs:
kwargs["inputs_embeds"] = model.model.embed_tokens(kwargs.pop("input_ids")) * model.model.embed_scale
return super().forward(*args, **kwargs)
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
model = CustomXGLMForCausalLM.from_pretrained("facebook/xglm-564M")
prompt = (
"In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
"previously unexplored valley, in the Andes Mountains. Even more surprising to the "
"researchers was the fact that the unicorns spoke perfect English."
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids=input_ids,
do_sample=False,
min_length=60,
max_length=60,
)
```
## Expected behavior
The printed output should be as shown below because the position IDs should be increasing as the model generates tokens. The expected behavior is exhibited by other models such as GPT-Neo but not by this model.
```
tensor([[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53]])
tensor([[54]])
tensor([[55]])
tensor([[56]])
tensor([[57]])
tensor([[58]])
tensor([[59]])
tensor([[60]])
```
## Actual behavior
```
tensor([[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53]])
tensor([[2]])
tensor([[2]])
tensor([[2]])
tensor([[2]])
tensor([[2]])
tensor([[2]])
tensor([[2]])
``` | 02-20-2022 00:39:17 | 02-20-2022 00:39:17 | Good catch! Fix is here #15751 |
transformers | 15,735 | closed | `DebertaTokenizer` always assigns token type ID 0 | ## Environment info
- `transformers` version: 4.16.2
- Platform: Linux-5.15.13-051513-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): `microsoft/deberta-large`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run this code:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-large")
print(tokenizer("Hello", "World"))
```
It outputs:
```
{'input_ids': [1, 31414, 2, 10988, 2], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1]}
```
Even though I put in two sequences, all `token_type_ids` are 0.
## Expected behavior
The tokens from the second sequence should get type ID 1. `token_type_ids` should be `[0, 0, 0, 1, 1]`. | 02-19-2022 19:42:09 | 02-19-2022 19:42:09 | Looks like this is the change that introduced this behavior.
https://github.com/huggingface/transformers/commit/57c1749efabf5c86bcfd4e4e078567a63a7c8a81#diff-7ff4f35b72b8541520ea52c851b55bc2682da83e01e6e0ceeb5289f7dd2f0620R217
<|||||>Good catch! Would you like to open a PR to fix this?<|||||>I'll give this a try! |
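For comparison, a quick sketch of how BERT's tokenizer assigns segment IDs to a sequence pair, which is the behavior the issue expects from DeBERTa:
```python
from transformers import AutoTokenizer

bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The second sequence gets token type ID 1
print(bert_tokenizer("Hello", "World")["token_type_ids"])  # [0, 0, 0, 1, 1]
```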
transformers | 15,734 | closed | Self attention in T5 decoder does not work as expected | Hello!
I have been struggling with this problem for a while. I was trying to train T5 from scratch on an MLM task, using just 2 layers in the encoder and 2 layers in the decoder. Plotting the decoder's self-attention, I saw that the attention weights from the first epoch are not masked properly, where the max_seq_length of the masked sequence is 128. I trained this [tokenizer](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling#train-tokenizer-2) on wiki-103 and HotpotQA text, then built a masking PyTorch pipeline following the paper's steps, based on this [script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py). This [repo](https://github.com/Arij-Aladel/T5-Tasks) contains the reproducible code and this [folder](https://github.com/huggingface/transformers/issues/15734#issue-1144808323) contains the processed data and the checkpoints. To see the problem with self-attention masks in the decoder, please refer to this [notebook](https://github.com/Arij-Aladel/T5-Tasks/blob/main/T5-heatmap_MLM_test_128.ipynb) | 02-19-2022 18:01:41 | 02-19-2022 18:01:41 | @patrickvonplaten @patil-suraj @sgugger @stas00 <|||||>I have tracked the mask; it was generated normally. I even tried to decrease the [masking value](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/modeling_utils.py#L307) from -10000 to -10000000000. The issue is still there. I do not know where the problem is, but apparently it appears after summing the mask with the position bias and scores and finally applying the softmax [here](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/models/t5/modeling_t5.py#L513). <|||||>Note: I thought it was an overfitting problem, so I gradually increased the number of layers (4:4 = encoder:decoder, 6:6 = encoder:decoder). That mitigates the problem, but it still exists.<|||||>Hi @Arij-Aladel it would be awesome if you could post a short code snippet without much custom code here so we can reproduce easily. Also, for general questions like this please use the [forum](https://discuss.huggingface.co/) as we use issues for bug reports and feature requests. Thanks!<|||||>@patil-suraj Thanks for your reply. It is not a general question; it is a very specific question related to self-attention masking in the T5 decoder. No need to verify my code. The issue is in the T5 decoder's attention masking: it does not work properly, and I showed an example in the [notebook](https://github.com/Arij-Aladel/T5-Tasks/blob/main/T5-heatmap_MLM_test_128.ipynb).<|||||>@patrickvonplaten @patil-suraj @sgugger @stas00
It is easy to reproduce the results using the repo; all the steps to reproduce are listed there. I just want to know why the weights in the decoder's causal self-attention are not masked properly.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>https://github.com/huggingface/transformers/commit/ece762443e4269e50cc1a348838e63e4a499575d this explains a lot, thanks for fixing the problem
|
transformers | 15,733 | closed | TokenizerFast.from_file() is stuck when loading a large tokenizer.json with tokens added and "pre-trained" starting from an existing, trained model | ## Environment info
- OS: Macos Monterey, v12.2.1, Macbook Pro M1; the issue also occurred when the program has been executed using Ubuntu 20.04 LTS, hosted both on Linode and Paperspace
- `transformers` version: 4.16.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@SaulLu
## Information
Model I am using (Bert, XLNet ...): Distilbert
The problem arises when using:
* the official example scripts - details below
The tasks I am working on is:
* my own task or dataset - details below
## To reproduce
1. grab the attached tar containing the pair of files `tokenizer_config.json` and `tokenizer.json` causing the issue - [tokenizer_pretrained_w_additional_tokens.tar.gz](https://github.com/huggingface/transformers/files/8102655/tokenizer_pretrained_w_additional_tokens.tar.gz)
2. extract the archive
3. just call `AutoTokenizer.from_pretrained(<folder where the archive has been extracted>)`
## Expected behavior
* What is expected: the tokenizer is loaded in a reasonable amount of time (say 1 min)
* What happens: the code is executed without runtime errors, but after several minutes (> 10min) the program is still running, with a CPU core 100% busy, and no log is written to the console (please let me know if there is some switch that can enable debug logs)
* Notes:
* the last step that I have been able to follow via the PyCharm debugger is line 108 of the in use version of the `tokenization_utils_fast.py` file, that is just `fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)`
* from that point onwards PyCharm isn't able to "follow the code" - please note that I am NOT proficient in Python
* the tokenizer json files have been obtained by proceeding in the following way:
* I started from a [trained model for Italian available on the 🤗 Models Repo](https://huggingface.co/indigo-ai/BERTino)
* I have added a bunch of additional tokens to the tokenizer (as explained in the answers of [this Stackoverflow question](https://stackoverflow.com/q/71067376/445090)): note that the tokens are not "words", but let's say codes meaningful in my domain, and they amount to 125936
* I have executed an additional pre-training of the model with the new tokens: the training run successfully for 3 epochs with the default parameters of the [run_mlm.py script](https://raw.githubusercontent.com/huggingface/transformers/master/examples/pytorch/language-modeling/run_mlm.py)
* Then I moved the trained model with all its "ancillary" files to other machines, and executed the above code (with the ultimate aim to use the model+additional tokens for computing embeddings for the additional tokens)
Final note: it _might_ be ok for the tokenizer to take so much time to load, _but_ I would like to know:
* whether it is actually ok or it is a bug
* if it is bug, what can we do in order to gather further information - or, if the bug is known, where it has been tracked
* if it is NOT a bug:
* why it's taking so long
* how much time we might estimate to take to complete the loading
* what we can do in order to speed up the thing - changes to the code, switch to some other type of tokenizer etc | 02-19-2022 17:51:28 | 02-19-2022 17:51:28 | Some more context: I figured out how to a) enable logs b) switch to the "slow" (i.e. Python only) tokenizer - it is as simple as:
```python
AutoTokenizer.from_pretrained(…, use_fast=False, verbose=True)
```
That way:
* it's now clear what's happening when the CPU runs at 100% for so long (I expect that the behaviour of the "fast", Rust-based tokenizer is not dissimilar): it's just building a trie for fast token lookup
* I do not know whether this can be done in a more efficient way, but it has currently been running for 50 minutes and has added 18675 tokens out of a total of 125936, so it should take approx. 5 hours to load the tokenizer… How much faster is the "fast" tokenizer supposed to be?
@SaulLu in conclusion I do think that this is issue is NOT a bug. Do you think is there some way to speed up the whole thing (apart, of course, trimming my vocabulary)? Maybe is there some mean to build the trie just once and then load it from some cache in subsequent executions (be it the slow _or_ the fast tokenizer)?<|||||>I add another bit: it's unclear to me why this issue did not arise during the pre-training. Maybe the tokenizer used during the training phase does not use the trie? Sounds a bit weird…<|||||>_Maybe_ I found a thing: after the call of `AutoTokenizer.from_pretrained()`, [in this point](https://github.com/huggingface/transformers/blob/f87db5e412f561c089e70ab2b15bfa070c2b07f5/src/transformers/tokenization_utils_base.py#L1964) of `tokenization_utils_base.py`, the code loops through each additional token: so the trie is _re-built from scratch_ for every and each token (see [here](https://github.com/huggingface/transformers/blob/f87db5e412f561c089e70ab2b15bfa070c2b07f5/src/transformers/tokenization_utils.py#L445)) - while, on the contrary, it should have been built just once with _all_ the additional tokens (that it is what I do think happen when doing the pre-training).
@SaulLu do you think I am on the right track? Do you an idea of why the trie is rebuilt for every and each additional token?<|||||>I did a "quick&dirty" patch to the code, and it seems to work: I have removed the code from [this line](https://github.com/huggingface/transformers/blob/f87db5e412f561c089e70ab2b15bfa070c2b07f5/src/transformers/tokenization_utils_base.py#L1947) to this [other line](https://github.com/huggingface/transformers/blob/f87db5e412f561c089e70ab2b15bfa070c2b07f5/src/transformers/tokenization_utils_base.py#L1964) (from my understanding, this is a kind of check that the tokenizer status is "compatible" with the addition of new tokens), and replaced it with this line:
```python
tokenizer.add_tokens([token_sorted[0] for token_sorted in added_tok_encoder_sorted])
```
That way all the 125936 tokens are added in just 121 seconds (based on what I observed during the pre-training, I think the "fast" tokenizer would be way faster), and the rest of the code (loading the model and computing the embeddings) seems to run flawlessly (of course I will double check it later on).
@SaulLu does this patch make sense to you too? Do you think is it possible to create a proper patch and apply it to the fast tokenizer as well? <|||||>As a side note, I noticed that the additional tokens are written in the `tokenizer.json` file with a trailing new line character, as in
```json
{
"id": 31102,
"special": false,
"content": "urnexpltokenslegalref0\n",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": true
}
```
@SaulLu is this an expected (and harmless) behaviour, or a bug (for which, in case, I will open a different issue, of course)?<|||||>> @SaulLu does this patch make sense to you too? Do you think is it possible to create a proper patch and apply it to the fast tokenizer as well?
I have filed [an issue in the GitHub repo of the "fast" tokenizers](https://github.com/huggingface/tokenizers/issues/914) and I am working on that now (and found another related pitfall on the transformers side as well).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
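To make the loop-versus-batch difference discussed above concrete, a minimal sketch (the added tokens are placeholders):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
new_tokens = [f"domaintoken{i}" for i in range(1000)]

# One batched call lets the slow tokenizer rebuild its lookup structures once
tokenizer.add_tokens(new_tokens)

# whereas a per-token loop re-does that work on every call:
# for token in new_tokens:
#     tokenizer.add_tokens([token])
```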
transformers | 15,732 | closed | Deberta v2 code simplification | # What does this PR do?
This PR simplifies and fixes the code for the DeBERTa V2 disentangled attention bias calculation:
1. Removes a spurious subtraction of the type `x - x`, always resulting in 0
2. Fixes condition checking for the attention type and attention score calculation. In the current version, the additional check to see if `p2p` is in the attention type is performed at the wrong position (https://github.com/huggingface/transformers/blob/2c2a31ffbcfe03339b1721348781aac4fc05bc5e/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L781). The `p2c` attention is not used for `p2p`, but `c2p` is. Currently the execution would fail if the attention types include `p2p` but not `c2p`: the c2p_pos variable used https://github.com/huggingface/transformers/blob/2c2a31ffbcfe03339b1721348781aac4fc05bc5e/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L813 would not be defined at https://github.com/huggingface/transformers/blob/2c2a31ffbcfe03339b1721348781aac4fc05bc5e/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L772. This PR moves the check for the `p2p` attention flag to the right position and simplifies the `p2c` attention calculation.
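To illustrate the control-flow problem described in point 2, a minimal, self-contained sketch (names and values are placeholders, not the DeBERTa code):
```python
def attention_bias(attention_types):
    score = 0
    if "c2p" in attention_types:
        c2p_pos = 1          # only defined when "c2p" is requested
        score += c2p_pos
    if "p2p" in attention_types:
        score += c2p_pos     # fails when "p2p" is requested without "c2p"
    return score


try:
    attention_bias(["p2p"])
except UnboundLocalError as err:
    print(err)
```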
## Who can review?
@LysandreJik
@BigBird01 | 02-19-2022 09:15:42 | 02-19-2022 09:15:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15732). All of your documentation changes will be reflected on that endpoint.<|||||>Hello @LysandreJik , @BigBird01 ,
As discussed a couple of weeks back I wanted to reach out to see if any further work would be required on this PR.
Thank you!<|||||>LGTM, also pinging @anton-l as it affects SEW/SEW-D<|||||>Hello @LysandreJik , @anton-l
I was wondering if there is further work required on this PR?
Thank you!<|||||>Hi @guillaume-be, no further changes required from my side (SEW-D models), feel free to merge if everything else is ok 🙂 <|||||>Thanks for your PR, merging! |
transformers | 15,731 | closed | 🧼 NLP task guides | A clean commit of the NLP task guides #15564 😭 | 02-18-2022 22:42:25 | 02-18-2022 22:42:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,730 | closed | Add layer_idx to CrossAttention of GPT2 model | When I tried `EncoderDecoderModel` with a GPT2 model + `_upcast_and_reordered_attn`, the cross attention layer did not have `layer_idx`, so the following error occurred.
```
t2.py", line 240, in _upcast_and_reordered_attn
scale_factor /= float(self.layer_idx + 1)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
```
cc @patrickvonplaten @LysandreJik
| 02-18-2022 21:57:28 | 02-18-2022 21:57:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks good, thanks!
Once approved by patrickvonplaten/LysandreJik/sgugger, could you also apply the same change to `modeling_imagegpt.py`, please:
https://github.com/huggingface/transformers/blob/2c2a31ffbcfe03339b1721348781aac4fc05bc5e/src/transformers/models/imagegpt/modeling_imagegpt.py#L426<|||||>@ydshieh Sure. <|||||>Thanks! |
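A minimal sketch of the kind of change the PR title describes, i.e. passing `layer_idx` when the cross-attention module is built (the exact constructor arguments are an assumption, not a quote of the patch):
```python
from transformers import GPT2Config
from transformers.models.gpt2.modeling_gpt2 import GPT2Attention

config = GPT2Config(add_cross_attention=True)

# layer_idx is needed so _upcast_and_reordered_attn can scale by float(layer_idx + 1)
cross_attention = GPT2Attention(config, is_cross_attention=True, layer_idx=3)
```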
transformers | 15,729 | closed | [Test refactor 5/5] Build docker images | These docker images will be built on a daily basis to be re-used by the CI.
Additional docker images will be built for Torch versions in [1.4-1.10] and TensorFlow versions in [2.3-2.7]. | 02-18-2022 21:51:13 | 02-18-2022 21:51:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,728 | closed | [Test refactor 4/5] Improve the scheduled tests | The scheduled tests now leverage a docker container pre-built two hours earlier than this run by the CI.
It generates a job for each folder in the `tests` folder. | 02-18-2022 21:51:08 | 02-18-2022 21:51:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Very cool!<|||||>Looks good to me! Also great that all `generation` tests from `test_generation_utils.py` are run! The errors:
```
ImportError: cannot import name 'json' from 'itsdangerous' (/usr/local/lib/python3.8/dist-packages/itsdangerous/__init__.py)
```
will be fixed once the previous PRs are merged?<|||||>> Looks good to me! Also great that all `generation` tests from `test_generation_utils.py` are run! The errors:
>
> ```
> ImportError: cannot import name 'json' from 'itsdangerous' (/usr/local/lib/python3.8/dist-packages/itsdangerous/__init__.py)
> ```
>
> will be fixed once the previous PRs are merged?
Actually encountered this problem locally now as well -> think we need to pin the `itsdangerous` library: https://serverfault.com/questions/1094062/error-from-itsdangerous-import-json-as-json-importerror-cannot-import-name-j |
transformers | 15,727 | closed | [Test refactor 3/5] Notification service improvement | This improves the notification service in Slack to provide a more comprehensive view over the tests.
Previously:
<img width="493" alt="image" src="https://user-images.githubusercontent.com/30755778/154765757-8ea3f016-b2df-471b-b161-2a31e67effe2.png">
Now:
<img width="614" alt="image" src="https://user-images.githubusercontent.com/30755778/154765783-dc62a4ad-d85b-42b6-af1b-b9e3914a2b4e.png">
To do before merge:
- Update the slack channel reference so that it posts to the right channel
- Ensure compatibility with the `push` job that runs on each commit to `master` | 02-18-2022 21:51:04 | 02-18-2022 21:51:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Lysandre, you're still summing up one+many on top of the new preview (categories) - this is confusing/misleading since most of the time it's the same error that happens on both setups and shouldn't be counted twice.
and why not do the exact same format for the categories? it'd be much easier to read if it's the same format for all groups and sub-groups, no?
and I'd still format it as:
```
1 | 2 | name1
5 | 6 | name2
```
which is much less noisier, but then I don't know how you format it - you said it was complicated.
if the header isn't too much work to add then:
```
1 | 2 | title
--|---|-------
1 | 2 | name1
5 | 6 | name2
```
<|||||>Formatting it as you offer isn't complicated, it was using pandas that I wasn't too keen on. I personally prefer the format above but it's probably because I have seen it too much by now :) Let's hear what others think.
And indeed, will update the categories at the top.<|||||>The format isn't too important to me - I'll get used to either way quickly I think :-)
If I would have to choose, I'd prefer @stas00 solution though as well - would be nice to include the "single-gpu | multi-gpu" header every time <|||||>I have updated the notification service to output the following format:
<img width="635" alt="image" src="https://user-images.githubusercontent.com/30755778/155244600-eb95b042-9f3a-4282-b505-16468f93fa68.png">
|
transformers | 15,726 | closed | [Test refactor 2/5] Tests fetcher | This PR adapts the tests fetcher and the `check_repo` script to behave nicely with the subfolders. | 02-18-2022 21:50:59 | 02-18-2022 21:50:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,725 | closed | [Test refactor 1/5] Per-folder tests reorganization | This PR separates the test files in independant folders so that testing may be broken down in smaller parts. The test files for each model are moved in their corresponding folder (e.g., `bert`, `bart`, `plbart`, ...), and others are moved to descriptive directories so that no test file is at the root of the test folder except files to be reused in other test files (mixins, common tests).
Such directories are:
- `benchmark`
- `deepspeed`
- `extended`
- `fixtures`
- `generation`
- `onnx`
- `optimization`
- `pipelines`
- `sagemaker`
- `trainer`
- `utils`
Additionally, this PR does the following:
- It moves the `conftest.py` file so that it is at the root of the `transformers` repository. This is because we're expecting it to be shared across the `tests` folder and the `examples` folder as both make use of the `make-reports` utility. However, it cannot be done if `conftest.py` is in the `tests` folder as `examples` is not in its tree/children trees, as `conftest.py` is a [per-directory plugin](https://docs.pytest.org/en/latest/how-to/writing_plugins.html#conftest-py-local-per-directory-plugins).
- The files `test_examples.py` are renamed to contain the name of the framework within them as [no two files may have the same name in a `pytest` run, even if these are in held in different folders](https://github.com/pytest-dev/pytest/issues/3151).
- The `TF_FORCE_GPU_ALLOW_GROWTH` variable is set to `1` in the test so as to prevent TensorFlow from taking up the entire GPU memory. TensorFlow and PyTorch tests may run alongside each other in this setup.
- Each folder is now a submodule, as most of them require access to files that are only in the parent folder. Having each folder be a submodule (having an `__init__.py` file) allows that. | 02-18-2022 21:50:51 | 02-18-2022 21:50:51 | conftest needs edit if you are moving it, this part:
```
git_repo_path = abspath(join(dirname(dirname(__file__)), "src"))
sys.path.insert(1, git_repo_path)
```
it is looking for the root dir there.
the new code should be:
```
git_repo_path = abspath(join(dirname(__file__), "src"))
sys.path.insert(1, git_repo_path)
```
I have just tested that it still works if moved outside of `tests`. I didn't know it did. Cool.
It's a bit of an unintuitive placement, but if you're saying it has to move, then it has to move.
<|||||>I noticed a few days ago our transformers + DS integration tests were failing in our CI. I bisected the failure to this PR.
https://github.com/microsoft/DeepSpeed/runs/5365118440?check_suite_focus=true
I am able to quickly reproduce the error with this single test, but it happens with several of them: `TORCH_EXTENSIONS_DIR=./torch-extensions RUN_SLOW=1 pytest -s --color=yes --durations=0 --verbose tests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_clm_0_zero2`

I think the python environment in the sub-process is getting messed up? @stas00 I know you are super busy with big-science right now but when you have a moment can you take a look?
<|||||>Thank you for the heads up, Jeff.
Our CI is getting a complete revamp to support ever-growing number of models and I think the deepspeed tests were left behind as I wasn't quite around.
I can reproduce the problem, let me see how I can fix it.
<|||||>Thanks Stas, that's great. I have done some further digging and i think i know the core of the issue. I think there's a name conflict between the `deepspeed` folder at `transformers/tests/deepspeed` and the real `deepspeed` package. I changed the test slightly to run my own test file and printed the deepspeed path and see this:
```
Running: python /root/foo.py
stdout: <module 'deepspeed' from '/workspace/DeepSpeed/transformers/tests/deepspeed/__init__.py'>
```
the contents of foo.py are:
```python
import deepspeed
print(deepspeed)
```
The correct path of `deepspeed` on this box if you run `foo.py` outside this test environment is:
```
<module 'deepspeed' from '/opt/conda/lib/python3.7/site-packages/deepspeed/__init__.py'>
```<|||||>Yes, something has changed in the paths.
I can run the training command just fine alone from the console, but it fails when run from inside `execute_subprocess_async`
I'm on top of it, Jeff. Will report back as soon as I have something working.<|||||>I think it's because `tests/deepspeed/__init__.py` was added. now it thinks it's the package.<|||||>I think there are a lot more issues there to fix as files got moved around, but the first one should get fixed by
https://github.com/huggingface/transformers/pull/15881
I will see what other problems are meanwhile.<|||||>OK, I think I got them all fixed. At least they all now pass on my machine.
I will merge as soon as CI has completed, then we can re-check your side.<|||||>The CI complains about something unrelated to my PR, but I don't want to break master, so waiting for someone who knows to review. In interim we can merge this if it works: https://github.com/microsoft/DeepSpeed/pull/1803 |
transformers | 15,724 | closed | Need more understanding of the function: get_visual_embeddings(image_path) | Dear Developers
I was looking through the code for image embedding and found this section of the code in the visual_bert:
# this is a custom function that returns the visual embeddings given the image path
visual_embeds = get_visual_embeddings(image_path)
This is a user-defined function, but I want to know which existing image embedding processes should work with visual_bert.
Some suggestions will be helpful.
Thanks
Dheeman | 02-18-2022 18:16:39 | 02-18-2022 18:16:39 | Hey @dheeman00!
You should check out the following complete example for a fully working pipeline with VisualBERT: https://github.com/huggingface/transformers/tree/master/examples/research_projects/visual_bert
cc @gchhablani<|||||>Hey @LysandreJik
I looked into the repo. But I think I found a better example that can be helpful for others.
https://huggingface.co/docs/transformers/model_doc/deit
DeiT directly produces a vector representation for a given image. <|||||>cc @NielsRogge too<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
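A minimal sketch of getting such a vector with DeiT (the image path is a placeholder; this is just one way to obtain an image embedding, not the only pipeline VisualBERT supports):
```python
from PIL import Image
from transformers import DeiTFeatureExtractor, DeiTModel

feature_extractor = DeiTFeatureExtractor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = DeiTModel.from_pretrained("facebook/deit-base-distilled-patch16-224")

image = Image.open("example.jpg")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Use the [CLS] token's hidden state as a single vector for the image
embedding = outputs.last_hidden_state[:, 0]
```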
transformers | 15,723 | closed | [WIP] [doc] performance/scalability revamp | moved from https://github.com/huggingface/transformers/pull/15213 so that we get the doc generator working
XXX: The previous PR has comments/suggestions that need to be integrated here
-----------------
@lvwerra and I are working on a massive performance/scalability docs revamp:
So the rough plan is to make custom plans for each of the combinations `[inference|training] * [1 gpu|many gpus|cpu]` so that it's very easy for the user to follow the instructions that are specific to their needs.
So the proposed doc layout is:
* performance.mdx (main entry point)
* perf_infer.mdx
- perf_infer_cpu.mdx
- perf_infer_gpu_many.mdx
- perf_infer_gpu_one.mdx
* perf_train.mdx
- perf_train_gpu_many.mdx
- perf_train_gpu_one.mdx
* scalability.mdx (rename from parallelism.mdx) (XXX: to do)
See the PR's changes for a rough layout of the content.
One big question is this: At the moment everything is pytorch-centric, as we don't have any info on tf/flax. Down the road we will either inject tf/flax-specific instructions into the current docs, or perhaps it'd be better to have dedicated docs for pt/tf/flax. It'd help a lot to decide ahead of time to avoid document renaming and potentially breaking links. If we plan to have these PT-specific perhaps let's embed `_pt` in the filenames?
@lvwerra
| 02-18-2022 17:36:35 | 02-18-2022 17:36:35 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15723). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @stas00, first stab at the single GPU section. The text is still WIP but you could have look at the sections and rough content to see if you agree.
Looking at the docs as a whole again here a few thoughts:
- Instead of dividing the whole section into speed and memory related techniques I thought an alternative could be to have a summary table at the beginning or end where the memory and speed impact of all methods is described. Could also help a user navigate the doc better.
- In the previous PR we discussed where to discuss common concepts (first time they occurred vs. in the performance docs). I am currently leaning towards the former. The main thing that's currently in the `performance.mdx` and I am not sure, yet, where I would put it is the hardware discussion which is quite different to everything else we are doing which makes me think that maybe we can create a dedicated document `hardware.mdx` where we could also expand a bit to other accelerators (TPU/IPUs for example).
- If we take that road the `performance.mdx` doc is again quite empty and I think we could move the content from `perf_infer.mdx` and `perf_train.mdx` to this document and then have a shared entry point and remove `perf_infer.mdx` and `perf_train.mdx`.
If you are happy with the structure I could start outlining the `performance.mdx` and polishing the single-GPU section while you could for example work on the multi-GPU training content.
What do you think?<|||||>Thank you for taking a stab at redesign, @lvwerra
It's very clear that we have fundamentally conflicting visions of how the performance documentation should be laid out. So it'd be wasteful to spend energy to try to come up with something that resonates with both of us.
I don't think this project can have 2 masterminds, and since I'm currently busy with BigScience training and we don't want this to fall between the cracks, I propose that if this works for you, please take over and proceed in an unimpeded way that sounds good to you, @lvwerra.
Once you have completed the restructuring and added what you know and you want me to fill in some gaps that I hopefully know to fill in please let me know and I will do so. or alternatively do tell me what sections are missing and I will work on those in free format and you can then adapt them to the structure you end up choosing.
<|||||>Hi @LysandreJik and @sgugger,
I revamped the documentation a bit and reduced the scope from all the documents @stas00 lined out to just the `performance.mdx` and the training on single/multi-GPU so we can converge on those before adding the others, which also prevents a monster PR. For some reason the preview of the docs still does not work - if you have any idea, let me know.
To guide you a bit in the review:
- `performance.mdx` should be the entry point and give an overview
- `perf_train_gpu_single.mdx` shows all the tricks to efficiently train large models on a single GPU (gradient accumulation/checkpointing, optimizers, DeepSpeed Zero, data loaders etc.)
- `perf_train_gpu_many.mdx` shows all the tricks to efficiently train large models on multiple GPUs (mostly parallelism strategies: data, tensor, pipeline parallelism)
- `perf_hardware.mdx` is where I put some tips and tricks about custom hardware setups
In subsequent PRs we can:
- add more TensorFlow examples
- add TPU training guide
- add inference side
- add guides for train/inference on specialized hardware (Optimum)
This is the plan for the full documentation:
<img width="1177" alt="perf_overview" src="https://user-images.githubusercontent.com/8264887/163152248-1565d6bd-2e2f-4626-897b-5a1d67f6116b.png">
Let me know what you think about style and content of the documentation. There are still a few rough edges but I think it is good for a first review. <|||||>The preview can't work if you don't rebase on master to have the new doc structure (everything nested in an "en" subfolder)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @sgugger it worked!<|||||>@lvwerra, I rebased and solved the conflict introduced by https://github.com/huggingface/transformers/pull/16860 - fixing it at its new file the content was moved to.<|||||>@stas00 thank you!
@sgugger @LysandreJik would you like to take a look at the PR? I integrated the previous suggestion.<|||||>@sgugger are you happy to merge this?<|||||>Yes, LGTM :-) |
transformers | 15,722 | closed | Revert changes in logit size for semantic segmentation models | # What does this PR do?
This PR reverts the changes made in the size of the returned logits of semantic segmentation models made in #15469 because it leads in some situations to two consecutive resizes:
- one to resize the output to the same size of the input
- one to resize the output to the size of the original image
Two consecutive interpolations with nearest neighbors are a bad idea, and users will get a better result by doing a single interpolation to get the segmentation mask to the size they want.
This restores backward compatibility, but still introduces a new argument `resize_logits` for situations where the user wants the logits the same size as the `pixel_values` passed to the model. | 02-18-2022 15:32:24 | 02-18-2022 15:32:24 | _The documentation is not available anymore as the PR was closed or merged._ |
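For context, a minimal sketch of resizing logits in a single step to whatever target size is needed (the logits and sizes are placeholders; the interpolation mode is up to the user):
```python
import torch

logits = torch.randn(1, 19, 128, 128)  # (batch, num_labels, height/4, width/4)

# One interpolation straight to the original image size, instead of two chained resizes
upsampled = torch.nn.functional.interpolate(logits, size=(512, 512), mode="bilinear", align_corners=False)
segmentation_map = upsampled.argmax(dim=1)
```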
transformers | 15,721 | closed | Add missing PLBart entry in README | # What does this PR do?
This PR fixes the missing entry for PLBart (See #13269) in the index.
@sgugger @patil-suraj @patrickvonplaten @LysandreJik | 02-18-2022 15:24:24 | 02-18-2022 15:24:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I don't see the change in the main README. Also it's not part of the doc, so you have to use the full link to the master doc. |
transformers | 15,720 | closed | Drop support for Python 3.6 | Drop support for Python 3.6 as it is EOL | 02-18-2022 14:10:49 | 02-18-2022 14:10:49 | Why deliberately drop support? If people are stuck on 3.6 for whatever reason it will be nice for them to be able to use it as long as it lasts.<|||||>Python 3.6 is EOL now. Do we still want to retain support for it?<|||||>> Why deliberately drop support? If people are stuck on 3.6 for whatever reason it will be nice for them to be able to use it as long as it lasts.
But this will also make the development & maintenance of the library harder. Many libraries drop support for Python 3.6.<|||||>Thanks for your request! As long as there isn't a very impactful feature missing or compatibility issue with our current setup, we're unlikely to drop support for Python 3.6 just yet as we aim for maximum compatibility across versions.<|||||>Thanks @LysandreJik for your reply on this. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Python 3.6 will be dropped over the next few weeks: https://github.com/huggingface/transformers/issues/16832<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This has been addressed. |
transformers | 15,719 | closed | style_doc handles decorators in examples | # What does this PR do?
This fixes the issue encountered in #15564 and make the `style_doc` util handle the decorators properly. | 02-18-2022 13:37:07 | 02-18-2022 13:37:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,718 | closed | Fix SiluActivation | # What does this PR do?
This PR fixes the `SiLUActivation` class which was missing a call to the super init. It also removed unnecessary inits that just call the superclass. | 02-18-2022 10:25:08 | 02-18-2022 10:25:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,717 | closed | revert temporary addition to test next version of CLIPTokenizerFast | # What does this PR do?
This PR aims to change the repository used to test `CLIPTokenizerFast`.
As part of the `CLIPTokenizerFast` bug fix in this PR #15067, instead of testing the tokenizer of the repo [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) we had made a temporary change to test the one in [SaulLu/clip-vit-base-patch32](https://huggingface.co/SaulLu/clip-vit-base-patch32) that had the new version of the `tokenizer.json` file.
As discussed, the `tokenizer.json` file should be transferred from the [SaulLu/clip-vit-base-patch32](https://huggingface.co/SaulLu/clip-vit-base-patch32) repo to the [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) repo. Once this is done the testing of this PR should pass.
For the record, it was discussed to introduce file versioning and therefore to add a versioned file `tokenizer_vxxx.json` in the [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) repo. Following the last exchanges this solution has been put aside because 1) the fast version of the tokenizer is not used in `CLIPFeatureExtractor` and 2) it is a bug fix.
**Edit:** the new `tokenizer.json` has been added to [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) and now all the tests pass!
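A quick way to sanity-check the updated file locally is to compare the slow and fast tokenizers on the same text (a sketch; with the new `tokenizer.json` the two outputs should agree):
```python
from transformers import CLIPTokenizer, CLIPTokenizerFast

slow = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
fast = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")

text = "a photo of a cat"
print(slow(text)["input_ids"])
print(fast(text)["input_ids"])  # expected to match the slow output
```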
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-18-2022 09:56:06 | 02-18-2022 09:56:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@SaulLu We should also update this for other CLIP checkpoints too, no ?
https://huggingface.co/openai/clip-vit-base-patch16
https://huggingface.co/openai/clip-vit-large-patch14<|||||>Yes! I didn't know if the tokenizers are exactly the same as [clip-vit-base-patch32](https://huggingface.co/SaulLu/clip-vit-base-patch32); if that's not the case, we need to build them from the slow versions.
I think we need to check all the tokenizer that are supposed to be of `ClipTokenizer` type on the Hub too<|||||>@patil-suraj , I've added the new `tokenizer.json` file to https://huggingface.co/openai/clip-vit-base-patch16 and to https://huggingface.co/openai/clip-vit-large-patch14 :slightly_smiling_face: <|||||>Thanks a lot @SaulLu ! |
transformers | 15,716 | closed | Converting mBART to ONNX format | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.2
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.2+cu102 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
Models:
- MBART: @patil-suraj
## To reproduce
I'm trying to export mBART50 to ONNX on a Colab Notebook.
```
pip install transformers[onnx] sentencepiece -q
pip install torch --upgrade -q
python -m transformers.onnx --model=facebook/mbart-large-50 --feature seq2seq-lm-with-past onnx/
```
I'm getting the following error:
> Downloading: 100% 531/531 [00:00<00:00, 496kB/s]
> Downloading: 100% 1.38k/1.38k [00:00<00:00, 1.20MB/s]
> Downloading: 100% 4.83M/4.83M [00:00<00:00, 48.2MB/s]
> Downloading: 100% 649/649 [00:00<00:00, 456kB/s]
> Downloading: 100% 2.28G/2.28G [00:47<00:00, 51.6MB/s]
> Using framework PyTorch: 1.10.2+cu102
> Overriding 1 configuration item(s)
> - use_cache -> True
> /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.
> warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
> /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.
> warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
> /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_mbart.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
> /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_mbart.py:229: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> if attention_mask.size() != (bsz, 1, tgt_len, src_len):
> /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_mbart.py:260: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
> /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_mbart.py:889: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> if input_shape[-1] > 1:
> /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_mbart.py:91: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> if past_key_values_length > 0:
> Validating ONNX model...
> tcmalloc: large alloc 1073741824 bytes == 0x563a06b34000 @ 0x7f3d33962b6b 0x7f3d33982379 0x7f3c45cf91dc 0x7f3c461c75a4 0x7f3c4603d385 0x7f3c4629e225 0x7f3c4602fd99 0x7f3c46027e76 0x7f3c45cc088e 0x7f3c45c762b8 0x7f3c45c41252 0x56388cae6f78 0x56388cb5aa6d 0x56388cb5502f 0x56388cae7aba 0x56388cb56108 0x56388cb5502f 0x56388cae836c 0x56388cb297b9 0x56388cb266d4 0x56388cae6c29 0x56388cb5ae61 0x56388cae79da 0x56388cb55eae 0x56388cae79da 0x56388cb55eae 0x56388cb5502f 0x56388cb54d43 0x56388cb531b0 0x56388cae6229 0x56388cae6120
> tcmalloc: large alloc 2147483648 bytes == 0x563a5c1dc000 @ 0x7f3d33962b6b 0x7f3d33982379 0x7f3c45cf91dc 0x7f3c461c75a4 0x7f3c4603d385 0x7f3c4629e225 0x7f3c4602fd99 0x7f3c46027e76 0x7f3c45cc088e 0x7f3c45c762b8 0x7f3c45c41252 0x56388cae6f78 0x56388cb5aa6d 0x56388cb5502f 0x56388cae7aba 0x56388cb56108 0x56388cb5502f 0x56388cae836c 0x56388cb297b9 0x56388cb266d4 0x56388cae6c29 0x56388cb5ae61 0x56388cae79da 0x56388cb55eae 0x56388cae79da 0x56388cb55eae 0x56388cb5502f 0x56388cb54d43 0x56388cb531b0 0x56388cae6229 0x56388cae6120
> -[✓] ONNX model output names match reference model ({'present.5.encoder.key', 'present.4.decoder.key', 'present.6.decoder.value', 'present.0.decoder.value', 'present.8.decoder.value', 'present.11.encoder.key', 'present.0.encoder.key', 'present.11.encoder.value', 'present.3.encoder.key', 'present.0.encoder.value', 'present.7.decoder.key', 'present.5.encoder.value', 'present.0.decoder.key', 'present.4.encoder.key', 'present.2.decoder.value', 'present.10.encoder.key', 'present.3.decoder.value', 'present.11.decoder.value', 'present.6.encoder.value', 'present.5.decoder.key', 'present.8.encoder.value', 'present.10.decoder.value', 'present.5.decoder.value', 'present.7.decoder.value', 'present.2.encoder.key', 'present.10.encoder.value', 'present.1.decoder.key', 'present.9.decoder.value', 'present.9.decoder.key', 'present.10.decoder.key', 'logits', 'present.2.encoder.value', 'present.4.decoder.value', 'present.8.decoder.key', 'present.1.encoder.key', 'present.9.encoder.value', 'present.1.decoder.value', 'present.4.encoder.value', 'present.11.decoder.key', 'present.1.encoder.value', 'present.6.encoder.key', 'present.3.encoder.value', 'present.7.encoder.value', 'present.8.encoder.key', 'present.2.decoder.key', 'present.3.decoder.key', 'present.7.encoder.key', 'present.6.decoder.key', 'present.9.encoder.key'})
> - Validating ONNX Model output "logits":
> -[✓] (2, 2, 250054) matches (2, 2, 250054)
> -[x] values not close enough (atol: 1e-05)
> Traceback (most recent call last):
> File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
> "__main__", mod_spec)
> File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 77, in <module>
> main()
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 70, in main
> validate_model_outputs(onnx_config, tokenizer, model, args.output, onnx_outputs, args.atol)
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 230, in validate_model_outputs
> "Outputs values doesn't match between reference model and ONNX exported model: "
> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 3.5762786865234375e-05
Is there a workaround/fix?
| 02-18-2022 09:34:15 | 02-18-2022 09:34:15 | cc @lewtun <|||||>Hey @rumeshmadhusanka thanks for raising this issue! The error indicates that the numerical agreement between the raw and ONNX models is larger than ~3.6e-5.
We try to pick good defaults in the library, but in some cases you get this disagreement. You can tweak the required agreement by setting the `--atol` argument in the CLI, e.g.
```bash
python -m transformers.onnx --model=facebook/mbart-large-50 --feature seq2seq-lm-with-past --atol=5e-5 onnx/
```
We find that agreement in the range 1e-5 to 1e-3 is generally fine - hope that helps! |
transformers | 15,715 | closed | Trainer can't estimate SpeechEncoderDecoder tokens for `floating_point_ops` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.2
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.9
- Tensorflow version (GPU?): NA
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
### Who can help
Maybe @sgugger?
## Information
Model I am using (Bert, XLNet ...): SpeechEncoderDecoder
Training the model results in `Could not estimate the number of tokens of the input, floating-point operations will not be computed` warnings due to [this function](https://github.com/huggingface/transformers/blob/240cc6cbdc07fc1a7c103d71cfe38d5e60e489e4/src/transformers/modeling_utils.py#L378) being unable to find the correct `self.main_input_name`. For `SpeechEncoderDecoderModel`s, the default name is [`inputs` ](https://github.com/huggingface/transformers/blob/v4.16.2/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L183) even though the encoder should either accept `input_values` or `input_features`. The function expects `inputs` to be in the `input_dict` but could only find the main input name of the encoder.
## To reproduce
Steps to reproduce the behavior:
1. Train a `SpeechEncoderDecoderModel` using the `Seq2SeqTrainer` and monitor console output.
## Expected behavior
The function should be able to determine the correct `main_input_name` of a `SpeechEncoderDecoderModel`. Presumably that of the encoder. I'm not sure if this applies to `VisionEncoderDecoderModel`s but I suspect it would.
| 02-18-2022 09:15:34 | 02-18-2022 09:15:34 | I don't think this can be solved since the model input name is not always the same. You can just ignore the warning.
cc @patrickvonplaten for the model input name, to see if there is a way to fix it.<|||||>@OllieBroadhurst,
I think we could solve this if we rename `input_values` to `inputs` in the speech-encoder-decoder training script here: https://github.com/huggingface/transformers/blob/2f2fefd6afa496ea72312ff38686706717f8aa22/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L197
Could you try this out? Think this is the cleanest way of solving it<|||||>cc @sanchit-gandhi <|||||>Nice idea @patrickvonplaten! I have my own collate function anyway so it's a simple change. Of course you'll be changing `input_features` to `inputs` if you're using `Speech2Text` (as opposed to `input_values`).
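For reference, the rename inside a custom collator could look roughly like this (a sketch; the processor attributes and key names are assumptions about a typical Wav2Vec2-style setup):
```python
from dataclasses import dataclass
from typing import Any, Dict, List

import torch


@dataclass
class RenamingSpeechCollator:
    processor: Any  # e.g. a Wav2Vec2Processor; treated as a placeholder here

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        audio = [{"input_values": f["input_values"]} for f in features]
        labels = [{"input_ids": f["labels"]} for f in features]

        batch = self.processor.feature_extractor.pad(audio, return_tensors="pt")
        labels_batch = self.processor.tokenizer.pad(labels, return_tensors="pt")

        # mask padding so it is ignored by the loss
        batch["labels"] = labels_batch["input_ids"].masked_fill(
            labels_batch["attention_mask"].ne(1), -100
        )

        # rename so the key matches SpeechEncoderDecoderModel.main_input_name ("inputs")
        batch["inputs"] = batch.pop("input_values")
        return batch
```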
|
transformers | 15,714 | closed | Misplaced sentence in https://api-inference.huggingface.co/docs/curl/html/detailed_parameters.html#question-answering-task | "a string to be translated in the original languages" appears in the description of the question answering documentation. | 02-18-2022 08:33:15 | 02-18-2022 08:33:15 | Thank you for flagging this, @ToonTalk. I've forwarded the issue internally and I will let you know of the outcome :)<|||||>Thank you for reporting this ! @ToonTalk <|||||>(Fixed in the docs) |
transformers | 15,713 | open | add image2text generation | add image2text generation | 02-18-2022 06:14:00 | 02-18-2022 06:14:00 | Have you tried [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder)? |
transformers | 15,712 | closed | Error while finetuning XLM-R on Tensorflow-Keras | Hello,
I am trying to fine-tune XLM-RoBERTa for text classification on TensorFlow/Keras. I have used the XLMRobertaTokenizer base to tokenize the inputs. The vocab_size in the config file for the XLM-R model is 250002, while for RoBERTa it is 50265. But while training I am getting the error below. Can anyone help me solve this?
I am using a Google Colab GPU for fine-tuning. The TensorFlow version is 2.7.0.
**InvalidArgumentError: indices[2,268] = 124030 is not in [0, 50265) [[node tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather (defined at /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_tf_roberta.py:149) ]] [Op:__inference_train_function_82886]
Errors may have originated from an input operation. Input Source operations connected to node tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather: In[0] tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather/resource:
In[1] IteratorGetNext (defined at /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:866)** | 02-18-2022 05:54:32 | 02-18-2022 05:54:32 | Hi @Komalrathod55 👋 Can you share the entire script (or give me a link to your colab), so I can have the entire context to check your issue? Thank you :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,711 | closed | Fix `HfDeepSpeedConfig` argument in `Trainer` | # What does this PR do?
`HfDeepSpeedConfig` accepts a dictionary or a path to a `.json` file containing DeepSpeed configurations. However, `self.args` is a `TrainingArguments` object. This PR replaces `self.args` with `self.args.deepspeed`.
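For illustration, this is the kind of argument `HfDeepSpeedConfig` expects (a sketch; the file name is a placeholder and DeepSpeed needs to be installed):
```python
from transformers.deepspeed import HfDeepSpeedConfig

# a path to a DeepSpeed json config file ...
ds_config = HfDeepSpeedConfig("ds_config.json")
# ... or the configuration as a plain dict
ds_config = HfDeepSpeedConfig({"zero_optimization": {"stage": 2}, "train_batch_size": 8})
```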
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 | 02-18-2022 03:55:13 | 02-18-2022 03:55:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>With DS please remember the live CI doesn't run any of DS tests as it doesn't have GPUs. So they only run nightly, so it's best to run the tests locally if you have at least one gpu.
```
RUN_SLOW=1 pytest tests/deepspeed
```
I did run the tests and they all pass, so all is good. (Clearly this feature hasn't been tested, but as I said I can't think of how it could be tested in this particular situation).<|||||>Thank you for the comments and review, @stas00. It makes sense that the CI doesn't run DeepSpeed tests every time.
I can't think of a way to test this right off the bat either, but I'll open an issue and/or a PR if I get any ideas while integrating OSLO! |
transformers | 15,710 | closed | No self-hosted | Removes the need to have a self-hosted machine to build the dev documentation. | 02-18-2022 00:55:16 | 02-18-2022 00:55:16 | @coyotte508 I'm getting this error, do you have an idea where it might come from?
```
Installing node dependencies
Building HTML files. This will take a while :-)
Traceback (most recent call last):
File "/usr/local/bin/doc-builder", line 33, in <module>
sys.exit(load_entry_point('doc-builder', 'console_scripts', 'doc-builder')())
File "/__w/transformers/transformers/doc-builder/src/doc_builder/commands/doc_builder_cli.py", line 39, in main
args.func(args)
File "/__w/transformers/transformers/doc-builder/src/doc_builder/commands/build.py", line 145, in build_command
subprocess.run(
File "/usr/local/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['npm', 'run', 'build']' returned non-zero exit status 134.
```<|||||>> @coyotte508 I'm getting this error, do you have an idea where it might come from?
>
> ```
> Installing node dependencies
> Building HTML files. This will take a while :-)
> Traceback (most recent call last):
> File "/usr/local/bin/doc-builder", line 33, in <module>
> sys.exit(load_entry_point('doc-builder', 'console_scripts', 'doc-builder')())
> File "/__w/transformers/transformers/doc-builder/src/doc_builder/commands/doc_builder_cli.py", line 39, in main
> args.func(args)
> File "/__w/transformers/transformers/doc-builder/src/doc_builder/commands/build.py", line 145, in build_command
> subprocess.run(
> File "/usr/local/lib/python3.8/subprocess.py", line 516, in run
> raise CalledProcessError(retcode, process.args,
> subprocess.CalledProcessError: Command '['npm', 'run', 'build']' returned non-zero exit status 134.
> ```
Yes, out of memory
Setting the env `NODE_OPTIONS` as in https://github.com/huggingface/transformers/blob/master/.github/workflows/build_documentation.yml#L93 should help<|||||>@sgugger maybe there's a way to get the npm error output with `doc-builder`?
Here the dev documentation build seems to fail on `npm ci` but we don't get more details<|||||>@coyotte508 if you comment out [these lines](https://github.com/huggingface/doc-builder/blob/3153b60a541532daf218a976199004776699553c/src/doc_builder/commands/build.py#L147-L148), you'll see all the logs (including warning, which generates a lot of noise, maybe we can turn off wanrings from svletekit side)<|||||>build-dev-documentation didn't seem to run for that last commit (maybe due to the PR conflict? =/)<|||||>It ran, and it worked! Thank you @mishig25 and @coyotte508 for your help. |
transformers | 15,709 | closed | [WIP] No self-hosted | null | 02-18-2022 00:38:43 | 02-18-2022 00:38:43 | |
transformers | 15,708 | closed | [WIP] No self-hosted | null | 02-18-2022 00:31:05 | 02-18-2022 00:31:05 | |
transformers | 15,707 | closed | [WIP] No self-hosted | null | 02-18-2022 00:21:00 | 02-18-2022 00:21:00 | |
transformers | 15,706 | closed | Fix auto model tests | Fixes some tests with auto models: the h5py version check should not be a requirement and the error should be reflected by our tools failing our tests rather than a test of the absolute version. | 02-17-2022 22:32:06 | 02-17-2022 22:32:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,705 | closed | AutoModelForSequenceClassification not learning if model is initialized inside a function scope | Hey,
I've made a simple function to initialize a bert model for sequence classification, an optimizer and a scheduler, but my model wasn't learning anything. After a lot of debugging, I found out that if I execute the same piece of code OUT of the function scope, the model learns.
```
def initialize_model(pretrained_bert_name, device, num_labels, epochs, train_dataloader):
"""Initialize the Bert Classifier, the optimizer and the learning rate scheduler.
"""
# Instantiate Bert Classifier
model = AutoModelForSequenceClassification.from_pretrained(
pretrained_bert_name,
num_labels=num_labels)
# Sends model to GPU if enabled
model.to(device)
# Create the optimizer
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
# Total number of training steps
total_steps = len(train_dataloader) * epochs
# Set up the learning rate scheduler
scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=10000,
num_training_steps=total_steps
)
return model, optimizer, scheduler
```
I'm thinking that maybe sending the model to device, or maybe passing its weights to the optimizer may create a different copy when the function returns, idk. Thoughts?
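One quick way to rule out the "different copy" hypothesis is to check object identity between the model's parameters and what the optimizer holds (a sketch reusing `initialize_model` from above; the checkpoint and dummy dataloader are placeholders):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cpu")
train_dataloader = DataLoader(TensorDataset(torch.zeros(8, 4)), batch_size=2)

model, optimizer, scheduler = initialize_model(
    "bert-base-uncased", device, num_labels=2, epochs=1, train_dataloader=train_dataloader
)

# the optimizer should reference the exact same Parameter objects as the model,
# so nothing is copied when the function returns
optim_params = [p for group in optimizer.param_groups for p in group["params"]]
print(all(a is b for a, b in zip(model.parameters(), optim_params)))  # expected: True
```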
| 02-17-2022 22:21:08 | 02-17-2022 22:21:08 | Hey @JAugusto97, can you provide a full script or a reproducible Colab notebook? Reading the code I don't see why it shouldn't work. <|||||>It might have been global variables interfering somewhere, when I tried to reproduce in a new notebook it worked 🤔
Thanks anyway! |
transformers | 15,704 | closed | TF text classification examples | # What does this PR do?
This PR updates the text classification examples to use the more modern `to_tf_dataset()` method, as opposed to a custom function. A few minor fixes were added as I navigated related files.
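For context, the `to_tf_dataset()` pattern looks roughly like this (a sketch using SST-2 as a stand-in; the script itself wires this up from its own arguments):
```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
dataset = load_dataset("glue", "sst2", split="train")
dataset = dataset.map(lambda x: tokenizer(x["sentence"], truncation=True), batched=True)

tf_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    shuffle=True,
    batch_size=16,
    collate_fn=DataCollatorWithPadding(tokenizer, return_tensors="tf"),
)
```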
Example of running command: `python run_glue.py --model_name_or_path distilbert-base-cased --task_name mnli --do_train --do_eval --output_dir $HOME/test_model` | 02-17-2022 22:07:55 | 02-17-2022 22:07:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,703 | closed | Fine-Tune DETR on custom dataset (less than 250 labels) | I've been following the wonderful [tutorials](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForSegmentation_on_custom_dataset_end_to_end_approach.ipynb) by @NielsRogge to fine-tune DETR on a custom dataset. However, my predicted mask seems to be always empty after training.
Let's say my dataset has 4 labels (in [coco panoptic format](https://cocodataset.org/#format-data)). Looking at the `model.config.id2label` of the pretrained DETR there are `250` different labels.
What would be the correct way to fine-tune the model to less labels (similar to fine-tuning a ViT classifier to 2 classes)?
Here is what I tried:
```python
num_labels = 4
feature_extractor = DetrFeatureExtractor.from_pretrained(
"facebook/detr-resnet-50-panoptic", size=224, format="coco_panoptic"
)
config = DetrConfig.from_pretrained(
"facebook/detr-resnet-50-panoptic",
num_labels=num_labels
)
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
model.detr.class_labels_classifier = Linear(
in_features=model.config.hidden_size,
out_features=num_labels + 1, # +1 for "no object" class
)
# remove classifier weights and bias for pretrained model, since it has more classes than we want
state_dict = model.state_dict()
state_dict.pop("detr.class_labels_classifier.weight")
state_dict.pop("detr.class_labels_classifier.bias")
model.load_state_dict(state_dict, strict=False)
model.config = config
```
Even though my loss goes down, it appears that the model doesn't learn. The post-processed predictions using `feature_extractor.post_process_panoptic()` show the following (empty) result:
```
{'png_string': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x01\xdf\x00\x00\x00\xe0\x08\x02\x00\x00\x00\xcd9\xfb\xc3\x00\x00\x01OIDATx\x9c\xed\xc11\x01\x00\x00\x00\xc2\xa0\xf5Om\x0c\x1f\xa0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xbe\x06\xea|\x00\x01\xd1K+\xbf\x00\x00\x00\x00IEND\xaeB`\x82',
'segments_info': []}
```
For `feature_extractor.post_process_segmentation()` I get the following:
```
{'scores': tensor([], grad_fn=<IndexBackward0>),
'labels': tensor([], dtype=torch.int64),
'masks': tensor([], size=(0, 426, 640), dtype=torch.int64)}
```
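A possibly simpler variant for shrinking the classification head, assuming `ignore_mismatched_sizes` is available in the installed `transformers`, would be to let `from_pretrained` re-initialize it:
```python
from transformers import DetrForSegmentation

model = DetrForSegmentation.from_pretrained(
    "facebook/detr-resnet-50-panoptic",
    num_labels=4,
    ignore_mismatched_sizes=True,  # heads whose shapes changed are re-initialized
)
```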
Side Question: Would it make sense to lower the number of queries (from default 100) to something like 10 (from [documentation](https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/detr#transformers.DetrFeatureExtractor): "Note that it’s good to have some slack")? | 02-17-2022 17:01:56 | 02-17-2022 17:01:56 | Hi,
Thanks for your interest in DETR. However, we'd like to keep Github issues on the Transformers repository for bugs/feature requests.
For training-related questions, please use our [forum](https://discuss.huggingface.co/).
Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@jhabr having the same issue, were you able to solve it? |
transformers | 15,702 | closed | Fix DETR model deprecation warnings for int div | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #15086 for the DETR model.
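The warnings referenced in #15086 come from integer tensor division; the usual replacement pattern is sketched below (variable names are illustrative, not the actual DETR code):
```python
import torch

total, stride = torch.tensor(10), torch.tensor(3)

# deprecated style that triggers the warning:
# idx = total // stride
# explicit floor rounding instead:
idx = torch.div(total, stride, rounding_mode="floor")
```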
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-17-2022 16:46:40 | 02-17-2022 16:46:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,701 | closed | add VisionTextDualEncoder and CLIP fine-tuning script | # What does this PR do?
This PR adds a fine-tuning script for `VisionTextDualEncoder` and `CLIP` model.
The script is named `run_clip.py` since clip also refers to the training scheme (contrastive-language-image) and not just the model. But feel free to suggest other names.
| 02-17-2022 15:49:53 | 02-17-2022 15:49:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,700 | closed | confusion about past_key_values in GPT2 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:4.15.0
Models:
- GPT-2: @patrickvonplaten, @LysandreJik
## Information
In fact, there is no error here. I'm not sure whether this is a bug or just a misunderstanding on my side about GPT-2, but I sincerely hope you can read this, because I have asked a lot of people and no one has given me the right answer.
So I want to ask about one of the implementation tricks in GPT-2, namely the design of past_key_values.
I debugged the code, and I believe I understand the idea of this trick: for an auto-regressive model, when generating a new token, we save the keys and values of the previous tokens so that we don't need to recompute them.
But I found something confusing here. At first, past_key_values should be None; when predicting the next token, the model returns a (key, value) tuple for each block (12 blocks in GPT-2). Here is the code from transformers:
```python
query = self.q_attn(hidden_states)
key, value = self.c_attn(encoder_hidden_states).split(self.split_size, dim=2)
attention_mask = encoder_attention_mask
else:
query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2)
query = self._split_heads(query, self.num_heads, self.head_dim)
key = self._split_heads(key, self.num_heads, self.head_dim)
value = self._split_heads(value, self.num_heads, self.head_dim)
if layer_past is not None:
past_key, past_value = layer_past
key = torch.cat((past_key, key), dim=-2)
value = torch.cat((past_value, value), dim=-2)
```
With the new token as `hidden_states`, we generate q, k, v for this new token and simply concatenate them to past_key and past_value. That works fine for the first block, but I suppose it is not right for the second and following blocks: the output of the first block, which is also the input of the second block, should change, because the earlier tokens should also attend to the new token's key and value, so the softmax of the attention weights changes, which changes the output of the first block. Therefore, in the second and following blocks, the cached (key, value) history should not be usable.
I hope I have made my confusion clear; I'm really puzzled by this.
Thank you for your patience in reading all of the above. If nothing is wrong with the code and the problem is in my understanding of GPT-2, I would really appreciate it if you could correct me.
| 02-17-2022 14:54:04 | 02-17-2022 14:54:04 | I know the next token only depends on the output at the last position, but the outputs of the previous tokens are (I think) supposed to change; since we just reuse the past ones, that would give a wrong value for the output of the last token.<|||||>Hey @VulnDetector,
I'm having a hard time following here exactly what you mean.
If you think the problem is that the past key values of GPT2's first block are incorrectly re-used by GPT2's second block - this is not the case.
You can easily verify this when looking into the `past_key_values`. There you should notice that you have past key values for **each** transformer block.<|||||>Thank you very much @patrickvonplaten
I'm sorry that I didn't express myself clearly and gave you a hard time, but I do understand what you told me. I debugged the code, and I know that if there are 12 blocks in GPT-2, the model's first return contains a (key, value) tuple for each of the 12 blocks.
OK, I will use the code below to explain more specifically:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past_key_values = None
for i in range(30):
output = model(context, past_key_values=past_key_values)
past_key_values = output.past_key_values
token = torch.argmax(output.logits[..., -1, :])
context = token.unsqueeze(0)
generated += [token.tolist()]
sequence = tokenizer.decode(generated)
sequence = sequence.split(".")[:-1]
print(sequence)
```
At first, since `"The Manhattan bridge"` is the beginning of the sentence, my past_key_values should be None. When I run the model for the first time (in the first loop iteration), I get past_key_values containing the (key, value) pairs of the 12 blocks, which I guess is what you meant above. Then I take the next word the model predicts, pass it as the context on its own, together with the returned (key, value) tuple as past_key_values, and run the model a second time.
This time, in the first block, I suppose (maybe wrongly) that the first three tokens should use their queries (obtained from `past_key_values`) to attend to the key of the newly generated token, which is different from last time, because last time we only had 3 tokens and now we have 4. So the output of the first three tokens in the first block has changed, and therefore the input of the second block changes. If we did not use `past_key_values` in the second block, we would have to recalculate q, k, v for the first three tokens from the changed first-block output, and that would definitely not equal the `(key, value)` of the second block cached from the first iteration.
I hope you can forgive my ambiguous wording, and I hope you can understand what I mean this time.<|||||>I realize there is something I overlooked.
I thought GPT-2 attends to tokens at any position. I overlooked that each token only attends to the tokens on its left.
I overlooked that there is still this code:
```python
attn_weights = torch.where(causal_mask, attn_weights, self.masked_bias.to(attn_weights.dtype))
```
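With the causal mask in place, the cached keys and values stay valid. A quick sanity check of this (assuming the `gpt2` checkpoint can be downloaded):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("The Manhattan bridge", return_tensors="pt").input_ids

with torch.no_grad():
    full = model(ids).logits[:, -1, :]                         # full forward over all tokens
    past = model(ids[:, :-1]).past_key_values                  # cache the prefix
    cached = model(ids[:, -1:], past_key_values=past).logits[:, -1, :]

print(torch.allclose(full, cached, atol=1e-4))  # expected: True
```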
Thanks,anyway. @patrickvonplaten |
transformers | 15,699 | closed | fix bug in PT speech-encoder-decoder | # What does this PR do?
Currently in the PT speech-encoder-decoder model, if:
- `encoder_outputs is None` and
- `inputs is not None`
then the encoder is _not_ run in the forward pass, and the `encoder_outputs` remain set to `None`. This throws a `TypeError` when taking the `encoder_hidden_states` - this variable is defined by indexing `encoder_outputs`, which is a `NoneType`.
This PR amends the model to handle this case by running the encoder in a forward pass to yield the `encoder_outputs`. The `encoder_outputs` can then be indexed to give the `encoder_hidden_states`.
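A minimal sketch of the situation this fix addresses (the checkpoint choices and token ids below are arbitrary placeholders):
```python
import torch
from transformers import SpeechEncoderDecoderModel

model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/wav2vec2-base-960h", "bert-base-uncased"
)
# needed before a loss can be computed from labels
model.config.decoder_start_token_id = 101  # [CLS] id of bert-base-uncased
model.config.pad_token_id = 0

audio = torch.randn(1, 16_000)                               # one second of fake audio
labels = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])  # "[CLS] this is a test [SEP]"

# passing raw inputs without precomputed encoder_outputs now runs the encoder
# instead of indexing a NoneType
outputs = model(inputs=audio, labels=labels)
print(outputs.loss)
```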
| 02-17-2022 14:43:45 | 02-17-2022 14:43:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Think you need to run `make style` once before being able to merge this PR |
transformers | 15,698 | closed | Image classification example fails | Cats and dogs example fails.
transformers.__version__
'4.16.2'
datasets.__version__
'1.18.3'
```
python run_image_classification.py \
--dataset_name cats_vs_dogs \
--output_dir ./cats_vs_dogs_outputs/ \
--remove_unused_columns False \
--do_train \
--do_eval \
--push_to_hub \
--push_to_hub_model_id vit-base-cats-vs-dogs \
--learning_rate 2e-4 \
--num_train_epochs 5 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--logging_strategy steps \
--logging_steps 10 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
--seed 1337
Downloading: 3.40kB [00:00, 842kB/s]
Downloading: 2.04kB [00:00, 1.74MB/s]
02/17/2022 15:38:33 - WARNING - datasets.builder - Using custom data configuration default
Downloading and preparing dataset cats_vs_dogs/default (download: 786.68 MiB, generated: 7.16 MiB, post-processed: Unknown size, total: 793.84 MiB) to /Users/juliensimon/.cache/huggingface/datasets/cats_vs_dogs/default/0.0.0/de304955b4952383ceadee4ab96ba2b291f67986bb213efb2255963db39d9ed8...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 825M/825M [00:41<00:00, 19.8MB/s]
Traceback (most recent call last):
File "/Users/juliensimon/Repos/transformers/examples/pytorch/image-classification/run_image_classification.py", line 352, in <module>
main()
File "/Users/juliensimon/Repos/transformers/examples/pytorch/image-classification/run_image_classification.py", line 204, in main
ds = load_dataset(
File "/usr/local/anaconda3/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/anaconda3/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/usr/local/anaconda3/lib/python3.9/site-packages/datasets/builder.py", line 695, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/usr/local/anaconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=7503250, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=7871070, num_examples=23410, dataset_name='cats_vs_dogs')}]
```
| 02-17-2022 14:42:31 | 02-17-2022 14:42:31 | Hi,
I wasn't able to reproduce your issue. It runs fine for me in Colab. Final log:
```
{'eval_loss': 0.08317019045352936, 'eval_accuracy': 0.9774436090225563, 'eval_runtime': 1.9557, 'eval_samples_per_second': 68.008, 'eval_steps_per_second': 8.693, 'epoch': 5.0}
100% 650/650 [02:58<00:00, 4.25it/s]
[INFO|trainer.py:2137] 2022-02-17 17:56:40,728 >> Saving model checkpoint to ./beans_outputs/checkpoint-650
[INFO|configuration_utils.py:439] 2022-02-17 17:56:40,729 >> Configuration saved in ./beans_outputs/checkpoint-650/config.json
[INFO|modeling_utils.py:1084] 2022-02-17 17:56:41,509 >> Model weights saved in ./beans_outputs/checkpoint-650/pytorch_model.bin
[INFO|feature_extraction_utils.py:352] 2022-02-17 17:56:41,510 >> Feature extractor saved in ./beans_outputs/checkpoint-650/preprocessor_config.json
[INFO|trainer.py:2215] 2022-02-17 17:56:44,280 >> Deleting older checkpoint [beans_outputs/checkpoint-260] due to args.save_total_limit
[INFO|trainer.py:1506] 2022-02-17 17:56:44,315 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
```
Notebook I used is [here](https://colab.research.google.com/drive/1Z2Mv9TbbddydwPblQT-sq4lzFncpmGYU?usp=sharing).<|||||>Update: I see you're using the cats and dogs dataset instead of beans. I'll rerun.
Update v2: This is not a Transformers issue, but a Datasets issue. A minimal reproducer is the following:
```
from datasets import load_dataset
dataset = load_dataset("cats_vs_dogs")
```<|||||>I think this was resolved in the `datasets` upstream!<|||||>Indeed, therefore closing this PR. For now, you have to install datasets from master to have it.<|||||>OK, thank you. |
transformers | 15,697 | closed | Fix docs for decoder_input_ids in BART. | Fix docs for `decoder_input_ids` in BART.
Fix #15691 | 02-17-2022 14:03:28 | 02-17-2022 14:03:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15697). All of your documentation changes will be reflected on that endpoint.<|||||>Done.
Please review again.<|||||>Thanks! However, as you can see, the CI needs all checks passing before we can merge this PR.
For this to pass, you need to run `make fixup` locally and make sure all quality and style checks pass. For more information, please read [this guide](https://huggingface.co/docs/transformers/contributing).
<img width="943" alt="Screenshot 2022-02-18 at 11 28 45" src="https://user-images.githubusercontent.com/48327001/154672136-9cb64b92-d785-4f7e-8b2e-ee9ca8060ebe.png">
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,696 | closed | Fix shape | This PR fixes the docstring for BART models, and those which have copies from `BartEncoder` and `BartDecoder`.
This is needed for #13269.
@patil-suraj @sgugger | 02-17-2022 11:17:07 | 02-17-2022 11:17:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot! |
transformers | 15,695 | closed | IndexError while applying stride for ASR | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.2
- Platform: colab
- Python version: 3.7.12
## Information
Hi :)
`new_stride` from `rescale_stride()` can cause an error at `apply_stride()` when `token_n` and `left` have the same value. I got an error while running ASR through `pipeline() + chunk_length_s=10` with 15 secs of audio.
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/pipelines/automatic_speech_recognition.py#L69-L90
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/pipelines/automatic_speech_recognition.py#L92-L105
## To reproduce
Steps to reproduce the behavior:
```python
import numpy as np
from transformers import pipeline
mock_duration = 15
mock_sample_rate = 16_000
mock_audio = np.random.rand(mock_duration*mock_sample_rate,)
pipe = pipeline(model="facebook/wav2vec2-base-960h")
output = pipe(mock_audio, chunk_length_s=10)
```
```python
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[<ipython-input-9-6be82957f842>](https://localhost:8080/#) in <module>()
9 pipe = pipeline(model="facebook/wav2vec2-base-960h")
10
---> 11 output = pipe(mock_audio, chunk_length_s=10)
5 frames
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in apply_stride(tokens, stride)
98 # next letter, and last letter
99
--> 100 first_letter = tokens[i, left_token]
101 tokens[i, :left_token] = first_letter
102
IndexError: index 83 is out of bounds for dimension 1 with size 83
``` | 02-17-2022 10:54:33 | 02-17-2022 10:54:33 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,694 | closed | Tokenizer offset_mapping problem | ## Task info
Hi,
I'm trying to use the tokenizer model to create sliding windows for my `ner` task.
I have seen that the tokenizers can do this, setting the `return_overflowing_tokens=True` parameter when i call them to tokenize text. I also need the token spans, that can be taken from `BatchEncoding` tokenizer ouput, setting `return_offsets_mapping=True` parameter in the tokenizer call.
## Environment info
I am running my script on Google Colab
```
- `transformers` version: 4.16.2
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
## Information
I am using these three italian tokenizers:
1. `idb-ita/gilberto-uncased-from-camembert`
2. `Musixmatch/umberto-commoncrawl-cased-v1`
3. `Musixmatch/umberto-wikipedia-uncased-v1`
The problem is that, for the first two tokenizers, in the returned token spans `[(start, end),...,(start,end)]`, the start index of the first token of a word (except for the first word of the text) is the index of the preceding whitespace character.
## To reproduce
Steps to reproduce the behavior:
1. `idb-ita/gilberto-uncased-from-camembert`
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("idb-ita/gilberto-uncased-from-camembert")
text = "L'Italia, ufficialmente Repubblica Italiana, è uno Stato situato nell'Europa centro-meridionale, il cui territorio coincide in gran parte con l'omonima regione geografica." #(source: https://it.wikipedia.org/wiki/Italia)
tokenized_text = tokenizer(text.lower(), stride=10, max_length=20, truncation=True, return_overflowing_tokens=True, return_offsets_mapping=True)
print(tokenized_text["offset_mapping"][0])
print(tokenizer.convert_ids_to_tokens(tokenized_text["input_ids"][0]))
```
Output
```
[(0, 0), (0, 1), (1, 2), (2, 8), (8, 9), (9, 23), (23, 34), (34, 43), (43, 44), (44, 46), (46, 50), (50, 56), (56, 64), (64, 69), (69, 70), (70, 76), (76, 83), (83, 84), (84, 88), (0, 0)]
['<s>', '▁l', "'", 'italia', ',', '▁ufficialmente', '▁repubblica', '▁italiana', ',', '▁è', '▁uno', '▁stato', '▁situato', '▁nell', "'", 'europa', '▁centro', '-', 'meri', '</s>']
```
2. `Musixmatch/umberto-commoncrawl-cased-v1`
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
text = "L'Italia, ufficialmente Repubblica Italiana, è uno Stato situato nell'Europa centro-meridionale, il cui territorio coincide in gran parte con l'omonima regione geografica." #(source: https://it.wikipedia.org/wiki/Italia)
tokenized_text = tokenizer(text, stride=10, max_length=20, truncation=True, return_overflowing_tokens=True, return_offsets_mapping=True)
print(tokenized_text["offset_mapping"][0])
print(tokenizer.convert_ids_to_tokens(tokenized_text["input_ids"][0]))
```
Output
```
[(0, 0), (0, 1), (1, 2), (2, 8), (8, 9), (9, 23), (23, 34), (34, 43), (43, 44), (44, 46), (46, 50), (50, 56), (56, 64), (64, 69), (69, 70), (70, 76), (76, 83), (83, 84), (84, 88), (0, 0)]
['<s>', '▁L', "'", 'Italia', ',', '▁ufficialmente', '▁Repubblica', '▁Italiana', ',', '▁è', '▁uno', '▁Stato', '▁situato', '▁nell', "'", 'Europa', '▁centro', '-', 'meri', '</s>']
```
If we take the sixth token "_ufficialmente", in both cases, its corresponding span is (9,23) and not (10,23).
## Expected behavior
The start span index of the first token of a word should be the index of the first character of the token, not the index of the whitespace character.
The third tokenizer, `Musixmatch/umberto-wikipedia-uncased-v1`, does this correctly.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1")
text = "L'Italia, ufficialmente Repubblica Italiana, è uno Stato situato nell'Europa centro-meridionale, il cui territorio coincide in gran parte con l'omonima regione geografica." #(source: https://it.wikipedia.org/wiki/Italia)
tokenized_text = tokenizer(text.lower(), stride=10, max_length=20, truncation=True, return_overflowing_tokens=True, return_offsets_mapping=True)
print(tokenized_text["offset_mapping"][0])
print(tokenizer.convert_ids_to_tokens(tokenized_text["input_ids"][0]))
```
Output
```
[(0, 0), (0, 1), (1, 2), (2, 3), (3, 8), (8, 9), (10, 23), (24, 34), (35, 43), (43, 44), (45, 46), (47, 50), (51, 56), (57, 64), (65, 69), (69, 70), (70, 74), (74, 76), (77, 83), (0, 0)]
['<s>', '▁l', "'", 'i', 'talia', ',', '▁ufficialmente', '▁repubblica', '▁italiana', ',', '▁è', '▁uno', '▁stato', '▁situato', '▁nell', "'", 'euro', 'pa', '▁centro', '</s>']
```
If we take the sixth token "_ufficialmente", its corresponding span is (10,23). | 02-17-2022 10:24:43 | 02-17-2022 10:24:43 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
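In the meantime, a simple workaround, assuming the original `text` string and the `tokenized_text` output from the snippets above are available, is to bump a span's start index past a leading whitespace character:
```python
offsets = tokenized_text["offset_mapping"][0]
adjusted = [
    (start + 1, end) if end > start and text[start].isspace() else (start, end)
    for start, end in offsets
]
print(adjusted)
```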
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,693 | closed | Incorrect information in "Getting started" regarding API tokens | https://api-inference.huggingface.co/docs/curl/html/quicktour.html#get-your-api-token states
You should see a token api_XXXXXXXX or api_org_XXXXXXX.
But the keys start with hf_
Also https://huggingface.co/docs/hub/security#user-access-tokens describes the read and write tokens without mention of API tokens | 02-17-2022 10:13:01 | 02-17-2022 10:13:01 | The `hf_` tokens will work.
Will update the doc, which is out of date (well, the old `api_` / `api_org` tokens still work, we just don't generate new ones anymore).<|||||>Updated! Closing this issue, feel free to reopen. |
transformers | 15,692 | closed | Typo in https://api-inference.huggingface.co/docs/curl/html/detailed_parameters.html#summarization-task | It says "0 mens top_k=1, 100.0 is getting closer to uniform probability."
mens should be means | 02-17-2022 10:07:57 | 02-17-2022 10:07:57 | Thank you for noticing the typo, @ToonTalk! `api-inference` is an internal repository, and I've submitted the fix.<|||||>Thank you for reporting this ! @ToonTalk <|||||>(Fixed in the docs) |
transformers | 15,691 | closed | About `decoder_input_ids` in BART doc | > For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/bart#transformers.BartForConditionalGeneration.forward.decoder_input_ids
This is inaccurate. If `labels` is provided, the model will create this tensor by shifting the `labels`:
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/models/bart/modeling_bart.py#L1320-L1324 | 02-17-2022 09:34:34 | 02-17-2022 09:34:34 | Indeed, similar to #11357 and duplicate of #14328.
Can you open a PR to fix this? Otherwise I'll do it.<|||||>I have submit a PR to fix it. Can you review it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,690 | closed | Adding a model, more doc for pushing to the hub | # What does this PR do?
This PR adds a usage example for pushing to the hub, a useful link to our hub page and a couple of comments.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| 02-17-2022 07:49:56 | 02-17-2022 07:49:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,689 | closed | VisionEncoderDecoder Error during training | Thank you for VisionEncoderModel. It is really nice!!
I am currently trying to develop an Indonesian VisionEncoderDecoderModel. However, during the validation step, I am getting the following error. I am wondering whether this is expected behavior or a bug in the code.
Thank you very much in advance.
```python
# transformers.__version__ == '4.15.0'
from transformers import (
    ViTFeatureExtractor,
    RobertaTokenizer,
    TrOCRProcessor,
    VisionEncoderDecoderModel,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
    default_data_collator,
)

fename = 'microsoft/beit-base-patch16-224'
tzname = 'cahya/roberta-base-indonesian-1.5G'

feature_extractor = ViTFeatureExtractor.from_pretrained(fename)
tokenizer = RobertaTokenizer.from_pretrained(tzname)
processor = TrOCRProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)
processor.save_pretrained(out_processor_dir)  # out_processor_dir defined elsewhere

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(fename, tzname)

training_args = Seq2SeqTrainingArguments(
    predict_with_generate=True,
    num_train_epochs=1,
    learning_rate=0.00002,
    evaluation_strategy="steps",
    lr_scheduler_type="constant",
    gradient_accumulation_steps=4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    fp16=True,
    output_dir=out_model_dir,  # out_model_dir defined elsewhere
    logging_steps=500,
    save_steps=2000,
    eval_steps=1,
    dataloader_num_workers=16,
)

trainer = Seq2SeqTrainer(
    model=model,
    tokenizer=processor.feature_extractor,
    args=training_args,
    compute_metrics=compute_metrics,  # defined elsewhere
    train_dataset=train_dataset,      # defined elsewhere
    eval_dataset=eval_dataset,        # defined elsewhere
    data_collator=default_data_collator,
)
trainer.train()
```
**Error message I got:**
```
Traceback (most recent call last):
  File "src/finetune_ot_syndata.py", line 135, in <module>
    trainer.train()
  File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1399, in train
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
  File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1521, in _maybe_log_save_evaluate
    metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
  File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 70, in evaluate
    return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
  File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2158, in evaluate
    output = eval_loop(
  File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2332, in evaluation_loop
    loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
  File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 167, in prediction_step
    generated_tokens = self.model.generate(
  File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py", line 1024, in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
  File "/opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py", line 486, in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(*encoder_args, **encoder_kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'attention_mask'
```
0%| | 2/132200 [00:11<203:26:04, 5.54s/it] | 02-17-2022 05:10:15 | 02-17-2022 05:10:15 | Thanks for reporting, this has been fixed. See https://github.com/huggingface/transformers/issues/15648#issuecomment-1039452797.<|||||>Thank you very much. I am so lucky that it is already fixed. <|||||>@syoon9 hey can i get your contact to talk about this? |
transformers | 15,688 | closed | Minor fix on README.md | # What does this PR do?
Minor fix on README.md, including
- Think we want to provide links to arXiv paper info. pages, not the direct PDF file links (?).
- Paper title for `RoBERTa`
- Remove ending `)` in `ViTMAE` and `ViLT`.
@LysandreJik @NielsRogge @sgugger | 02-16-2022 20:40:52 | 02-16-2022 20:40:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,687 | closed | Time stamps for CTC models | # What does this PR do?
Proposal for how to add time stamps to transcribed text. I would very much like to get some feedback from the community on the design before merging this. You can try out this feature directly from master (no need to even use this branch) using the following code:
```python
#!/usr/bin/env python3
from transformers import AutoTokenizer, AutoFeatureExtractor, AutoModelForCTC
from datasets import load_dataset
import datasets
import torch
# import customized tokenizer of PR: https://github.com/huggingface/transformers/pull/15687
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/wav2vec2-base-960h-time-stamps", trust_remote_code=True)
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
ds = load_dataset("common_voice", "en", split="train", streaming=True)
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
ds_iter = iter(ds)
sample = next(ds_iter)
# compare to filename of dataset viewer on https://huggingface.co/datasets/common_voice/viewer/en/train
print("Filename", sample["audio"]["path"])
input_values = feature_extractor(sample["audio"]["array"], return_tensors="pt").input_values
logits = model(input_values).logits
pred_ids = torch.argmax(logits, axis=-1)
outputs = tokenizer.batch_decode(pred_ids, output_time_stamps=True, stride=320, sampling_rate=feature_extractor.sampling_rate)
print("Word time stamps", outputs[0]["word_time_stamps"])
print("Token time stamps", outputs[0]["token_time_stamps"])
```
This example uses the first example of common voice which can be listened to here: https://huggingface.co/datasets/common_voice/viewer/en/train . Feel free to use other examples to see if the time stamps match.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-16-2022 17:41:13 | 02-16-2022 17:41:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm still unsure whether we should output the timestamps of the words or the tokens or both. In my opinion in 99% of the use cases one is interested in getting the time stamps of the words or even sentences, but not necessarily of the tokens. However some languages don't really have words, but are based on characters only. So should we maybe just output both word and token time stamps?
cc @anton-l @Narsil <|||||>You can try out this feature with the following lines of code:
```python
#!/usr/bin/env python3
from transformers import AutoTokenizer, AutoFeatureExtractor, AutoModelForCTC
from datasets import load_dataset
import datasets
import torch
# import customized tokenizer of PR: https://github.com/huggingface/transformers/pull/15687
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/wav2vec2-base-960h-time-stamps", trust_remote_code=True)
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
ds = load_dataset("common_voice", "en", split="train", streaming=True)
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
ds_iter = iter(ds)
sample = next(ds_iter)
# compare to filename of dataset viewer on https://huggingface.co/datasets/common_voice/viewer/en/train
print("Filename", sample["audio"]["path"])
input_values = feature_extractor(sample["audio"]["array"], return_tensors="pt").input_values
logits = model(input_values).logits
pred_ids = torch.argmax(logits, axis=-1)
outputs = tokenizer.batch_decode(pred_ids, output_time_stamps=True, stride=320, sampling_rate=feature_extractor.sampling_rate)
print("Word time stamps", outputs[0]["word_time_stamps"])
print("Token time stamps", outputs[0]["token_time_stamps"])
```
without even having to use this branch (thanks to @sgugger new "remote_code" feature).<|||||>@Nithin-Holla - I'd be very interested in hearing your feedback here as well in case this feature would be of interest to you<|||||>I've experimented a bit with the code @patrickvonplaten provided. I think this way of outputting both word and token-level timestamps is concise, readable and complete. I think outputting both is preferable to be able to be used for many languages, as you mentioned. But even token-level timestamps can be useful for many applications in case you want more fine-grained control for let's say karaoke or something. But yes, more often than not people will just use the word-level timestamps. In terms of how it is integrated in the huggingface architecture, to me it seems easy to understand/find, but I'm not an expert.
For my application the way it is implemented would require minimal changes, so I'd be very happy merging this. <|||||>This is awesome! We've been playing around with the idea as well. We tested several approaches, like adapting an official [`torchaudio` notebook to 🤗](https://colab.research.google.com/github/pytorch/audio/blob/gh-pages/main/_downloads/160356f33d521341c47ec6b1406a3c2e/forced_alignment_tutorial.ipynb), or the [`ctc-segmentation` library](https://github.com/lumaku/ctc-segmentation). In the end, a simple community script in https://github.com/huggingface/transformers/issues/11307#issuecomment-867648870 provided us with both segment and word level timestamps. This is a great and timely addition!
Maybe it's not a bad idea let the user decide the granularity of the annotation since there are many use-cases.<|||||>> Regarding char vs word time stamps: The traditional forced alignment approaches usually require a second (external) tokenizer to split the predictions into words/lines/sentences and output the time spans for those specific units. How do you feel about including that option here, to support languages without explicit word boundaries?
>
> Alternatively, we could allow the user to supply the tokens directly to `_retrieve_tokens_with_time_stamps()` and support both external tokenization and forced alignment with ground-truth transcriptions!
I don't fully understand what you mean here. However leaving the possibility for the user to provide the tokens sounds like a sensible idea<|||||>@patrickvonplaten Thanks for working on this! I tried the example code and the output matches the requirements for transcription and subtitling, which is the availability of word-level timestamps. I have a couple of questions:
- Does this also work in the presence of an n-gram language model too?
- What does the `stride` parameter (320 in the example code) refer to here?<|||||>> Regarding char vs word time stamps: The traditional forced alignment approaches usually require a second (external) tokenizer to split the predictions into words/lines/sentences and output the time spans for those specific units. How do you feel about including that option here, to support languages without explicit word boundaries?
This pushes me to think that pure `char` timestamps are better.<|||||>@patrickvonplaten I was planning on creating a PR this weekend on this timestamp issue (because I already did it [here](https://github.com/jonatasgrosman/huggingsound)). But now I noticed that you have already taken the lead in it :)
I followed a similar path you are taking, but I focused only on returning character-based timestamps. The only problem with this approach is that most CTC beam search decoding tools return only the timestamps of words, such as the pyctcdecode.<|||||>@Narsil - merged it now. Do you think we could now somehow leverage this feature for the pipeline as well so that we can chunk long audio inputs and give them time-stamps? <|||||>@Narsil if there is progress on adding the timestamps to the pipeline, is there any place I could watch that? I might also be able to do a pull request.
Also thanks for the great work on the feature! Works like a charm 🤗
<|||||>Also am I correct in saying the current implementation is incompatible with the Wav2VecProccessorWithLM?<|||||>The PR is here: https://github.com/huggingface/transformers/pull/15792
Yes it's pure CTC for now, we need to figure out how to get offsets for CTC+LM in order to get them<|||||>Hi @patrickvonplaten + collaborators, thanks for this feature. I'd like to align the _start_times_ of word/syllables with their corresponding phoneme start times, but I'm surprised to see timestamps don't seem to line up between the outputs of facebook/hubert-large-ls960-ft and facebook/wav2vec2-lv-60-espeak-cv-ft ([notebook](https://colab.research.google.com/drive/1d8pjEuMWRVAphTZAopMAaUPVc7EqFR_h?usp=sharing)).
What am I missing?<|||||>Hey @i-am-neo,
Sorry could you try to post a code snippet showing a minimum reproducible bug. I don't clearly see from your notebook where there is an error, especially since you are using different models (one being a letter CTC, the other being a Phoneme CTC).<|||||>Thanks for your response @patrickvonplaten. Is it not reasonable to expect timestamps to correspond for the same utterance, even if piped through different models?
```
# time(secs) ->
0.42 0.78 0.96 1.16
| | | |
because you are sleeping
0.42 0.84 1.08 1.24
| | | |
bɪkʌz juː ɚ sliːpɪŋ
```<|||||>Hey @i-am-neo,
The will never be exactly the same ;-) That's due to the nature of how CTC works. See this blog post: https://distill.pub/2017/ctc/<|||||>Hi @patrickvonplaten, I can understand that the time _steps_ may not correspond, but converted to _timestamps_, it doesn't seem intuitive that they don't line up, more or less?
If I were to take cut the utterance audio, from the timestamps output yielded by either model, I should end up with the sound that corresponds to its "word," no?<|||||>Hi @i-am-neo, your results seem plausible for me too, 'cause, as @patrickvonplaten said, that timestamp difference is due to the nature of the CTC. During the model's output decoding, the transcription is built based on the tokens (a token can be a letter, phoneme, etc.) with the highest probability for each timestep. Different models can present different confidence outputs on a given timestep for the same audio, causing this discrepancy in the transcriptions timestamps. So the discrepancy you found only shows that the models are different from each other. Now is up to you to find out which one is the most accurate and find a replacement for the other one if you need :)
<|||||>Thanks for your response, @jonatasgrosman. I'm interested in what you find.<|||||>> @patrickvonplaten I was planning on creating a PR this weekend on this timestamp issue (because I already did it [here](https://github.com/jonatasgrosman/huggingsound)). But I was happy now that I noticed that you have already taken the lead in it :)
>
> I followed a similar path that you are taking, but I focused only on returning the character-based timestamps. The only problem with this approach is that most CTC beam search decoding tools return only the timestamps of words, such as the pyctcdecode does.
Hi @jonatasgrosman @patrickvonplaten , I have the same question on using char-level kenlm model in Chinese. Actually I already found it can be solved in `pyctcdecode.decoder.BeamSearchDecoderCTC._decode_logits()`, the core difference is `next_word` and `word_part`(just make `next_word=char` and `word_part=""`). But I temporarily have no a good idea to make it better to process both word-level and char-level in the huggingface architecture.
<|||||>Hey @qinyuenlp,
Could you maybe open a new issue with your question? I don't understand a 100% what the question here is exactly. Are you looking for time-stamps using KenLM models? |
transformers | 15,686 | closed | Fix Funnel configuration doc | # What does this PR do?
The part `standard deviation` should be `upper bound` in
https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/src/transformers/models/funnel/configuration_funnel.py#L79-L80
as it is used for `nn.init.uniform_`:
https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/src/transformers/models/funnel/modeling_funnel.py#L774-L778
@sgugger | 02-16-2022 16:27:06 | 02-16-2022 16:27:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can you run `make style` on your branch to fix the quality issue?<|||||>> Can you run `make style` on your branch to fix the quality issue?
Sure - I forget doing so quite often (will try to be more careful!).<|||||>Thanks a lot! |
transformers | 15,685 | closed | Fix Funnel configuration doc | # What does this PR do?
The part `standard deviation` should be `upper bound` in
https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/src/transformers/models/funnel/configuration_funnel.py#L79-L80
as it is used for `nn.init.uniform_`:
https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/src/transformers/models/funnel/modeling_funnel.py#L774-L778
@sgugger | 02-16-2022 16:07:04 | 02-16-2022 16:07:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry, messed up with another PR. I will open another one |
transformers | 15,684 | closed | Add initializer_std to TFFunnelModelTester with a default value 0.02 | # What does this PR do?
This PR sets `initializer_std=0.02` in `TFFunnelModelTester`, so we can use `1e-5` as the threshold in PT/TF equivalence test.
(so inconsistencies are less likely to go undetected)
TF: @gante @Rocketknight1
## Details
The `test_pt_tf_model_equivalence` test in `test_modeling_tf_common.py` does the following:
- creates PyTorch/TensorFlow models using a config (defined in each TF test script)
- loads the PyTorch model into the TensorFlow model
- loads the TensorFlow model (already changed in the previous step) into the PyTorch model
See
https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/tests/test_modeling_tf_common.py#L359-L366
For `Funnel` model:
- In `test_modeling_tf_funnel.py`, we don't use `initializer_std`.
- In `config_funnel.py`, we have [initializer_std=None](https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/src/transformers/models/funnel/configuration_funnel.py#L124)
- In `modeling_funnel.py`, we have [std = 1.0 if self.config.initializer_std is None](https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/src/transformers/models/funnel/modeling_funnel.py#L780)
Therefore, the Pytorch Funnel model created for the testing uses `std=1.0` to create `FunnelEmbeddings` (and the weights are loaded to TF model). This has the following effect for `TFFunnelForMaskedLM`:
- while all the hidden states have a PT/TF difference in the range `1e-7 ~ 2e-6`, the `logits` will have a larger difference (say `2e-5`) with a higher probability - due to the larger magnitude of embedding weights.
The (extended) PT/TF equivalence test used to search for inconsistencies uses `1e-5` as the threshold. So far, the equivalence tests that failed with this threshold have proven to be due to some PT/TF inconsistency (i.e. not due to randomness), and after some fixes, they all have differences < `1e-5`.
In order to continue to use `1e-5` as the threshold (so inconsistencies are less likely to go undetected), this PR sets `initializer_std=0.02` in `TFFunnelModelTester`.
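For illustration, a minimal sketch of the config difference this relies on (not the actual tester code):
```python
from transformers import FunnelConfig

print(FunnelConfig().initializer_std)                      # None -> PyTorch FunnelEmbeddings fall back to std=1.0
print(FunnelConfig(initializer_std=0.02).initializer_std)  # 0.02 -> much smaller word embedding weights
```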
## Remark
The weight initialization logic for `TFFunnelModel` diverges from the `FunnelModel`, and would be good to fix it in another PR. | 02-16-2022 15:40:37 | 02-16-2022 15:40:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the PR, @ydshieh!
I was looking at the code, and `initializer_std` is only used in the PT side of Funnel (codebase search [here](https://github.com/huggingface/transformers/search?q=initializer_std)). Do you know how setting this variable actually changes the behavior of the test? I can't think of any way it influences it 🤔 <|||||>> Thank you for the PR, @ydshieh!
>
> I was looking at the code, and `initializer_std` is only used in the PT side of Funnel (codebase search [here](https://github.com/huggingface/transformers/search?q=initializer_std)). Do you know how setting this variable actually changes the behavior of the test? I can't think of any way it influences it 🤔
Hi, @gante
In order to see the reason, we need to look this part in `test_pt_tf_model_equivalence` inside `test_modeling_tf_common.py`:
https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/tests/test_modeling_tf_common.py#L359-L366
- The line `pt_model = pt_model_class(config)` will be affected by setting or not `initializer_std`.
- `tf_model = transformers.load_pytorch_model_in_tf2_model( ...` means `tf_model` is affected by `pt_model` above
(And as you already see, the weight initialization logic for `TFFunnelModel` diverges from the `FunnelModel` (at least, `initializer_std` is not used for `TFFunnelModel` now). But this is not the concern of this PR, and can be fixed in another PR.)
<|||||>Oh, I see -- the TF config is passed to PT in that particular test, therefore making use of the variable. Thank you for elaborating :)
In that case, do you know why setting it to `0.02` (and not `1.0`, the default) does the trick? Could it be because TF's default initializer is different?<|||||>> Oh, I see -- the TF config is passed to PT in that particular test, therefore making use of the variable. Thank you for elaborating :)
>
> In that case, do you know why setting it to `0.02` does the trick? Could it be because TF's default initializer is different?
For the test `test_pt_tf_model_equivalence` called by TF model test scripts:
- we use the config prepared in TF test scripts
- the config is used to initialize a PT and a TF model
- then, we load PT model into TF model
So, TF initializer plays no role in this test. It is the PT initializer used to create/init the weights, then loaded to TF model.
Since `FunnelModel` uses `initializer_std`, and its default value in `FunnelConfig` is `None` (so `FunnelModel` will use `1.0` for it, see [here](https://github.com/huggingface/transformers/blob/cdc51ffd27f8f5a3151da161ae2b5dbb410d2803/src/transformers/models/funnel/modeling_funnel.py#L780)), it will generate weights for word embeddings with a larger magnitude.
This PR is **NOT** to fix a bug though: Just to use a smaller `initializer_std` to get `smaller word embedding weights` --> in order to keep low error threshold `1e-5` for the PT/TF equivalence test.
<|||||>More details:
The larger word embeddings will make the difference of `logits` (which uses the shared word embedding, I think) between PT & TF FunnelForMaskedLM larger, more than `1e-5`, and fail the (extended/strict) PT/TF equivalence<|||||>@ydshieh thank you for the clarification <3 Are you okay to merge as it is?<|||||>Yes, you can go ahead :-) Thanks for reviewing. (This one is tricky: took me quite some time to fully understand the cause - I asked myself all your questions during the process too. )<|||||>@ydshieh I've found this: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L2048
I believe it explains the 0.02 :)<|||||>> @ydshieh I've found this: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L2048
>
> I believe it explains the 0.02 :)
Yes, that's the usual value for std in transformers models!
But I **think** the default is 0.02 here comes from the fact that the configurations usually have `initializer_range=0.02`, see
https://github.com/huggingface/transformers/blob/60ba48205e4ec070780e8cb8d461421b77432bad/src/transformers/models/bert/configuration_bert.py#L89
(and since BERT is the 1st model in `transformers`, it is continuously used - unless a particular architecture uses different value in the paper)
--
However, for this PR, I choose `0.02` just because this is the common value in the library.
- any value < 0.02 (say `1e-3`) will make the test pass.
- but we can't set too small a value, otherwise the PT/TF difference (for `FunnelForMaskedLM` here) will definitely be small and defeat the purpose of the test.
|
transformers | 15,683 | closed | [Wav2Vec2ProcessorWithLM] Fix auto processor with lm | # What does this PR do?
Fixes loading of Wav2Vec2ProcessorWithLM from AutoProcessor.
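The kind of call this fixes looks roughly like the following (the model id is only an example):
```python
from transformers import AutoProcessor

# resolves to a Wav2Vec2ProcessorWithLM when the repo ships a KenLM decoder
processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
```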
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-16-2022 15:15:00 | 02-16-2022 15:15:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,682 | closed | Maskformer | # What does this PR do?
This WIP PR adds [MaskFormer](https://arxiv.org/abs/2107.06278) a new model for any segmentation task. This model ranks [on the top 10 of almost every task](https://paperswithcode.com/paper/per-pixel-classification-is-not-all-you-need)
A total of 8 pre-trained checkpoints will be available, which are the checkpoints discussed in the official MaskFormer paper. They are not yet available on the hub but tested locally. The weights are all pairs of combinations between the 4 Swin Transformer variants (tiny, small, base and large) and two datasets ([ade20k-150](https://groups.csail.mit.edu/vision/datasets/ADE20K/) and [coco-panoptic](https://cocodataset.org/#home))
TODOs
- [x] converting script should be self-contained
- currently, the job of downloading the weights and configuration files is outsourced to the end-user
- [x] feature extractor
- all the parameters needed are there, but the class is missing
- the resizing needed is tricky, originally [ResizeShortestEdge](https://detectron2.readthedocs.io/en/latest/modules/data_transforms.html#detectron2.data.transforms.ResizeShortestEdge) from detectron is used
- all the post processing should be inside it
- [x] backbone
- recently we added [Swin Transformer](https://github.com/huggingface/transformers/pull/15085), the backbone should depend on that implementation
- ported inside maskformer with the required changes
- [x] loss
- originally, the loss is computed by passing a list of dictionaries representing targets. However, this approach is inefficient since different targets may have different sizes, making it hard to process the batch in a single go.
- [x] padding
- padding is handled in the forward pass using `NestedTensor`, this is a well know class used in a lot of implementations. This should be handled by the `FeatureExtractor` following [Niels implementation](https://github.com/huggingface/transformers/blob/c15bb3fe19b0b6c69a727812cdd3cd5597014667/src/transformers/models/detr/feature_extraction_detr.py#L633)
- [x] auxiliary loss is not yet implemented
- [x] output_hidden_states
- [x] doc
- [x] tests:
- [x] FeatureExtractor
- [x] MaskFormer
Currently, the model can be used as follows
```python
import torch
from transformers import (
MaskFormerModel,
MaskFormerForInstanceSegmentation,
MaskFormerConfig,
MaskFormerFeatureExtractor,
)
import numpy as np
feature_extractor = MaskFormerFeatureExtractor(do_resize=True)
inputs = feature_extractor(
[np.zeros((3, 400, 1200)), np.zeros((3, 750, 384))],
return_tensors="pt",
pad_and_return_pixel_mask=True,
)
config = MaskFormerConfig()
mask_former = MaskFormerModel(config=config)
out = mask_former(**inputs)
# out contains the hidden states of each submodule
mask_former = MaskFormerForInstanceSegmentation(config=config)
out = mask_former(**inputs)
# out contains the logits
seg = feature_extractor.post_process_segmentation(out)
# get the instance panoptic mask + segments
seg = feature_extractor.post_process_panoptic_segmentation(out)
```
| 02-16-2022 12:51:21 | 02-16-2022 12:51:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15682). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for the comments. Changed all instances of `detr_**` to `decoder_**` in the config + minor fixes and typos. Currently uploading the new weights and testing if everything is correct |
transformers | 15,681 | closed | Fix prepare_for_model error inconsistency | # What does this PR do?
The model default parameters in the `tokenization_utils_base.prepare_for_model` function are fetched after the parameter sanity check. This causes the sanity check to not recognize wrong parameter combinations when using model defaults.
This PR swaps those two.
Fixes #15679
- tokenizers: @n1t0, @LysandreJik
## Code to test:
Prepare:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
sentences = ['Sentence number one.',
'Sentence number two is longer to trigger padding.']
token_ids = tokenizer.batch_encode_plus(
sentences,
add_special_tokens=True,
truncation=False,
padding=False)
token_ids = token_ids['input_ids']
```
Test:
```
try:
tokenizer.prepare_for_model(
token_ids,
add_special_tokens=False,
padding='longest',
truncation=False,
return_token_type_ids=None) # Uses the model default
except Exception as e:
e1 = e
try:
tokenizer.prepare_for_model(
token_ids,
add_special_tokens=False,
padding='longest',
truncation=False,
return_token_type_ids=True) # Explicitly set to true
except Exception as e:
e2 = e
assert type(e1) is type(e2) and e1.args == e2.args
```
The errors raised by the two calls should be the same. | 02-16-2022 12:49:42 | 02-16-2022 12:49:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15681). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,680 | closed | 🔥 Remove build_doc_test github action | # What does this PR do?
The build_dev_documentation already builds the doc on every PR. I don't think that build_doc_test is needed
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-16-2022 12:37:26 | 02-16-2022 12:37:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,679 | closed | Tokenizer prepare_for_model Error inconsistency | ## Environment info
- `transformers` version: 4.16.2
- Platform: Windows 10
- Python version: 3.9.10
[GitBlame](https://github.com/huggingface/transformers/blame/b87c044c79a408c0a1e7f7b046b5b4ce999c2d0e/src/transformers/tokenization_utils_base.py#L2974-L2978):
- @LysandreJik, @thomwolf
## Information
Hello there,
I ran into a problem when using the `PreTrainedTokenizerBase.prepare_for_model` function.
The function doesn't throw the same error message when setting the parameter `return_token_type_ids` to either `None` or `True`.
The value `None` makes the function fetch the default model parameter which is `True` (for the example below).
So I expect the function to throw the same message.
Within the functions [definition](https://github.com/huggingface/transformers/blob/b87c044c79a408c0a1e7f7b046b5b4ce999c2d0e/src/transformers/tokenization_utils_base.py#L2956-L2978) the parameter sanity check is performed before getting the model default parameters.
I propose to swap the fetching of the model defaults with the sanity check.
## To reproduce
Minimal code example:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
sentences = ['Sentence number one.',
'Sentence number two is longer to trigger padding.']
token_ids = tokenizer.batch_encode_plus(
sentences,
add_special_tokens=True,
truncation=False,
padding=False)
token_ids = token_ids['input_ids']
# I do stuff with the IDs here. For e.g. applying a custom truncate function.
```
With `return_token_type_ids` set to `None`.
```
tokenizer.prepare_for_model(
token_ids,
add_special_tokens=False,
padding='longest',
truncation=False,
return_token_type_ids=None) # Use the tokenizer's default
```
```
Traceback (most recent call last):
File "C:\Users\rs\miniconda3\envs\tf\lib\site-packages\IPython\core\interactiveshell.py", line 3457, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-7db0043bfd18>", line 1, in <module>
sia.predict(text_neg)
File "C:\Users\rs\Documents\repos\Python\test\sentiment_analysis\transformers_text_classification.py", line 137, in predict
input_batch = self._tokenizer.prepare_for_model(
File "C:\Users\rs\miniconda3\envs\tf\lib\site-packages\transformers\tokenization_utils_base.py", line 3003, in prepare_for_model
encoded_inputs = self.pad(
File "C:\Users\rs\miniconda3\envs\tf\lib\site-packages\transformers\tokenization_utils_base.py", line 2829, in pad
outputs = self._pad(
File "C:\Users\rs\miniconda3\envs\tf\lib\site-packages\transformers\tokenization_utils_base.py", line 3197, in _pad
encoded_inputs["token_type_ids"] + [self.pad_token_type_id] * difference
TypeError: unsupported operand type(s) for +: 'int' and 'list'
```
## Expected behavior
With `return_token_type_ids` set to `True`.
```
tokenizer.prepare_for_model(
token_ids,
add_special_tokens=False,
padding='longest',
truncation=False,
return_token_type_ids=True) # Explicitly set to true
```
```
Traceback (most recent call last):
File "C:\Users\rs\miniconda3\envs\tf\lib\site-packages\IPython\core\interactiveshell.py", line 3457, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-7db0043bfd18>", line 1, in <module>
sia.predict(text_neg)
File "C:\Users\rs\Documents\repos\Python\test\sentiment_analysis\transformers_text_classification.py", line 137, in predict
input_batch = self._tokenizer.prepare_for_model(
File "C:\Users\rs\miniconda3\envs\tf\lib\site-packages\transformers\tokenization_utils_base.py", line 2937, in prepare_for_model
raise ValueError(
ValueError: Asking to return token_type_ids while setting add_special_tokens to False results in an undefined behavior. Please set add_special_tokens to True or set return_token_type_ids to None.
``` | 02-16-2022 12:11:41 | 02-16-2022 12:11:41 | cc @SaulLu
We should also probably put this method as private/internal with a deprecation cycle as I don't think it's tested outside of the `__call__`/`encode`/`encode_plus` methods<|||||>Thanks for the issue @r-stiller!
Indeed, I don't think we test that this method returns certain errors - we have two tests dedicated to it `test_prepare_for_model` and `test_compare_prepare_for_model`. But I agree that having this method public make things harder to maintain.
@r-stiller , could you share with us why you wanted to use `prepare_for_model` on a list of ids? That would be very useful to know how its currently used. :smile: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@SaulLu sorry for my late reply...
I wanted to apply a custom truncate function which removes tokens from the middle of oversized sentences instead of cutting them on the left/right side.
(1/4 from the beginning and 3/4 from the end of the sentence)
Therefore I obtained the token IDs with `batch_encode_plus` to get their count.
If their count exceeded the given max. sequence length I truncated them.
After that I wanted to use `prepare_for_model` to get my batch ready for the model (padding, attention mask and type Ids).
Or in short:
`batch_encode_plus` -> `custom truncate` -> `prepare_for_model`.
Since this led to the error above I changed my code to the following:
`batch_encode_plus` -> `custom truncate` -> `batch_decode` -> `batch_encode_plus`
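For reference, a hypothetical sketch of the kind of middle-truncation helper described above (not the reporter's actual code):
```python
def truncate_middle(ids, max_len):
    # keep 1/4 of the budget from the start and 3/4 from the end, dropping the middle
    if len(ids) <= max_len:
        return ids
    head = max_len // 4
    tail = max_len - head
    return ids[:head] + ids[-tail:]

print(truncate_middle(list(range(20)), 8))  # [0, 1, 14, 15, 16, 17, 18, 19]
```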
|
transformers | 15,678 | closed | HTML dev docs | # What does this PR do?
This PR generates the *whole* docs inside the CI. So when the docs fail building, the CI is red instead of green.
You can see the generated html by `build_dev_documentation.yml` here: https://github.com/huggingface/doc-build-dev/tree/main/transformers/pr_15678
You can see the generated html by `build_documentation.yml` here: https://github.com/huggingface/doc-build/tree/main/transformers/doc-build-test
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Changes to make
This is a multi-repo PR, for now relying on the `kit` branch of `doc-builder`: https://github.com/huggingface/doc-builder/pull/94. When everything's ready, the `master` branch should be used instead.
Remove `@kit`:
```
pip install git+https://github.com/huggingface/doc-builder@kit -U
```
```
ref: "kit"
```
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-16-2022 10:32:01 | 02-16-2022 10:32:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,677 | closed | one of the variables needed for gradient computation has been modified by an inplace operation | I am using the rbt3 model to run a pretraining job:
```python
self.model = AutoModel.from_pretrained("/workspace/wanglei/data_set/rbt3")
self.model.train()
nodes = self.model(**rnn_nodes_input_gpu).pooler_output
```
but I got the following error:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [287, 768]], which is output 0 of TanhBackward, is at version 2; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
| 02-16-2022 09:49:21 | 02-16-2022 09:49:21 | Hi, could you please give a few more details, like which model you are using (bert, roberta, etc.) and a code snippet so we can reproduce this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,676 | closed | Gelu10 | Introduce `GeLU10(x)` activation which behaves exactly like a raw GeLU but keeps output values within [-10, 10] range.
This is especially useful when quantizing neural networks that use GeLU activations, because it allows mapping two negative values within the GeLU spectrum.
Please see https://arxiv.org/abs/2004.09602 (Appendices D "Novel activation functions"):
> GELU has an output range of [−0.1700, ∞]. This
poses a challenge for uniform quantization as it should represent both small negative values and large positive values [...] However, if we restrict the range to [-10,10] then two negative values can be represented.
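A minimal sketch of such a clipped activation (for illustration only; not necessarily the exact implementation added in this PR):
```python
import torch
import torch.nn.functional as F

def gelu_10(x: torch.Tensor) -> torch.Tensor:
    # identical to GELU inside [-10, 10], clipped outside that range
    return torch.clamp(F.gelu(x), min=-10.0, max=10.0)

print(gelu_10(torch.tensor([-20.0, -1.0, 0.0, 5.0, 50.0])))
```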

| 02-16-2022 09:36:19 | 02-16-2022 09:36:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can this be used right now with some existing pretrained models?<|||||>@LysandreJik, yes it can be used anywhere there is a `gelu` call.
The purpose of this operator is mainly to be used when doing the calibration phase for quantization, to avoid having to map the whole GeLU's range.
Other than this, not so much interest.
At first, I was looking at putting this into optimum, but `transformers` has all the logic to handle this quite easily, wdyt?<|||||>It will need to be rebased/adapted to include the class the new activation function.
Plus the docstrings should be in our format ;-)<|||||>Sure, will go through it over the weekend 👌🏻 <|||||>@sgugger If you can validate regarding the doc? 🙏🏻 |
transformers | 15,675 | closed | How can I use "accelerate launch" command to run training job on Multi-GPU? | How can I use "accelerate launch" command to train "run_wav2vec2_pretraining_no_trainer.py" on Multi-GPU?
I found that when I use "accelerate launch run_wav2vec2_pretraining_no_trainer ...", it would run training on a single GPU.
I also tried "accelerate launch --num_machines 4" command , but still not work
@patrickvonplaten, @anton-l | 02-16-2022 09:30:53 | 02-16-2022 09:30:53 | Before starting the training, I recommend running `accelerate config`, which helps you set up multi-GPU training. When running the command you are asked how many GPUs you want to use, and your answer is then stored in a config that will automatically be used by `accelerate launch`. Could you give this a try?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,674 | closed | model.generate() using a user specified keyword argument | I made a simple Seq2Seq model based on BartForConditionalGeneration.
My model just takes an additional keyword argument in its forward() method.

Everything is the same as the original forward() method (which is from BartForConditionalGeneration), but "responder_emb" is added.
I'm trying to generate a sequence using model.generate()
like this.


but I run into this error (TypeError: forward() got an unexpected keyword argument).
I need to pass "responder_emb" to the model's forward() method but generate() method can't deal with the argument.
How can I solve this error?
| 02-16-2022 07:38:01 | 02-16-2022 07:38:01 | cc @patrickvonplaten<|||||>Hey @JH-lee95,
Please note that this question is not related to the original Transformers code and not a bug so in the future it would be amazing if you could use the forum for such questions instead: https://discuss.huggingface.co/ .
To answer your question, to make your use case work, you'll have to adapt the following function: https://github.com/huggingface/transformers/blob/86119c115496ca773b82f8a1c8cbf2d4e44149fa/src/transformers/models/bart/modeling_bart.py#L1369 in BartForConditionalGeneration as well as making sure that your param is passed correctly through generate to the model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,673 | closed | DebertaForMaskedLM cannot load the parameters in the MLM head | Hello,
I tried to use DeBERTa for MLM and didn't succeed. It seems to be a known issue: the checkpoint provided by microsoft/deberta-base and other similar ones doesn't include the pre-trained weights of the masked LM head.
https://github.com/huggingface/transformers/issues/15216
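A minimal way to reproduce this (sketch; the exact warning text may differ):
```python
from transformers import DebertaForMaskedLM

# logs a warning that the MLM head weights are newly initialized instead of loaded from the checkpoint
model = DebertaForMaskedLM.from_pretrained("microsoft/deberta-base")
```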
Thank you! | 02-16-2022 07:14:46 | 02-16-2022 07:14:46 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,672 | closed | Unable to generate chunks (If length is greater than 512 in bert), we can use to split into chunks | I'm working with the Hugging Face question-answering pipeline; my sentence length is 3535, BERT only takes 512 tokens, so I'm trying to divide the text into chunks and work on those.
In the code, I'm working with a question-answering model from Hugging Face. If the length of the sentence is greater than 512, BERT won't take it and we have to add the extra argument truncation=True, which drops some content from the sentence, which is a drawback. That's why I'm splitting the sentence into chunks and joining them back.
Below is the code
```
from transformers import pipeline
def load_qa_model():
model = pipeline(task='question-answering', model=model, tokenizer=tokenizer)
return model
def generate_chunks(inp_str):
max_chunk = 500
inp_str = inp_str.replace('.', '.<eos>')
inp_str = inp_str.replace('?', '?<eos>')
inp_str = inp_str.replace('!', '!<eos>')
sentences = inp_str.split('<eos>')
current_chunk = 0
chunks = []
for sentence in sentences:
if len(chunks) == current_chunk + 1:
if len(chunks[current_chunk]) + len(sentence.split(' ')) <= max_chunk:
chunks[current_chunk].extend(sentence.split(' '))
else:
current_chunk += 1
chunks.append(sentence.split(' '))
else:
chunks.append(sentence.split(' '))
for chunk_id in range(len(chunks)):
chunks[chunk_id] = ' '.join(chunks[chunk_id])
return chunks
sentence = "" # Consider random sentence where the length is greater than 512
vect = generate_chunks(sentence)
qa = load_qa_model()
question = "Who released this article?"
answers = qa(question=question, context=vect)
print(answers['answer'])
```
Below is the link for the sentence (Article)
https://drive.google.com/file/d/1m8rYuOaFSW7bxqm_nYo_8Ryi9RUCY3Tq/view?usp=sharing
The output is below
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_11012\2006680085.py in <module>
1 qa = load_qa_model()
2 question = "Who released this article?"
----> 3 answers = qa(question=question, context=vect)
4 print(answers['answer'])
c:\users\nithi\miniconda3\lib\site-packages\transformers\pipelines\question_answering.py in __call__(self, *args, **kwargs)
248
249 # Convert inputs to features
--> 250 examples = self._args_parser(*args, **kwargs)
251 if len(examples) == 1:
252 return super().__call__(examples[0], **kwargs)
c:\users\nithi\miniconda3\lib\site-packages\transformers\pipelines\question_answering.py in __call__(self, *args, **kwargs)
80 inputs = [{"question": kwargs["question"], "context": kwargs["context"]}]
81 else:
---> 82 raise ValueError("Arguments can't be understood")
83 else:
84 raise ValueError(f"Unknown arguments {kwargs}")
ValueError: Arguments can't be understood
```
**How to overcome this issue?** | 02-16-2022 06:52:40 | 02-16-2022 06:52:40 | Hey! I recommend taking a look at how it is done in our examples: https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py#L388
Please note that the question answering pipeline does accept longer sequences, see the list of arguments here: https://github.com/huggingface/transformers/blob/f65fe3663a6c62975a9c04654703252644c9a652/src/transformers/pipelines/question_answering.py#L209-L237
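For example, something along these lines (a sketch; the parameter names come from the linked signature and the model id is only illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
long_text = " ".join(["This article was released by the Example Press."] * 200)  # stand-in for a long article
answer = qa(
    question="Who released this article?",
    context=long_text,      # pass the full text as one string, no manual chunking
    doc_stride=128,         # overlap between the windows the pipeline builds internally
    max_seq_len=384,
    max_question_len=64,
)
print(answer["answer"])
```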
Using the argument `doc_stride` in combination with `max_seq_len` and `max_question_len` should help you out.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>do you guys know if BERT handles 512 word tokens or is it characters 512 |
transformers | 15,671 | closed | Fix vit test | Fixes the link to the tiny vit test. | 02-15-2022 21:03:51 | 02-15-2022 21:03:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,670 | closed | Fix model equivalence tests | Fix model equivalence tests on GPU by putting the tensors on the appropriate device. | 02-15-2022 20:51:50 | 02-15-2022 20:51:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,669 | closed | Add register method to AutoProcessor | # What does this PR do?
This PR adds the `register` method to `AutoProcessor`, following the same APIs for configurations, feature extractors, models and tokenizers. | 02-15-2022 19:53:05 | 02-15-2022 19:53:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,668 | closed | Add push_to_hub method to processors | # What does this PR do?
This PR adds the `push_to_hub` API to all processors, following the same patterns as for configurations, feature extractors, models and tokenizers. | 02-15-2022 19:28:09 | 02-15-2022 19:28:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,667 | closed | Add image classification notebook | # What does this PR do?
Adds a link to the image classification notebook. | 02-15-2022 19:27:22 | 02-15-2022 19:27:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,666 | closed | Add Video Vision Transformer | # 🌟 New model addition
## Model description
ViViT is a transformer-based model similar to the Vision Transformer but operating on videos. In their paper, the authors present 4 possible architecture types, of which 2 are considered successful (the unfactorised and factorised encoder models). Paper available at https://arxiv.org/abs/2103.15691.
The original research uses flax, and mentions usage of weights from large pretrained image models (ViT). I currently have an implementation of model 1, analogous to existing ViTModel. I have also written a flax-to-pytorch weights conversion (to use weights from the original repo) and I have written the adaptation of ViT-based weights to ViViT weights (based on the methodology in the paper).
## Open source status
* [ ] the model implementation is available: Original implementation in Scenic (flax) https://github.com/google-research/scenic/tree/main/scenic/projects/vivit
* [ ] the model weights are available: Flax checkpoints and TF SavedModel present here https://github.com/google-research/scenic/blob/main/scenic/projects/vivit/README.md
* [ ] who are the authors: Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid @MostafaDehghani @anuragarnab | 02-15-2022 18:58:35 | 02-15-2022 18:58:35 | Would be super cool!<|||||>@jegork I am looking to use the Flax weights provided in the original ViViT repo in PyTorch too. Can you guide me how to convert the weights?<|||||>@akshatshah21 I still can't finish writing the tests for my pull request to add ViViT, but I have created a repo with my converter for you.
https://github.com/jegork/vivit-weights-converter<|||||>Thanks a lot @jegork !<|||||>@jegork is there any progress on adding vivit to huggingface/transformers? I need this model for my Ph.D. research and it would be super cool if I could use the hugging face version of it :)<|||||>Hello @fcakyon !
You're just in time! I have been really busy finishing my bachelor's and working, but I started working on the pull request two days ago and hope to finish it today or tomorrow. I cannot give you an estimate of when it will actually be merged into the main branch, but in the worst case you would be able to use my fork until it is merged. I will tag you as soon as I have added a PR with a working version of the model (I will also add the model with the weights to the huggingface repo) <|||||>Amazing news @jegork! Will be following your progress :+1: <|||||>@fcakyon
I have added a pull request to add the model, however, I am still in the process of writing tests so it might take some time to get it into the release, but you could already try the model out using my fork.
You can install it by using
```sh
pip install git+https://github.com/jegork/transformers@add_vivit
```
To try it out, use the following code:
```python
from transformers import ViViTFeatureExtractor, ViViTForVideoClassification
feature_extractor = ViViTFeatureExtractor.from_pretrained("jegormeister/vivit-b-16x2-kinetics400")
model = ViViTForVideoClassification.from_pretrained("jegormeister/vivit-b-16x2-kinetics400")
```
Will really appreciate your feedback! |
transformers | 15,665 | closed | Fix dec_attn_mask in TFTransfoXLMainLayer | # What does this PR do?
Fix `dec_attn_mask` computation in `TFTransfoXLMainLayer`. See the section `Code Snippet to show the issue and the effect of the fix` for the details.
(The cause of this difference was more difficult for me to identify.)
**Some remarks**:
- If `self.same_length` is `False`, the PT & (original) TF implementations give the same results.
- In PT code, [self.mem_len](https://github.com/huggingface/transformers/blob/b87c044c79a408c0a1e7f7b046b5b4ce999c2d0e/src/transformers/models/transfo_xl/modeling_transfo_xl.py#L936) is used (when `self.same_length` is `True`). However, in TF code, `self.mem_len` is not used at all (inside `call`). --> First sign of inconsistency.
- For the case `self.mlen == mlen + 1` (and `self.same_length==True`), the PT & (original) TF give the same result.
- However, in general case, [torch.tril(all_ones, -mask_shift_len)](https://github.com/huggingface/transformers/blob/b87c044c79a408c0a1e7f7b046b5b4ce999c2d0e/src/transformers/models/transfo_xl/modeling_transfo_xl.py#L941) part in the PT code is not the same as [mask_l - mask_dia](https://github.com/huggingface/transformers/blob/b87c044c79a408c0a1e7f7b046b5b4ce999c2d0e/src/transformers/models/transfo_xl/modeling_tf_transfo_xl.py#L607) in the TF code.
## results after the fix
This inconsistency between PT / TF produces large differences in the (extended) PT/TF equivalence tests:
- diffs in attention outputs are as high as `0.03`. After this fix: diffs are in the range `1e-8 ~ 1e-7`
- diffs in hidden states are as high as `0.001`. After this fix: in the range `1e-7 ~ 2e-6`
## Code Snippet to show the issue and the effect of the fix
```
import tensorflow as tf
import torch
def pt_attn(self_mlen, mlen, qlen, same_length):
"""TransfoXLModel.forward()'s `dec_attn_mask`
self_mlen: `self.mlen` in the original code
"""
klen = mlen + qlen
if same_length:
all_ones = torch.ones((qlen, klen), dtype=torch.uint8)
mask_len = klen - self_mlen
if mask_len > 0:
mask_shift_len = qlen - mask_len
else:
mask_shift_len = qlen
dec_attn_mask = (torch.triu(all_ones, 1 + mlen) + torch.tril(all_ones, -mask_shift_len)) ###[:, :, None] # -1
else:
dec_attn_mask = torch.triu(torch.ones((qlen, klen), dtype=torch.uint8), diagonal=1 + mlen) ###[:, :, None]
return dec_attn_mask.numpy()
def tf_attn(self_mlen, mlen, qlen, same_length):
"""Current TFTransfoXLMainLayer.call()'s `dec_attn_mask`
self_mlen: `self.mlen` in the original code
"""
attn_mask = tf.ones([qlen, qlen])
mask_u = tf.linalg.band_part(attn_mask, 0, -1)
mask_dia = tf.linalg.band_part(attn_mask, 0, 0)
attn_mask_pad = tf.zeros([qlen, mlen])
dec_attn_mask = tf.concat([attn_mask_pad, mask_u - mask_dia], 1)
if same_length:
mask_l = tf.linalg.band_part(attn_mask, -1, 0)
dec_attn_mask = tf.concat([dec_attn_mask[:, :qlen] + mask_l - mask_dia, dec_attn_mask[:, qlen:]], 1)
return tf.cast(dec_attn_mask, dtype=tf.int32).numpy()
def tf_attn_fixed(self_mlen, mlen, qlen, same_length):
"""Fixed `dec_attn_mask` in TFTransfoXLMainLayer.call()
self_mlen: `self.mlen` in the original code
"""
klen = mlen + qlen
dec_attn_mask = 1 - tf.linalg.band_part(
tf.ones([qlen, klen], dtype=tf.int32), -1, mlen
) # (q, q): diagonal with 1's
if same_length:
mask_len = klen - self_mlen
if mask_len > 0:
mask_shift_len = qlen - mask_len
else:
mask_shift_len = qlen
if mask_shift_len >= 1:
dec_attn_mask += 1 - tf.linalg.band_part(tf.ones([qlen, klen], dtype=tf.int32), mask_shift_len - 1, -1)
else:
dec_attn_mask += tf.linalg.band_part(tf.ones([qlen, klen], dtype=tf.int32), -1, -mask_shift_len)
return dec_attn_mask.numpy()
self_mlen = 4
mlen = 4
qlen = 3
same_length = True # False
pto = pt_attn(self_mlen, mlen, qlen, same_length)
print(pto)
tfo = tf_attn(self_mlen, mlen, qlen, same_length)
print(tfo)
tfo_fixed = tf_attn_fixed(self_mlen, mlen, qlen, same_length)
print(tfo_fixed)
```
This gives
```
# PT
[[1 0 0 0 0 1 1]
[1 1 0 0 0 0 1]
[1 1 1 0 0 0 0]]
# TF (diff. from PT)
[[0 0 0 0 0 1 1]
[1 0 0 0 0 0 1]
[1 1 0 0 0 0 0]]
# Fixed (same as PT)
[[1 0 0 0 0 1 1]
[1 1 0 0 0 0 1]
[1 1 1 0 0 0 0]]
```
I have further randomly generated the arguments for 1000 times, and the fixed TF version gives the same results as the PT version for all of them. | 02-15-2022 18:50:07 | 02-15-2022 18:50:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Could @gante or @Rocketknight1 give this a look?<|||||>I added a few remarks in the PR description. Maybe it will be helpful to get the ideas.<|||||>> I have further randomly generated the arguments for 1000 times, and the fixed TF version gives the same results as the PT version for all of them.
Thanks for the thorough testing <3 Can I merge the PR, @ydshieh ?<|||||>> > I have further randomly generated the arguments for 1000 times, and the fixed TF version gives the same results as the PT version for all of them.
>
> Thanks for the thorough testing <3 Can I merge the PR, @ydshieh ?
Yes, go ahead! Thanks for reviewing. |
transformers | 15,664 | closed | Why are certain models with a higher WER (on the eval set) performing better or as good as models with a lower WER – when tested on the test set? | Hello @patrickvonplaten and @anton-l !
I had two weird observations while evaluating my models, one of which showed that the model with the higher WER clearly outperformed the one with the lower WER (So strange!). In another scenario, when evaluated on the test set, the model with higher WER (on the eval set) fared "almost" as well as the model with low WER. I'm curious as to what the likely causes of these observations are.
I compared two models (Model A against model B) in each case. So, in a nutshell, I’ll be mentioning two models in each case for the sake of comparison.
**Outline of the issue:**
Case 1: Model A and Model B — trained for Guarani language (gn) —- CV8 dataset — model with higher WER on eval set clearly outperformed the model with lower WER, when tested on the test set.
Case 2: Different Model A and Model B — trained for Assamese language (as) —- CV8 dataset —- model with higher WER performed nearly as good as model with low WER, when tested on the test set.
**Detailed Description:**
**Case 1:**
Models A and B were trained for the Guarani (**gn**) – Common Voice 8 dataset.
**It should be noted that the library versions of Datasets and Transformers utilized while training both models were different.** **However, those models were tested in literally the same environment setting ie. the same version of Datasets and Transformers**. And it turns out that when tested on the test set, the model with a higher WER on the eval set outperformed the model with a lower WER.
Case 1: Model A: https://huggingface.co/lgris/wav2vec2-xls-r-300m-gn-cv8
Case 1: Model B: https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1
**Comparison in Tabular form**:

**Case 2**: In this case, different model A and model B were trained for Assamese language (as) – Common Voice 8 dataset.
It's worth noting that **the Datasets version used in both cases was the same, i.e. Datasets 1.18.3. However, the Transformers version was different.** And it turns out that when tested on the test set, the model with a higher WER performed nearly as well as the model with a lower WER.
Case 2: Model A: https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1
Case 2: Model B: https://huggingface.co/infinitejoy/wav2vec2-large-xls-r-300m-assamese-cv8
**Comparison in Tabular form**:

I'm curious as to what could be the underlying causes of such observations. I'd be glad if I get your inputs in this regard.
Currently, I'm a super novice in the ASR field. I literally calculated the WER for the very first time in this sprint itself, so please pardon me if I'm witnessing these observations due to some mistake on my part.
Thanks in advance...
| 02-15-2022 18:48:33 | 02-15-2022 18:48:33 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>Hi @LysandreJik!
I opened this issue since I was asked to. A bug in the Datasets library was reported a few days back, due to which we got a new version now. I was actually referring to that bug when I was comparing the Datasets versions utilised in different cases. This issue was opened in an attempt to detect the underlying causes of the models' anomalous behaviour.
But I'm guessing I didn't elucidate properly. Perhaps, I'd have used the default format to explain the issue, however I thought it's easier this way since there were too many variables.. :/<|||||>@LysandreJik :
Anyway, if you still believe that the aforementioned issue/post is forum- worthy, I have no objection in taking it down and publishing it on forums.
Multi-channel communication is turning chaotic for me as well. And I've a strong intuition that I've almost found the underlying causes of anomalous behaviour of certain models, albeit I'm still refining my points and I'm almost there... I might get my points evaluated on the forum itself! No worries!
Lastly, I apologize if the above issue is too naive/inappropriate for the platform. I've transitioned from the Patent Research and Analytics domain, and haven't utilized GitHub a lot more. However, I hope to do better in future.
Thanks!
<|||||>Ah, sorry, if it was asked by a member of the team then please ignore my message above. Thank you for doing the effort of opening an issue, even if I mentioned the forum would be a better place we definitely appreciate you trying to solve the problem. The issue is neither too naive nor inappropriate, it was just more likely to get an answer on the forum :)<|||||>Hey @drishtishrrma! I've re-evaluated the models over the past week and haven't found any significant model or config-related differences that would explain these weird coincidences :slightly_smiling_face:
However, note that the Guarani and Assamese languages have **very** little data in the Common Voice dataset (40m and 1h of validated data respectively). This means that the validation and testing metrics can be highly uncorrelated (e.g. there are just 293 train, 159 val and 93 test sentences for Guarani). In these low-resource conditions even changing the random seed before training can sometimes drastically change the final model.
Hopefully that makes sense, and thank you for the evaluation report! Let me know if you have any questions :slightly_smiling_face: <|||||>@LysandreJik : Yeah, I got your POV. Thanks for your advice, I appreciate it... <|||||>Hi @anton-l!
Thank you so much for your time and efforts, I appreciate it.
You've made some really excellent points there! Thank you so much!
I almost figured this out yesterday, but after re-reading your reply, I realized my analogy was slightly different in comparison to your ideas. It'd be fantastic if you could give me your thoughts on it, especially if you think it's a possibility. I was envisioning the aforementioned issue as being very analogous to a real-life scenario.
**My Analogy**: Assuming a student is preparing for a competitive exam in which he will be tested on five subjects (analogous to entire training dataset): General Science, Social Science, Maths, English, and Computer Science **(syllabus of all the subjects collectively is analogous to entire training dataset)**. Further assuming that the candidate excelled at Maths, General Science, and Social Science but struggled to learn/perform well/generalize in English and Computer Science (**struggled to generalize for a portion of entire training dataset**). **His overall score was good (let's compare this to WER now) due to high contribution of marks obtained in the subjects he learnt/generalized well during studying (training).** However, his overall good score (analogous to WER) doesn't tell a true story--- his overall good score (WER) conceals how poorly he performed in the remaining two subjects. **(analogous to a portion of the entire training dataset)** ---> And what if he's put into an environment where he's to deal with English and Computer Science only (his weak-links), this environment exposes his weak-links outrightly. Regardless of how good his overall score was previously, the WER will turn out to be poor if tested in an environment where the model (person) has to deal with its/his weak-links alone.
And this behaviour is more pronounced in a low-resource scenario since the model is less robust in such scenario because it was trained on very little data, and additionally it didn't even generalize well for a certain type of distribution during the training process.
Although the two models were trained on the same data, but due to different batch_sizes perhaps one generalized better!? They say, " small batch_size (s) achieve better generalization, perhaps it has something to do with it.
"random_seed" has a major role to play, in general; but since it was 42 in all the cases, the effect became uniformized. The default value is also 42, so it's a little less likely that it was tweaked before training. (At least, I didn't tweak at all; can't speak for the other two models that were trained by others.)
These are some random thoughts, please feel free to correct. I'd be wrong.
Thanks!
<|||||>@anton-l:
The claim that a small batch_size achieves better generalization needs to be tested a little. Recently I came across a couple of tweets touting the benefits of a larger batch_size. I'll be putting it to the test soon.
**To simplify things and avoid making claims I haven't personally tested, I'd like to generalize the statement I stated above as follows:**
Because different batch size (s) were utilized, it's likely that one of the models generalized better for the limited data available.
**In short, let say, if a model generalized well for a portion of the training data which closely matches the environment in which we're gonna be testing, then obviously it'll perform better. On the contrary, it might not perform well if subjected to envio for which it didn't generalize well during the training.**
And it's very likely that if the aforementioned models are exposed to a different kind of environment (distribution), the outcome could be the polar opposite of what we've seen so far. This behaviour/effect is more pronounced in the low-resource scenario, imo.
|
transformers | 15,663 | closed | 🤗 Transformers **Trainer** API raises exception on train if triggered from an already started ML Flow run. | ## Environment info
- `transformers` version: 4.16.2
- Platform: Linux-5.11.0-40-generic-x86_64-with-debian-10.9
- Python version: 3.7.10
- PyTorch version (GPU?): 1.11.0.dev20220112+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: parallel
### Who can help
@sgugger
## Information
The model I am using is bert-base-cased, to replicate the bug while using the 🤗 Transformers **Trainer** API, taken from the official [example](https://huggingface.co/docs/transformers/training#finetuning-in-pytorch-with-the-trainer-api).
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts: Bug arises when I use the 🤗 Transformers **Trainer** API inside an already started ML Flow run.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: GLUE on IMDB Dataset
* [ ] my own task or dataset:
## To reproduce
Steps to reproduce the behavior:
1. Initialise a ML Flow run.
2. Start a Training with 🤗 Transformers **Trainer** API inside the ML Flow run.
3. Causes an exception while the 🤗 Transformers **Trainer** API tries to create another ML Flow run while a ML Flow run is already started.
Exception :
```console
Exception: Run with UUID fad5d86248564973ababb1627466c0cb is already active. To start a new run, first end the current run with mlflow.end_run(). To start a nested run, call start_run with nested=True
```
_Code to replicate Exception:_
```python
from datasets import load_dataset
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from transformers import TrainingArguments
from transformers import Trainer
import mlflow
ML_FLOW_URI = '<put mlflow uri here>'
# # Setup ML Flow Run
mlflow.set_tracking_uri(ML_FLOW_URI)
def get_data():
# init Data, tokenzier, model
raw_datasets = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
# Tokenize data
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
return small_train_dataset, small_eval_dataset
small_train_dataset, small_eval_dataset = get_data()
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
# Init Training
training_args = TrainingArguments("test_trainer")
trainer = Trainer(
model=model,
args=training_args,
train_dataset=small_train_dataset,
eval_dataset=small_eval_dataset
)
with mlflow.start_run(run_name='my_main_run') as root_run:
trainer.train() # This line causes the Exception
```
_Line causing the exception:_
```python
with mlflow.start_run(run_name='my_main_run') as root_run:
trainer.train() # This line causes the Exception
```
_Traceback:_
```console
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/scripts/trainer_bug_replication.py", line 43, in <module>
trainer.train() # This line causes the Exception
File "/usr/local/lib/python3.7/site-packages/transformers/trainer.py", line 1308, in train
self.control = self.callback_handler.on_train_begin(args, self.state, self.control)
File "/usr/local/lib/python3.7/site-packages/transformers/trainer_callback.py", line 348, in on_train_begin
return self.call_event("on_train_begin", args, state, control)
File "/usr/local/lib/python3.7/site-packages/transformers/trainer_callback.py", line 399, in call_event
**kwargs,
File "/usr/local/lib/python3.7/site-packages/transformers/integrations.py", line 742, in on_train_begin
self.setup(args, state, model)
File "/usr/local/lib/python3.7/site-packages/transformers/integrations.py", line 718, in setup
self._ml_flow.start_run(run_name=args.run_name)
File "/usr/local/lib/python3.7/site-packages/mlflow/tracking/fluent.py", line 232, in start_run
).format(_active_run_stack[0].info.run_id)
Exception: Run with UUID cb409c683c154f78bdcd37001894ae7b is already active. To start a new run, first end the current run with mlflow.end_run(). To start a nested run, call start_run with nested=True
```
## Possible solution
When MLflow is set up by default during the initialisation of the MLflowCallback (given mlflow is installed), the setup should check for an already running MLflow run and start a nested run where appropriate. Starting a nested run would avoid hampering the logs of the parent run already started by the author/user.
This can be fixed by replacing LINE 718 in integrations.py
```python
self._ml_flow.start_run(run_name=args.run_name)
```
with
```python
nested = True if self._ml_flow.active_run() is not None else False
self._ml_flow.start_run(run_name=args.run_name, nested=nested)
```
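For reference, this is roughly how nested runs behave in MLflow itself (run names below are illustrative):
```python
import mlflow

with mlflow.start_run(run_name="my_main_run"):                       # run opened by the user
    with mlflow.start_run(run_name="hf_trainer_run", nested=True):   # run the callback could open
        mlflow.log_metric("loss", 0.1)
# Both runs are closed here; the inner run appears nested under the parent in the MLflow UI.
```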
I can raise a PR if needed :)
## Expected behavior
The 🤗 Transformers **Trainer** API should not raise an exception if the trainer is started inside an already running MLflow run started by the user.
Rather, as a user, I would expect the 🤗 Transformers **Trainer** API to log to a nested MLflow run if I have already started a parent run, without interfering with my parent run's logs.
### Similar/Related Issues
https://github.com/huggingface/transformers/issues/11115
| 02-15-2022 18:14:36 | 02-15-2022 18:14:36 | Hi there! We don't maintain integrations with third-party libraries ourselves, so feel free to create a PR with the fix and make sure to tag the contributor who wrote this callback for review (@noise-field ) :-) |
transformers | 15,662 | closed | cannot import name 'CONFIG_MAPPING' from 'transformers' (unknown location) | trying using the Run_MLM.py script but got an error in importing modules
I installed the transformers package from the source
```
!git clone https://github.com/huggingface/transformers.git
%cd transformers
!pip install -e .
!pip install -r examples/pytorch/language-modeling/requirements.txt
```
any solutions ?
thanks in advance
| 02-15-2022 17:27:33 | 02-15-2022 17:27:33 | |
transformers | 15,661 | closed | GPT-2 pretrained model fails to load when TF v2 behaviour is disabled | I am trying to use GPT-2 in a codebase that is written for Tensorflow 1.x. However, I am running the code against TF 2.x installation binaries with `tf.disable_v2_behavior()` flag. Without this `tf.disable_v2_behavior()` flag, GPT-2 pretrained model loads fine, but the model fails to load if the flag is used. Here is my code :
```
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior() #works fine without this line
from transformers import TFGPT2Model
model = TFGPT2Model.from_pretrained('gpt2') #fails
```
Here is the error:
```
>>> TFGPT2Model.from_pretrained('gpt2')
2022-02-15 10:17:08.792655: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home1/07782/marefin/.local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1467, in from_pretrained
model(model.dummy_inputs) # build the network with dummy inputs
File "/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 783, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 695, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/home1/07782/marefin/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_tf_gpt2.py:628 call *
outputs = self.transformer(
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_v1.py:763 __call__ **
self._maybe_build(inputs)
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_v1.py:2084 _maybe_build
self.build(input_shapes)
/home1/07782/marefin/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_tf_gpt2.py:241 build
self.wpe = self.add_weight(
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_v1.py:441 add_weight
variable = self._add_variable_with_custom_getter(
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py:810 _add_variable_with_custom_getter
new_variable = getter(
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_utils.py:127 make_variable
return tf_variables.VariableV1(
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/ops/variables.py:260 __call__
return cls._variable_v1_call(*args, **kwargs)
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/ops/variables.py:206 _variable_v1_call
return previous_getter(
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/ops/variables.py:199 <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/ops/variable_scope.py:2612 default_variable_creator
return resource_variable_ops.ResourceVariable(
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/ops/variables.py:264 __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py:1584 __init__
self._init_from_args(
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py:1722 _init_from_args
initial_value = initial_value()
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/keras/initializers/initializers_v2.py:413 __call__
dtype = _assert_float_dtype(_get_dtype(dtype))
/home1/07782/marefin/.local/lib/python3.8/site-packages/tensorflow/python/keras/initializers/initializers_v2.py:948 _assert_float_dtype
raise ValueError('Expected floating point type, got %s.' % dtype)
ValueError: Expected floating point type, got <dtype: 'int32'>.
```
I am using TF 2.5 with transformers v4.12.5 on CentOS 7. Is there any workaround to make this work with TF v2 behavior disabled? | 02-15-2022 16:49:05 | 02-15-2022 16:49:05 | Hi @rifatarefin 👋 Sadly, we don't support TF 1.x; everything we've built was with TF 2.x in mind. This implies that the snippet you want to run shouldn't be expected to work, and that you have four main alternatives to your problem:
1. Create two local environments, one with TF 1.x and the other with TF 2.x, load GPT-2 in your TF 2.x environment and make it communicate with your TF 1.x environment (which runs your model);
2. Use our [HF API](https://huggingface.co/inference-api) with GPT-2. It's essentially the same as the alternative above, except that you don't have the hassle of setting up the second environment;
3. Fork our code and modify GPT-2 so as to make it TF 1.x compatible. I have no guarantee this would work. If you do make it work, we'd be interested in knowing the solution;
4. Upgrade your project to TF 2.x. But since you are raising this issue, I'd assume it's a very hard task.<|||||>I found `TFGPT2Model` pretrained weights load with `tf.compat.v1.disable_eager_execution()`. The TF 1.x code didn't raise any error with this flag. Although not sure if I will get the desired output this way. Any suggestions @gante ?<|||||>Awesome -- if you didn't get any errors then it possibly works well 👌
To double-check, I'd suggest that you run two things after loading with `tf.compat.v1.disable_eager_execution()`:
1. See if you can reproduce the output of [our integration tests](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_gpt2.py#L430) locally (this would confirm the correctness of the model);
2. Try running inputs of variable length on the same loaded `model` object (TF 1.x is not as friendly with dynamic input lengths).
If these 2 tests run well, I'd say you should be able to pull it off, at least for forward passes 😉 If it fails on the 2nd test but passes the 1st, let me know, I may have a work-around.<|||||>Thank you so much! I will keep you posted @gante <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,660 | closed | [pipeline doc] fix api | Fixes borked API doc.
@sgugger | 02-15-2022 16:46:57 | 02-15-2022 16:46:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>actually, just noticed that the `revision` entry appeared twice! so removed the last one and adjusted the format for the first one. |
transformers | 15,659 | closed | Add section about doc testing | # What does this PR do?
Adds section about how to write doc examples and how to add them to the doctests.
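For context, the doc examples covered by this section are written in doctest style, roughly like the sketch below (the checkpoint and expected output are illustrative):
```python
>>> from transformers import pipeline
>>> classifier = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
>>> classifier("I love this movie!")[0]["label"]
'POSITIVE'
```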
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-15-2022 13:51:24 | 02-15-2022 13:51:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,658 | closed | Add ONNX export for ViT | # What does this PR do?
This PR enables the export of Vision Transformers (ViT) to ONNX with the following features:
* `default`
* `image-classification`
To enable this new modality, I had to significantly refactor the internals of the ONNX exporter because we need a way to pass the feature extractor instead of the tokenizer.
Thanks to a tip from @LysandreJik I replaced the positional `tokenizer` argument in various functions with a new `preprocessor` argument that can be a tokenizer or feature extractor (and possibly a processor in future). This should guarantee backwards compatibility for users who chose to use the Python API instead of the `transformers.onnx` CLI.
## Usage
```python
import requests
import numpy as np
from PIL import Image
from onnxruntime import InferenceSession
from transformers import AutoConfig, AutoFeatureExtractor, AutoModelForImageClassification
# Export ViT checkpoint with image classification head
model_ckpt = "google/vit-base-patch16-224"
!python -m transformers.onnx --model={model_ckpt} --feature=image-classification onnx/
# Download an image of two cute cats - naturally ;-)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Instantiate config and feature extractor
config = AutoConfig.from_pretrained(model_ckpt)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_ckpt)
inputs = feature_extractor(image, return_tensors="np")
# Create ONNX Runtime session
session = InferenceSession("onnx/model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(["logits"], dict(inputs))
predicted_class_idx = np.argmax(outputs[0])
# Returns Predicted class: Egyptian cat
print("Predicted class:", config.id2label[predicted_class_idx])
```
Here are two Colab notebooks comparing the inference gains with ORT vs vanilla PyTorch (~20-30% faster on CPU, ~5% faster on GPU):
* [CPU notebook](https://colab.research.google.com/drive/1QCTNRsctMCdvMRiWmAnbEPHJ9rmkAbAP?usp=sharing)
* [GPU notebook](https://colab.research.google.com/drive/1SZrCqvJzm6z5xjb_01-fnmTk-boGYalA?usp=sharing)
## Todo
- [x] Add deprecation warning if user passes `tokenizer` as keyword argument
- [x] Run an inference test to see if we get any speed-up over vanilla PyTorch (maybe) | 02-15-2022 12:42:25 | 02-15-2022 12:42:25 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15658). All of your documentation changes will be reflected on that endpoint.<|||||>While testing this branch on Colab, I discovered a weird bug when trying to run inference in ONNX Runtime with `torch` v1.10.2:
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_42' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, std::vector<int64_t> &, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,197,768}, requested shape:{2,197,12,64}
```
Curiously, there is no problem running inference with `torch` v1.9, so something seems to have changed in the `torch` ONNX exporter in the latest version. I'm currently investigating what the source of the problem is ...<|||||>I've implemented a workaround for the problems with exporting dynamic axes in `torch` v1.10.x, as well as integrated all your feedback. Apart from that, I think this PR is ready for another round of review :)<|||||>Super happy to see this merged! 🤗 |
transformers | 15,657 | closed | Usage examples for logger | # What does this PR do?
This PR adds a couple of usage examples in the `logging` doc
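For context, the kind of usage the new examples illustrate looks roughly like this (a sketch; the exact snippets in the doc may differ):
```python
from transformers.utils import logging

logging.set_verbosity_info()                  # module-wide verbosity
logger = logging.get_logger("transformers")   # library logger
logger.info("INFO")
logger.warning("WARN")
```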
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| 02-15-2022 10:14:42 | 02-15-2022 10:14:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the comments! |
transformers | 15,656 | closed | Is it fine if we do not pass the optimizer through accelerator.prepare() in DDP? | Hello!
I remember that in previous versions of Transformers, the optimizer was not sent to any accelerator device for training (e.g., in older versions of the `run_glue.py` script). So, I wonder if it is fine if we do not pass the optimizer through accelerator.prepare() in DDP in the latest version of Transformers (which means we only send the model and dataloaders to prepare())? This would help solve an OOM issue in my code.
| 02-15-2022 10:02:03 | 02-15-2022 10:02:03 | cc @sgugger <|||||>It seems weird that not passing it would avoid an OOM error, as it doesn't do anything special apart handling mixed precision (most of the work is for PyTorch XLA, so mostly for TPUs). For regular GPU training, it should be fine not to send the optimizer, except if you use mixed precision.
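Concretely, the two patterns look roughly like this (the tiny model and synthetic data below are just stand-ins):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(4, 2)  # stand-in for the real model
train_dl = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Usual pattern: model, optimizer and dataloader all go through prepare()
model, optimizer, train_dl = accelerator.prepare(model, optimizer, train_dl)

# Variant discussed here (plain fp32 GPU training): leave the optimizer out of prepare()
# model, train_dl = accelerator.prepare(model, train_dl)
```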
Out of curiosity, what is the command giving you an OOM with this script?<|||||>@sgugger: You are right, I still got an OOM error after training for a while.
I am fine-tuning a `bert-base` model repeatedly on different datasets (like in continual learning or self-training) using 4 NVIDIA GeForce GTX 1080 Ti GPUs. The procedure is as follows:
```
for i in range(max_iterations):
finetuning_procedure
```
The `finetuning_procedure` is similar to `run_glue_no_trainer.py` that uses accelerator and DDP. If I call it from a bash script, everything works perfectly. However, when I convert it into a python function and call it within the for loop, I keep getting OOM issues after training for several iterations.
```
RuntimeError: CUDA out of memory. Tried to allocate X MiB (GPU A; B GiB total capacity; C GiB already allocated; D MiB free; E MiB cached)
```
I figured out the problem is that the memory accumulates across iterations and at some point, I’ll get an OOM error. Calling from a bash script works well because it clears up the memory after the script gets done.
I found this thread https://github.com/huggingface/transformers/issues/1742, which suggests deleting all tensor objects after each iteration then collecting garbage and emptying the GPU cache:
```
del model, optimizer  # delete objects that hold GPU tensors (exact names depend on your training function)
gc.collect()
torch.cuda.empty_cache()
```
It helped a bit but I still got an OOM error after training for like 10 iterations. Then, I tried another trick that moves the tensor objects to CPU before deleting them based on this thread https://github.com/pytorch/pytorch/issues/31252
```
gc.collect()
for obj in gc.get_objects():
if not isinstance(obj, torch.Tensor):
continue
obj.data = obj.data.cpu()
if isinstance(obj, torch.nn.Parameter) and obj.grad is not None:
obj.grad.data = obj.grad.cpu()
del obj
torch.cuda.empty_cache()
```
It helped a lot and I was able to train up to 40 iterations, but then I got the following error
```
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 1 (pid: 137092) of binary: /home/tuvu/anaconda3/envs/bert/bin/python
Traceback (most recent call last):
File "/home/tuvu/anaconda3/envs/bert/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/tuvu/anaconda3/envs/bert/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/tuvu/anaconda3/envs/bert/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/home/tuvu/anaconda3/envs/bert/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/tuvu/anaconda3/envs/bert/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/tuvu/anaconda3/envs/bert/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/home/tuvu/anaconda3/envs/bert/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/tuvu/anaconda3/envs/bert/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
selftraining.py FAILED
-------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-02-17_11:54:34
host : node132.cm.cluster
rank : 1 (local_rank: 1)
exitcode : -9 (pid: 137092)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 137092
```
It did not throw up an OOM error like before but I feel like it's still about memory (exitcode: -9).
@sgugger Any insight on this would be much appreciated! Thanks! <|||||>I would strongly recommend not doing a loop on a function for training but launching various scripts (which can be automated in a bash loop as well), as it's very hard to keep track of all objects that Python might allocate memory for and not release.
The `Accelerator` has a method to clean up a bit its internal references, but the code you suggested should already clean up everything, so I don't have a better suggestion.<|||||>@sgugger: Gotcha! Thanks for the suggestion!<|||||>>
I'm running into a similar issue with memory growing across iterations. Obviously, your suggestion speaks directly to this. Can I ask why you think the several-scripts method is better? I mean, if it works, that's an argument by itself. But I'd be curious to hear if you have further thoughts.<|||||>FYI, I fixed my memory issue by using the [accelerator.free_memory()](https://huggingface.co/docs/accelerate/accelerator#accelerate.Accelerator.free_memory) call at the end of my method. Hopefully, this will help future people searching for memory issues when using accelerators<|||||>Yes, [accelerator.free_memory()](https://huggingface.co/docs/accelerate/accelerator#accelerate.Accelerator.free_memory) also solved the problem for me.<|||||>@sgugger: Hi Sylvain, this pull request https://github.com/huggingface/transformers/pull/16738 implements self-training for text-classification tasks. It uses [accelerator.free_memory()](https://huggingface.co/docs/accelerate/accelerator#accelerate.Accelerator.free_memory) to release all references to the internal objects stored and call the garbage collector. Could you help review when you have a chance? |
transformers | 15,655 | closed | [SpeechEncoderDecoder] Make sure no EOS is generated in test | # What does this PR do?
Fixes the flaky test:
```tests/test_modeling_speech_encoder_decoder.py::Wav2Vec2Speech2Text2::test_encoder_decoder_model_generate```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-15-2022 07:59:47 | 02-15-2022 07:59:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The reason for the flakiness comes from this recent change in `generate()`. <|||||>Thanks for fixing! |
transformers | 15,654 | closed | LayoutLMv2Model can not be imported from transformers | Hi @NielsRogge
Getting an error when importing the LayoutLMv2Model class with `from transformers import LayoutLMv2Model`, as instructed in the [official documentation](https://huggingface.co/docs/transformers/model_doc/layoutxlm)
`ImportError: cannot import name 'LayoutLMv2Model' from transformers (unknown location)`
It should be under **transformers/models/layoutlmv2/modeling_layoutlmv2.py**
I have `transformers` version: 4.5.1:
| 02-15-2022 04:15:07 | 02-15-2022 04:15:07 | The issue has been resolved. The LayoutLMv2Model class exists in `transformers==4.16.2`.<|||||>what about installing from the source
```
!git clone https://github.com/huggingface/transformers.git
%cd transformers
!pip install -e .
```
I tried to run run_mlm.py but got many errors when importing the modules
|
transformers | 15,653 | closed | Updated the RAG training with latest Pytorch Lightning library and the RAY | # What does this PR do?
Updated the RAG script with the latest libraries.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? not applicable
## Who can review?
@patrickvonplaten
| 02-15-2022 04:06:36 | 02-15-2022 04:06:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@shamanez are you planning to also update the end2end rag example? <|||||>> @shamanez are you planning to also update the end2end rag example?
Yes soon. I am also thinking of merging RAG with [Retro-like](https://deepmind.com/research/publications/2021/improving-language-models-by-retrieving-from-trillions-of-tokens) training. |
transformers | 15,652 | closed | add a network debug script and document it | As discussed on slack this PR adds a new network debug script to uncover DDP issues and documents it.
@sgugger | 02-15-2022 03:51:03 | 02-15-2022 03:51:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,651 | closed | Add a missing space in a deprecation message | null | 02-15-2022 03:50:36 | 02-15-2022 03:50:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,650 | closed | Inference API with GPT2 {"error": "Unknown error"} | ### Who can help
@Narsil
@LysandreJik
## Information
When I try to use the Inference API with gpt2, I am receiving a response saying there is an `Unknown error`. I have used the same codebase for a few months now. Initially it was working, then the API started returning errors about CUDA (only when using GPU), then the API was working again, and now the API is returning `{'error': 'Unknown error'}` for both CPU and GPU inference.
## To reproduce
import json
import requests

headers = {"Authorization": f"Bearer {BEARER}"}
API_URL = "https://api-inference.huggingface.co/models/gpt2"
data = json.dumps({"inputs": 'Here is a sentence that I like', "parameters":{"num_return_sequences":1, "max_length":1},"options": {"wait_for_model": False, "use_cache": False, "use_gpu":False}})
response = requests.request("POST", API_URL, headers=headers, data=data)
test_output = json.loads(response.content.decode("utf-8"))
## Expected behavior
Returns the generated completion by gpt2
| 02-14-2022 23:55:19 | 02-14-2022 23:55:19 | ```python
import json
import os
API_TOKEN = os.getenv("API_TOKEN")
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://api-inference.huggingface.co/models/gpt2"
def query(payload):
data = json.dumps(payload)
response = requests.request("POST", API_URL, headers=headers, data=data)
return json.loads(response.content.decode("utf-8"))
data = query(
{
"inputs": "The answer to the universe is",
"parameters": {"num_return_sequences": 1, "max_length": 1},
"options": {"wait_for_model": False, "use_cache": False, "use_gpu": False},
}
)
print(data)
```
This seems to work currently. Are you sure about your Bearer ? (It shouldn't produce `unknown error` anyway, but I fail to see what's wrong in your code).
Can you share a little about your environment too? (Python version, requests version)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,649 | closed | Allow custom code for Processors | # What does this PR do?
This PR allows code for dynamic processors, with the same API as configurations, feature extractors, models and tokenizers.
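For reference, loading such a processor would look roughly like this (the checkpoint name is a placeholder):
```python
from transformers import AutoProcessor

# trust_remote_code is needed so the custom processor code stored in the repo is executed
processor = AutoProcessor.from_pretrained("username/model-with-custom-processor", trust_remote_code=True)
```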
There needs to be a slight change in the way the `auto_map` field is stored in the dynamic tokenizers: we need to have a dict there like in the other configs. This PR changes the way new dynamic tokenizer configs are saved with backward compatibility to read the legacy formats. This is also checked with a test loading a dynamic tokenizer with the new and old format. | 02-14-2022 20:29:28 | 02-14-2022 20:29:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Very clean! Thanks |
transformers | 15,648 | closed | TrOCR not working anymore after 4.16.2 update | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.2
- Platform: Colab
- Python version: 3.8?
- PyTorch version (GPU?): latest in colab
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help
Models:
TrOCR @NielsRogge
I believe something has happened to the Trainer class that now makes evaluation of NielsRogge's TrOCR model run into a bug.
I have reproduced his notebook here, and encountered the bug when using the latest transformers version.
https://colab.research.google.com/drive/1ZcwbH_JzBMT7M84eDkUw9eJRqBxyofkB?usp=sharing
| 02-14-2022 19:07:38 | 02-14-2022 19:07:38 | Thanks for reporting, this has been fixed in #15603.
For now, you have to install from source to have it:
`pip install git+https://github.com/huggingface/transformers`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,647 | closed | Enable `image-segmentation` on `AutoModelForSemanticSegmentation` | # What does this PR do?
Enable `image-segmentation` on `AutoModelForSemanticSegmentation`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 02-14-2022 17:47:50 | 02-14-2022 17:47:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I think it's a mistake to group together tasks as different as instance segmentation and semantic segmentation in the same pipeline, especially when we haven't fully defined the instance segmentation outputs yet.
It's a long standing debate between us, and I think we just need to define what are pipelines really and have a proper design doc on which we agree on.
I think I understand your position, but I think we differ on the premises.
IMHO, what defines a pipeline is IN ORDER:
- Achieve a given task, targeted towards non-ML Python programmers, hiding as many ML details as possible without misleading users (transformers have a limited range of attention, so this shouldn't be hidden).
- A task is a set of input/output. For simplicity, aim for core Python types first: `str`, `dict`, `list`, `int`, `float`. Use "accepted standards" everywhere possible (`PIL.Image` for Python images, for instance).
- Compromise as little as possible performance-wise.
- Parameters should enable advanced usage/tweaks only. Users shouldn't have to use them for standard usage.
That means:
- A 1-1 mapping between `AutoModelForXXX` and a task is not something that makes sense in general (`automatic-speech-recognition` does ASR for both CTC and Seq2seq models, and users shouldn't have to care about the difference between those two). Input: `audio` -> output: `str` *is* what defines the pipeline here; see the sketch below.
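For instance, a sketch of that contract, where the model ids and audio path are just examples:
```python
from transformers import pipeline

# a CTC checkpoint and a Seq2seq checkpoint go through the same call
# and return the same kind of output
for checkpoint in ["facebook/wav2vec2-base-960h", "facebook/s2t-small-librispeech-asr"]:
    asr = pipeline("automatic-speech-recognition", model=checkpoint)
    print(asr("sample.flac"))  # -> {"text": "..."} in both cases
```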
>
> It's as if we grouped token classification and text classification in the same task for NLP, just because they both have classification in their names, in my opinion
Well, `token-classification` is actually the same thing as `text-segmentation` (it's just not being used as a name by the community). If we follow the guidelines I suggest, the pipeline should actually never mention `tokens` (it's an ML concept). What non-ML users should care about is "part of text" (because "words" is not necessarily what you are looking for).
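A sketch of what "part of text" looks like in practice; the model id is just an example.
```python
from transformers import pipeline

# with aggregation enabled, the pipeline returns spans of the original string
# rather than individual tokens
ner = pipeline("token-classification", model="dslim/bert-base-NER", aggregation_strategy="simple")
print(ner("My name is Clara and I live in Berkeley."))
# -> [{"entity_group": "PER", "word": "Clara", ...}, {"entity_group": "LOC", "word": "Berkeley", ...}]
```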
My personal picture is that, more generally, for an input set K ⊂ Ω:
- classification: applications such that `K -> [0; n]`
- segmentation: `K -> {M, M ⊂ K}` (parts of K)
- generation: `K -> {K_i, K_i ⊂ Ω}` (new members of Ω)
This applies to Audio, Image, and Text pretty well, I think.
>
> It would be cleaner to have a separate `semantic-image-segmentation`.
<|||||>> A task is a set of input/output. For simplicity, aim for core Python types first: str, dict, list, int, float. Use "accepted standards" everywhere possible (PIL.Image for Python images, for instance).
This is the crux of the problem. Instance segmentation and semantic segmentation have very different kinds of outputs. Grouping them in the same pipeline will be confusing to the user and makes the code unreadable, so I really don't see why we should group them in the same alias.<|||||>Major modifications with breaking changes, linked to a good compromise we found with @sgugger.
My major objections to creating a separate pipeline were:
- It adds complexity for non-ML users (they would need to know about the differences between 200 tasks), sometimes with only minor differences in what a user actually cares about (where is the tumor located on the image, and how big is it?).
- It forces users to handle odd matrices like `[[0, 2, 9], [0, 2, 2], ...]`, which are not real images but 2D arrays of class ids; these are confusing when you try to display them, as they appear pure black in Colab, for instance.
@sgugger's main objections to a single pipeline:
- We're trying to force different outputs into the same object when they are not really the same thing.
- Discussing further, the real problem is the confusion that stems from `parameters`, which could be different for semantic vs. instance segmentation or panoptic. This is definitely an issue with parameters in general for pipelines.
- `score` is a good example: in semantic segmentation it's not really meaningful, since people mostly look at individual pixel logits rather than the full cluster.
- Semantic segmentation can output a single mask for 2 persons, which could be confusing.
- The consensus was to really aim at outputting "non-ML" objects.
  Most ML users who want the details should be able to use the raw AutoModelForXXX classes, or even override the pipelines to tweak subtle things. That might be something to explore and document more in the future, but the core target for pipelines is non-ML users, so we're OK giving up something to gain simplicity.
- `scores` don't make sense for semantic segmentation, so don't attempt to return them there.
We're also working on pipeline guidelines to help guide us in these decisions; we'll iterate on them a bit internally first, then release them to a wider audience to get community feedback.
```
- Removed the string-compressed PNG output; that's a job for an API. `transformers` users
stay in python land.
- Removed the `score` for semantic segmentation. It hardly has a meaning
on its own in this context.
- Don't include the grayscale with logits for now (which could enable
users to get a sense of confidence). Might be done later.
- Don't include the surface of the mask (could be used for sorting by
users, to filter out small masks). It's already calculable, and
it's easier to add later than to add now and break later if we need.
```<|||||>@sgugger Since this contains a breaking change, should I wait until after the release to make the change, or not?<|||||>I see this more as a bug fix than a breaking change, but let's see what @LysandreJik thinks.
Lysandre, for context: the pipeline used to return strings for masks (so they could be used in the inference API), and it now returns masks as PIL images. I think it's completely okay since it makes the output of the pipeline more readable.
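For reference, a sketch of the call and the new output shape; the checkpoint and image path are placeholders, and the keys simply follow the discussion above.
```python
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic")
for result in segmenter("street_scene.jpg"):
    # each entry has a label, an optional score, and a PIL.Image mask
    print(result["label"], result["score"], result["mask"].size)
```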