repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 14,739 | closed | Fixing tests for Perceiver | # What does this PR do?
- Do not run the image-classification pipeline (_CHECKPOINT_FOR_DOC uses the checkpoint for
language, which cannot load a FeatureExtractor, so the current logic fails).
- Add a safeguard to not run tests when `tokenizer_class` or
`feature_extractor_class` **are** defined, but cannot be loaded.
This happens for Perceiver with the "FastTokenizer" (which doesn't exist,
so it is None) and the FeatureExtractor (which does exist but cannot be loaded
because the checkpoint doesn't define one, which is reasonable for the
said checkpoint).
- Added a `get_vocab` function to `PerceiverTokenizer` since it is used by
the `fill-mask` pipeline when the `targets` argument is used to narrow down
the set of possible values (a minimal sketch of such a method is shown below).
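For illustration only, here is a minimal sketch of what a `get_vocab` implementation typically looks like for a slow tokenizer. This is an assumption about the general shape of the method, not the exact code added in this PR:

```python
# Illustrative sketch - the real PerceiverTokenizer implementation may differ in details.
def get_vocab(self):
    vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
    vocab.update(self.added_tokens_encoder)
    return vocab
```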
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 12-13-2021 09:31:30 | 12-13-2021 09:31:30 | > Do not run image-classification pipeline (_CHECKPOINT_FOR_DOC uses the checkpoint for
language, which cannot load a FeatureExtractor so current logic fails).
Ok but the Perceiver has 3 variants (`PerceiverForImageClassificationLearned`, `PerceiverForImageClassificationFourier`, `PerceiverForImageClassificationConvProcessing`) that should work with the image classification pipeline. So with the current logic, we can't test them?
These 3 checkpoints each have a feature extractor defined (I've uploaded the `preprocessor_config.json` to the hub for these checkpoints).<|||||>@NielsRogge , I added some slow tests for now to make sure we can run the pipeline.
Ideally we would be able to run the `run_pipeline_test`, but the configs are different in each case.
At least for now we have some proof that it works.<|||||>And readded the fast tests too.
It works by using `update_config_with_model_class` in the model_tester. Not sure it's the best way, but there's definitely a dependency between the ModelClass and the desired `config.d_model`.<|||||>For readers: merged #14745 to skip perceiver tests while we work on this PR.<|||||>Could you please rebase on `master` and ensure everything is green before merging? Thanks!<|||||>OK, the moon-landing failing tests are back up ! |
transformers | 14,738 | closed | Mention no images added to repository | Mention that images shouldn't be added to the repository as it will otherwise significantly weigh down the repository. | 12-13-2021 08:50:53 | 12-13-2021 08:50:53 | Will merge this as a first step!<|||||>Merged too soon? We need a concrete destination for images that we all share.<|||||>Here it is: https://huggingface.co/datasets/huggingface/documentation-images<|||||>but it's not part of this PR that was merged. i.e. @NielsRogge's suggestion wasn't integrated into the doc.
additionally please please keep these instructions in `docs/README.md` - thank you! |
transformers | 14,737 | closed | Add support to DistilBertLMHeadModel | # 🚀 Feature request
## Background
Hi, I am the first author of the paper ["**TSDAE**: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). The work proposed a new method, TSDAE to train unsupervised-sentence-embedding models with denoising **autoencoder** loss on top of BERT-like PLMs. The code of TSDAE has been integrated into the SBERT repo and one can find the source file [here](https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/losses/DenoisingAutoEncoderLoss.py) and the example training script [here](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE).
## What is missing
As one can tell from the "autoencoder" in the name, TSDAE needs a decoder (i.e. `LMHead`) to work. However, the popular [**DistilBERT**](https://github.com/huggingface/transformers/blob/master/src/transformers/models/distilbert/modeling_distilbert.py) in the current HF-transformers **does not support it as a decoder** (i.e. missing something called `DistilBertLMHeadModel`).
Previously, I **used to bypass** this issue with [my personal extension](https://gist.github.com/kwang2049/1f0e1f0ce119456284c0af048ba097a7) and one can find the [discussion/issues](https://github.com/UKPLab/sentence-transformers/issues/962) in the SBERT repo about this. Recently, as users reported, I have just found **this way is not really maintainable and continuable**: HF-transformers keeps updating the code all the time and I need to update the extension accordingly then.
## Goal
Add support to **DistilBertLMHeadModel**. Actually, **I have made the changes in my fork** (for PyTorch) and it just needs to be merged (of course, after the code review from the HF community).
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
With support for DistilBertLMHeadModel, all the decoding functions will become possible for all the amazing DistilBERT models, e.g. TSDAE learning, seq2seq tasks, etc.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I have forked the latest master branch and added the corresponding support for PyTorch. One can check the pull-request for details. Really appreciate it if this issue can be solved.
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 12-13-2021 00:49:54 | 12-13-2021 00:49:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,736 | closed | XLNetLMHeadModel has no attribute 'from_config' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.0
- Platform: Linux-4.15.0-20-generic-x86_64-with-LinuxMint-19-tara
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
### Who can help
@patrickvonplaten
## Information
Model I am using: XLNet. I am trying to train an XLNet model from scratch for Portuguese, but when I run this script: https://github.com/huggingface/transformers/blob/v4.8.2-release/examples/pytorch/language-modeling/run_plm.py I get this error:
```
Traceback (most recent call last):
  File "run_plm.py", line 498, in <module>
    main()
  File "run_plm.py", line 330, in main
    model = XLNetLMHeadModel.from_config(config)
AttributeError: type object 'XLNetLMHeadModel' has no attribute 'from_config'
```
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
Portuguese Wikipedia dataset
| 12-12-2021 22:01:29 | 12-12-2021 22:01:29 | Good catch @josutk,
I think we should change this line to:
```
model = XLNetLMHeadModel(config)
```
would you like to open a PR for this? :-)<|||||>@patrickvonplaten I have opened a PR. |
transformers | 14,735 | closed | Avoid using tf.tile in embeddings for TF models | # What does this PR do?
Some TF models use
```
position_embeds = tf.gather(params=self.position_embeddings, indices=position_ids)
position_embeds = tf.tile(input=position_embeds, multiples=(input_shape[0], 1, 1))
```
which assumes that `position_ids` has size 1 along the batch dimension. If users don't specify `position_ids`, we create it
(before using it)
```
if position_ids is None:
position_ids = tf.expand_dims(
tf.range(start=past_key_values_length, limit=input_shape[1] + past_key_values_length), axis=0
)
```
which will have batch size 1. However, in `INPUTS_DOCSTRING`, it specifies the shape to be `(batch_size, seq_len)`.
If a user provides a full batch for `position_ids` (although this is very unlikely), `tf.tile` shouldn't be used here.
This PR fixes this issue.
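As a self-contained illustration of why the `tf.tile` is unnecessary (toy shapes only, not the actual model code):

```python
import tensorflow as tf

batch_size, seq_len, hidden_size, max_positions = 2, 5, 8, 512
position_embedding_table = tf.random.normal((max_positions, hidden_size))
inputs_embeds = tf.random.normal((batch_size, seq_len, hidden_size))

# A full (batch_size, seq_len) `position_ids`, as the docstring allows.
position_ids = tf.broadcast_to(tf.range(seq_len)[None, :], (batch_size, seq_len))

# tf.gather keeps whatever batch dimension `position_ids` has, so the result already
# has shape (batch_size, seq_len, hidden_size) and adds directly to the word embeddings.
# (With a batch-1 `position_ids`, the (1, seq_len, hidden_size) result simply broadcasts.)
position_embeds = tf.gather(position_embedding_table, position_ids)
embeddings = inputs_embeds + position_embeds
print(embeddings.shape)  # (2, 5, 8)
```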
## Who can review?
@Rocketknight1 | 12-12-2021 16:42:13 | 12-12-2021 16:42:13 | I love it, thank you for doing this! I wonder if there's a reason for using the `Add()` layers like that originally? It feels very odd.<|||||>Either way, it's a straightforward change and I'm happy to merge as-is, so let me know once you're ready.<|||||>> I love it, thank you for doing this! I wonder if there's a reason for using the `Add()` layers like that originally? It feels very odd.
I also feel the same, and don't know why `Add()` is used. I removed it here also because `Add()` requires the shape to be the same, including batch dim (and won't work after I removed tf.tile).
The PR is ready. I can rebase on master to see if I can make the tests green.<|||||>Failed tests are irrelevant to this PR. Let me know if you prefer to wait and rebase later.<|||||>No, we're seeing those tests on every PR. I'm happy to merge now - let me know whenever the PR is done!<|||||>> No, we're seeing those tests on every PR. I'm happy to merge now - let me know whenever the PR is done!
It's done. You can merge. Thanks!<|||||>Done! |
transformers | 14,734 | closed | Fix: change tooslow to slow | # What does this PR do?
`test_saved_model_creation_extended` in the `TFViT` test is currently marked `tooslow`. It was copied from the TF common test.
#14415 moved it from the common test to the TF core test and changed it to `slow`.
This PR fixes this inconsistency.
## Who can review?
@Rocketknight1 | 12-12-2021 15:35:57 | 12-12-2021 15:35:57 | LGTM! |
transformers | 14,733 | closed | Do we have pretrain (from scratch) scripts for BART? | I just want to train a new BART on my own dataset. But it seems that there are no pretraining scripts.
| 12-12-2021 13:14:43 | 12-12-2021 13:14:43 | Oh, sorry, the answer is listed here: [https://github.com/huggingface/transformers/issues/5096#issuecomment-645860271](url) |
transformers | 14,732 | closed | Add ability to get a list of supported pipeline tasks | # What does this PR do?
This change adds the ability to programmatically get a list of supported pipeline tasks.
Currently, when new users want to know which tasks are supported in a pipeline, we have to reference the [Pipelines documentation](https://huggingface.co/docs/transformers/main_classes/pipelines) or parse the SUPPORTED_TASKS and TASK_ALIASES dictionaries. Being able to programmatically get a list of supported pipeline tasks would be more convenient since it requires less context switching and parsing.
Example usage:
```
>>> from transformers.pipelines import get_supported_tasks
>>> print(*get_supported_tasks(), sep="\n")
audio-classification
automatic-speech-recognition
conversational
feature-extraction
fill-mask
image-classification
image-segmentation
ner
object-detection
question-answering
sentiment-analysis
summarization
table-question-answering
text-classification
text-generation
text2text-generation
token-classification
translation
zero-shot-classification
```
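For context, a rough sketch of what such a helper could look like inside `transformers.pipelines`. The actual implementation in this PR may differ; `SUPPORTED_TASKS` and `TASK_ALIASES` below refer to the existing module-level dicts mentioned above:

```python
from typing import List


def get_supported_tasks() -> List[str]:
    """Return a sorted list of the task strings supported by `pipeline()`, including aliases."""
    return sorted(list(SUPPORTED_TASKS.keys()) + list(TASK_ALIASES.keys()))
```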
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @Narsil
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-12-2021 00:46:05 | 12-12-2021 00:46:05 | Thanks for the PR!
Off-topic from the actual PR but for background information, this is where the "canonical" list of pipelines (even outside of transformers) supported by the HuggingFace hub is stored: https://github.com/huggingface/huggingface_hub/blob/main/widgets/src/lib/interfaces/Types.ts<|||||>Thanks for the reviews, everyone! And, wow, that was so fast! 🤩
|
transformers | 14,731 | closed | Possible simplification in T5 docs | I noticed the following nested list comprehension being recommended in the [T5 docs:](https://huggingface.co/docs/transformers/model_doc/t5)
```python
# replace padding token id's of the labels by -100
labels = [
[(label if label != tokenizer.pad_token_id else -100) for label in labels_example] for labels_example in labels
]
labels = torch.tensor(labels)
```
I believe it's possible to simplify above to:
```python
labels = torch.tensor(labels)
labels[labels == tokenizer.pad_token_id] = -100
```
I would be happy to make a PR if the change is deemed relevant, though I'm not sure where the docs are located so please let me know if needed :) | 12-11-2021 22:05:55 | 12-11-2021 22:05:55 | Hey,
that's indeed a way more readable way of replacing padding tokens by -100.
The docs of T5 are located [here](https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/t5.rst). Note that we are migrating all docs from .rst files to Markdown. But for now, you can just update the code example.<|||||>Thanks for sharing! I made a PR |
transformers | 14,730 | closed | Fix docs pointers for DeepSpeed | In this [section](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#deepspeed), the reader is pointed to several links that are broken:
* https://huggingface.co/docs/transformers/master/en/main_classes/trainer#deepspeed-trainer-integration
* https://huggingface.co/docs/transformers/master/en/main_classes/trainer#deepspeed-installation
* https://huggingface.co/docs/transformers/master/en/main_classes/trainer#deepspeed-multi-gpu
* https://huggingface.co/docs/transformers/master/en/main_classes/trainer#deepspeed-one-gpu
* https://huggingface.co/docs/transformers/master/en/main_classes/trainer#deepspeed-config-passing
* etc
All of them should be changed like this:
`https://huggingface.co/docs/transformers/master/en/main_classes/trainer#(.*)` --> `https://huggingface.co/docs/transformers/master/en/main_classes/deepspeed#(.*)`
I would do it myself, but I see that the code uses e.g. `` :ref:`deepspeed-notebook` `` and I'm not sure how to point to a section in another page 🤷♂️ It looks like a trivial change though
| 12-11-2021 19:07:11 | 12-11-2021 19:07:11 | If I'm not mistaken this is patched on `master` and will be reflected soon on the documentation cc @sgugger <|||||>@LysandreJik At least in this line: https://github.com/huggingface/transformers/blame/master/docs/source/main_classes/trainer.rst#L506, it seems to be still outdated. Are you sure this is patched?<|||||>I will push a patch later today. |
transformers | 14,729 | closed | Add `ElectraForCausalLM` -> Enable Electra encoder-decoder model | # What does this PR do?
This PR adds `ElectraForCausalLM` and thus enables using the `Electra` model in the encoder-decoder setting.
Fixes #14269
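For context, a hedged usage sketch (not taken from this PR's code) of how an ELECTRA encoder-decoder could be assembled once `ElectraForCausalLM` exists, assuming the standard `EncoderDecoderModel` API and the public `google/electra-base-discriminator` checkpoint:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# The decoder side is loaded as a causal LM, with cross-attention added automatically.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/electra-base-discriminator", "google/electra-base-discriminator"
)
tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")

# BERT-like encoder-decoder setups need these set explicitly before training.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("A short input sentence.", return_tensors="pt")
labels = tokenizer("A short target sentence.", return_tensors="pt").input_ids
outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
print(float(outputs.loss))
```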
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
**Additional context:** I've already tried to fine-tune Electra model as an encoder-decoder model on our use case and it proved to be working. | 12-11-2021 19:03:09 | 12-11-2021 19:03:09 | This looks good to me as well! Regarding the docs, could you try to follow @sgugger 's suggestion [here](https://github.com/huggingface/transformers/pull/13967#issuecomment-999565664) on how to fix it?
Let me know if you need any help :-)<|||||>Merging as the failing test has been fixed on master and other failures are unrelated. Thanks a lot for your work @stancld ! |
transformers | 14,728 | closed | Include documentation on linking to transformers docs with Intersphinx | I'm one of the maintainers of PyKEEN and we're adding a feature where you can use various transformers in combination with knowledge graph embedding models in https://github.com/pykeen/pykeen/pull/652. We're using sphinx to build our documentation and have out-links to various packages like numpy, but we haven't been able to figure out how to use the intersphinx configuration to link to transformers' docs. I spent a bit of time googling, but did not find any answers. Is this even possible? If so, could you include some information either here or in your docs themselves on what the objects.inv URL for your docs? | 12-11-2021 14:05:35 | 12-11-2021 14:05:35 | Note that we do not use Sphinx anymore for our doc, since moving to https://huggingface.co/docs/transformers (we use raw markdown with some custom features on top of it, implemented in Svelte)
However (not at all familiar with Intersphinx so I'm shooting in the dark here) maybe there's a specification that we could implement to facilitate linking to specific classes or methods inside our documentation? Does anyone know more about this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I got this working by adding the following to my `intersphinx_mapping` in the sphinx `conf.py`:
```python
intersphinx_mapping = {
'datasets': ('https://huggingface.co/docs/datasets/master/en/', None),
'transformers': ('https://huggingface.co/docs/transformers/master/en/', None),
}
```
The key is that there's an `objects.inv` file in each of these directories, which is what sphinx needs. |
transformers | 14,727 | closed | Difference between vocab_size in model T5forConditionalGeneration “t5-small” and its corresponding Tokenizer “t5-small” | ## Environment info
- `transformers` version: 4.13.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): BART and T5
The problem arises when using:
* [Yes] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
from transformers import BartTokenizer, BartForConditionalGeneration
t5model = T5ForConditionalGeneration.from_pretrained('t5-small')
t5tokenizer = T5Tokenizer.from_pretrained('t5-small')
print("For T5:")
print("Tokenizer vocab_size: {}".format(t5tokenizer.vocab_size))
print("Model vocab size: {}\n".format(t5model.config.vocab_size))
bartmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
barttokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
print("For BART:")
print("Tokenizer vocab_size: {}".format(barttokenizer.vocab_size))
print("Model vocab size: {}".format(bartmodel.config.vocab_size))
```
Current Output
```
For T5:
Tokenizer vocab_size: 32100
Model vocab size: 32128
For BART:
Tokenizer vocab_size: 50265
Model vocab size: 50265
```
## Expected behavior
Both the model and the corresponding tokenizer should ideally have the same `vocab_size`. In case the additional `vocab_size` in the model is meant to incorporate prefix tokens, why should they not be included in the corresponding tokenizer?
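For readers hitting the same question: as far as I understand, the extra ids in the T5 checkpoint's embedding matrix (32128 vs. 32100) come from the embedding size being rounded up, so `len(tokenizer)` (which also counts tokens added at runtime) is the safer quantity to resize to. A hedged sketch, where the token names are purely illustrative:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# `vocab_size` excludes tokens added at runtime; `len(tokenizer)` includes them.
num_added = tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])  # illustrative tokens
model.resize_token_embeddings(len(tokenizer))

# After resizing, model.config.vocab_size == len(tokenizer).
print(len(tokenizer), model.config.vocab_size)
```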
| 12-11-2021 05:15:06 | 12-11-2021 05:15:06 | My main goal is to `add_tokens` to the T5Tokenizer, but not sure how the `vocab_size` of the model would get affected due to this, after resize_embeddings. It would help to get an idea of how to incorporate the above.<|||||>See #4875 |
transformers | 14,726 | closed | [CI/pt-nightly] switch to cuda-11.3 | This PR updates the pt-nightly ci job to use a more recent cuda-11.3 for the docker image and pytorch install.
@LysandreJik | 12-11-2021 01:16:46 | 12-11-2021 01:16:46 | |
transformers | 14,725 | closed | [doc] document MoE model approach and current solutions | This PR gives the reader an idea of a novel MoE Transformer architecture approach for much faster training, albeit with much larger memory requirements.
While most of the solutions are in the Tensorflow/TPU land, Deepspeed has introduced a pytorch solution based on Megatron-Deepspeed.
@sgugger | 12-11-2021 00:24:13 | 12-11-2021 00:24:13 | As a reference for future PRs, it would be nice to put the images in a dataset hosted on hf.co - putting them in the repo weighs down the repo significantly (+72kb here) and it can't be removed from the history without altering it. Images in the docs can also be taken from a dataset, see SegFormer for example: https://huggingface.co/docs/transformers/master/en/model_doc/segformer#overview
See how it was added here: https://github.com/huggingface/transformers/blame/master/docs/source/model_doc/segformer.rst#L44-L45
I have opened this PR to clarify this in the contribution guide: https://github.com/huggingface/transformers/pull/14738 <|||||>@LysandreJik maybe it would make it more obvious to everyone if we moved all images there. They won't disappear from history but there is no `imgs` folder, no one will be tempted to add one?<|||||>Yes, that's a very good call!<|||||>That's a very good point about not blowing up the repo, @LysandreJik!
Will this actually work well though? Aren't browsers these days capable of blocking remote urls that don't coincide on the same domain for security conscious users?<|||||>We have tested it with @NielsRogge: it works well for the segformer above, and everything resides on huggingface.co.
I'll take care of migrating all the images (including the one above) in a PR this week, will ensure that the images are correctly linked.<|||||>> We have tested it with @NielsRogge: it works well for the segformer above, and everything resides on huggingface.co.
You have tested it how? Do you have the security tools installed? Like Privacy Badger plugin and alikes?
For context/background - many websites install all kinds of tracking devices, which can be an external js code, an image, etc. So tools like Privacy Badger will often skip any such 3rd party urls if configured so. Which may catch images as well.
Since I have Privacy Badger I can test this if you give me an example of a doc that includes remote images.
> I'll take care of migrating all the images (including the one above) in a PR this week, will ensure that the images are correctly linked.
Thank you!
<|||||>We empirically tested it - I use Brave which blocks ads and trackers but I don't use Privacy Badger, so I'd be happy for you to tell me how it works for you!
Please let me know if you see the images here: https://github.com/LysandreJik/transformers/blob/documentation-images/docs/source/parallelism.md#zero-data-parallel
and here: https://huggingface.co/docs/transformers/master/en/model_doc/segformer#overview
I have no issue with those, which are currently hosted in the following dataset: https://huggingface.co/datasets/huggingface/documentation-images/tree/main
If this is sufficient testing for you, let me know and I'll finalize/open the PR to remove the images and rely on the external dataset instead.<|||||>Thank you for sharing how you tested it, @LysandreJik
and I tested with Privacy Badger - all works - it doesn't consider those as trackers.
So all is good!
Thank you!<|||||>Perfect, thank you for testing it out! |
transformers | 14,724 | closed | Update transformers metadata | # What does this PR do?
This PR adds a job that will auto-update the repository [transformers-metadata](https://huggingface.co/datasets/huggingface/transformers-metadata) each time it's necessary. | 12-10-2021 23:24:50 | 12-10-2021 23:24:50 | |
transformers | 14,723 | closed | Add Speaker Diarization and Verification heads | # What does this PR do?
This adds the Audio Frame classification (equivalent of token classification) and X-vector (speaker embedding extraction) heads to `Wav2Vec2` and `UniSpeech-SAT` models. The target tasks for the heads are SUPERB's Speaker Diarization and Speaker Verification respectively. These were mainly motivated by `UniSpeech-SAT`, since the model performs better on those tasks, rather than on ASR.
Sources for the models and weights from the SUPERB's `s3prl` framework:
* `ModelForAudioFrameClassification`: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/diarization/model.py (with `use_rnn=False`)
* `ModelForXVector`: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/sv_voxceleb1/model.py#L261
The heads for both W2V2 and US-SAT were finetuned from scratch, since the official checkpoints use custom (better) heads that are incompatible with SUPERB's evaluation protocol.
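For illustration, a hedged sketch of how the X-vector head could be used for speaker verification once merged. It assumes the class ends up named `Wav2Vec2ForXVector` with an `.embeddings` output; the checkpoint below is a placeholder (a real speaker-verification checkpoint would be used in practice):

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForXVector

checkpoint = "facebook/wav2vec2-base"  # placeholder checkpoint
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2ForXVector.from_pretrained(checkpoint)

# Two dummy mono 16 kHz waveforms standing in for real utterances.
waveforms = [np.random.randn(16_000).astype(np.float32), np.random.randn(16_000).astype(np.float32)]
inputs = feature_extractor(waveforms, sampling_rate=16_000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings

# Cosine similarity between the two speaker embeddings; the accept/reject threshold is tuned on a dev set.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(float(similarity))
```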
| 12-10-2021 23:19:46 | 12-10-2021 23:19:46 | @patrickvonplaten the code is ready for a full review now. I'll post the model links and evaluation results (DER and EER metrics) shortly.<|||||>checking tomorrow after the `-large` checkpoint problem is solved :-) |
transformers | 14,722 | closed | Fix broken links to distillation on index page of documentation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes broken links to DistilGPT2, DistilRoberta, and DistilmBERT on the homepage of documentation.

## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-10-2021 22:17:25 | 12-10-2021 22:17:25 | @sgugger Running that command modifies `index.mdx` and resets the links back to the broken link. I'm not sure why that's the case.

<|||||>Ah you need to fix them in the main README, the rest is updated from it.<|||||>@sgugger Thank you, that makes sense. Do I need to apply these also to the language-specific READMEs or are those auto-generated?

<|||||>If you run `make fix-copies`, all your changes should be propagated.<|||||>Hello @amitness, let us know if you'd like for us to run `make fix-copies` on your fork (and if so, please invite us as contributors so that we may push to it!)
Thanks!<|||||>@LysandreJik I have already run `make fix-copies` and there are no changes to any file after running it. The CI is still failing due to some other reason.

This [CI task](https://app.circleci.com/pipelines/github/huggingface/transformers/31179/workflows/4b5e0854-ff3f-4ccc-8669-1cf826a02e3d/jobs/321016) says the failure is due to code quality checks in docs, but my change is only switching the links in a README. Thoughts?<|||||>Ah, indeed the error says the following:
```
Traceback (most recent call last):
File "utils/style_doc.py", line 550, in <module>
main(*args.files, max_len=args.max_len, check_only=args.check_only)
File "utils/style_doc.py", line 538, in main
raise ValueError(f"{len(changed)} files should be restyled!")
ValueError: 1 files should be restyled!
Exited with code exit status 1
```
Can you run `make fixup`, which should take care of everything style-related?<|||||>@LysandreJik Thank you. That worked and I have pushed all changes.<|||||>Thanks again for fixing! |
transformers | 14,721 | closed | Fix special character in MDX | # What does this PR do?
In the `quicktour.mdx`, there are a few &lt; that should be < in some of the model outputs.
This will need to be cherry-picked when re-building the stable doc. | 12-10-2021 20:35:14 | 12-10-2021 20:35:14 | Merging to test the new automatic jobs to update notebooks and check the doc building is not broken :-)<|||||>:+1: |
transformers | 14,720 | closed | TF model cards | - Creating model cards for models trained with Keras!
- Model cards will automatically be created by the `PushToHubCallback` - the callback will peek at the metrics and accumulate information during the training run to enable this.
- Calling `model.push_to_hub()` will not, by default, create a model card. This method is cross-platform (it comes from the `PushToHubMixin`), and changing its behaviour would negatively affect `Trainer` users.
- Instead, Keras models now have a `create_model_card()` method if users want to create a model card outside of the `PushToHubCallback`. Because this method can't peek at the training history like the callback can, you need to pass the `History` object returned by `model.fit()` to create the card. | 12-10-2021 20:19:29 | 12-10-2021 20:19:29 | @sgugger all comments should be addressed and testing looks good. My main remaining concern is that `model.push_to_hub` does not generate a model card by default, because that method comes from the cross-platform `PushToHubMixin`. If you want, I can edit that method so that it creates a model card if you pass it a Keras model history? This would leave the behaviour for `Trainer` unchanged.<|||||>I'm fine with both options, so it's really up to what you think is best @Rocketknight1
Also, if you could rebase on master to fix the CI that would be great :-)<|||||>Will do both!<|||||>@LysandreJik I'm hoping to post an example with it when it's ready, but right now I'm having some issues with the method generating malformed YAML and they're a pain to track down! <|||||>@sgugger @LysandreJik This should be ready to go now - some tests are failing even after rebasing but they have nothing to do with this PR. Okay if I merge?<|||||>Yes you can! |
transformers | 14,719 | closed | Fixing tests for perceiver (texts) | # What does this PR do?
Fixes the tests by supporting `input_ids` on PerceiverForSequenceClassification&co.
If both `inputs` and `input_ids` are supplied, crash with an error.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-10-2021 18:33:27 | 12-10-2021 18:33:27 | |
transformers | 14,718 | closed | Automatically build doc notebooks | # What does this PR do?
This PR automatically build the notebooks associated to the doc pages and updates them when building the documentation. For now it relies on [this unmerged PR](https://github.com/huggingface/doc-builder/pull/50/files) on the doc-builder side, hence the specific branch (so I can test).
Once the PR there is merged we can remove the specific branch from the job.
As seen with @LysandreJik, I am going to merge this PR to test everything goes well and will revert if not :-) | 12-10-2021 17:28:14 | 12-10-2021 17:28:14 | |
transformers | 14,717 | closed | Support for Kaldi formatted Audio files, especially "segments" | # 🚀 Feature request
Kaldi is a popular open source speech recognition system that manages audio in a simple but unique format. It would be great to have some addition to the datasets object that would take a Kaldi directory and turn this into a dataset. Barring this, it would be very helpful to have a modification of the datasets audio class that would permit processing of parts of long audio files rather than have to create a new set of audio files from the larger set.
## Motivation
Would make it much easier to have cross system portability and allow HuggingFace to easily support a much larger collection of audio data.
| 12-10-2021 17:02:07 | 12-10-2021 17:02:07 | Thanks for your interest, @picheny-nyu!
Indeed, this could be quite useful. The logic can either be implemented in `huggingface/datasets` for each dataset in Kaldi format individually, or we could add an example script to `transformers/examples` that converts Kaldi datasets. Feel free to open a PR if you'd like to contribute those :)
Also pinging @patrickvonplaten since he has a bit more experience with Kaldi <|||||>Hey @picheny-nyu,
I very much agree that we should have better support to allow datasets of long audio files to be chunked and I agree that this should happen in `datasets`.
Regarding interoperability with Kaldi, I think it's a nice idea, but I'm a bit worried about whether this is very easy to do or not especially given that Kaldi is written in C++.
Kaldi is also not very actively maintained anymore I think as the original authors seem to focus more and more on Python-based libraries for ASR:
- https://github.com/lhotse-speech/lhotse
- https://github.com/k2-fsa/icefall
<|||||>Also cc @lhoestq <|||||>Thanks for the response. I was primarily thinking of kaldi input formats for waveforms and transcriptions, not kaldi functionality in general, though that would also be nice. They have some very nice features for organizing and processing input data that looks like it would not be too hard to reproduce inside huggingface processing. We could ask the kaldi authors if they plan to continue this in their new python version too. <|||||>Let's see what @lhoestq thinks here<|||||>Hi ! Yes feel free to open an issue in https://github.com/huggingface/datasets to discuss the loading of Kaldi structured datasets. We can imagine having something like
```python
from datasets import load_dataset
my_kaldi_dataset = load_dataset("kaldi_folder", data_dir="path/to/my/kaldi/data")
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>If someone else needs this, here is how I tackled the problem:
```python
import os

import datasets
import kaldiio # https://github.com/nttcslab-sp/kaldiio
_DATA_DIR = "path/to/kaldi_dir"
_FILEPATHS = {
"train": {
"feats": "dump/train/feats.scp",
"texts": "data/train/text",
},
"valid": {
"feats": "dump/valid/feats.scp",
"texts": "data/valid/text",
},
"test": {
"feats": "dump/test/feats.scp",
"texts": "data/test/text",
},
}
class DatasetWithPrecomputedFeatures(datasets.GeneratorBasedBuilder):
def _split_generators(self, _):
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={"split": "train"},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
gen_kwargs={"split": "valid"},
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={"split": "test"},
),
]
def _generate_examples(self, split):
        with open(os.path.join(_DATA_DIR, _FILEPATHS[split]["texts"])) as f:
            texts = dict(map(lambda s: s.strip().split(maxsplit=1), f))
featfile = os.path.join(_DATA_DIR, _FILEPATHS[split]["feats"])
feats_generator = kaldiio.load_scp(featfile)
for key, (uttid, transcript) in enumerate(texts.items()):
feats = feats_generator[uttid]
if feats.ndim > 2:
if feats.shape[0] != 1:
raise ValueError(f"Too many dimensions for {uttid}: {feats.shape}")
feats = feats.squeeze(0)
yield key, {"feats": feats, "text": transcript, "id": uttid}
```
<|||||>@qmeeus this looks great! Would you like to open a pull requests in `datasets` to add this dataset download file under `datasets/datasets/kaldi_folder`?
cc @polinaeterna @lhoestq |
transformers | 14,716 | closed | Adding support for multiple mask tokens. | # What does this PR do?
- Original implem: https://github.com/huggingface/transformers/pull/10222
When presented with multiple masks, it's impossible to retrieve the joint probabilities.
Instead of trying to work around that (see discussions in the previous PR), this PR
just outputs the raw `top_k` propositions at each mask position, since it gets trickier to find a good
proxy for "joint probabilities". Instead of trying to solve this impossible problem, we simply
show exactly what the model outputs (see the illustration below).
@naveenjafer is mentioned as co-author since much of this PR was pulled from there.
This PR was resurrected partly because Perceiver (a byte-level model) needs to do this type of masking to be useful.
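For illustration, a hedged sketch of what calling the pipeline with several masks looks like under this behaviour (the printed candidates are of course model-dependent):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilroberta-base")
outputs = fill_mask("Paris is the <mask> of <mask>.", top_k=3)

# With several masks, the result is one list of top_k candidate dicts per mask position,
# rather than a single ranking of joint completions.
for position, candidates in enumerate(outputs):
    print(position, [candidate["token_str"] for candidate in candidates])
```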
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 12-10-2021 14:25:46 | 12-10-2021 14:25:46 | |
transformers | 14,715 | closed | Update bug-report.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Small update to have a clearer distinction between @patil-suraj's and @patrickvonplaten's responsibilities in Transformers.
Also added some new models.
@patil-suraj @LysandreJik @sgugger - is that ok for you?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-10-2021 14:08:32 | 12-10-2021 14:08:32 | Also @Narsil,
I've put you under generate now as well - is this ok? Think you know the method pretty well by now :-)<|||||>@patrickvonplaten I am ok with that ! |
transformers | 14,714 | closed | IterableDatasetShard should use per device batch size instead of real… | IterableDatasetShard should use per device batch size instead of real batch size | 12-10-2021 13:40:13 | 12-10-2021 13:40:13 | |
transformers | 14,713 | closed | [Adafactor] Fix adafactor | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11536
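For context, the crash reported in #11536 (and shown in the traceback below) comes from `_approx_sq_grad` building the update with `torch.mm`, which only accepts 2-D tensors. A rough sketch of the kind of broadcasting-based variant that avoids it — not necessarily the exact diff of this PR:

```python
import torch


def _approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col):
    # Factored second-moment estimate built with a broadcast element-wise product
    # instead of torch.mm, so it also works for parameters with more than two dims.
    r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt().unsqueeze(-1)
    c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt()
    return torch.mul(r_factor, c_factor)
```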
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-10-2021 12:07:31 | 12-10-2021 12:07:31 | I've tested this fix on the following script:
```
CUDA_VISIBLE_DEVICES="0" python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="ntu-spml/distilhubert" \
--dataset_config_name="ab" \
--output_dir="./dummy" \
--overwrite_output_dir \
--num_train_epochs="3" \
--per_device_train_batch_size="4" \
--gradient_accumulation_steps="1" \
--learning_rate="5e-5" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--save_steps="500" \
--eval_steps="500" \
--logging_steps="1" \
--layerdrop="0.0" \
--save_total_limit="1" \
--mask_time_prob="0.3" \
--mask_time_length="10" \
--mask_feature_prob="0.1" \
--mask_feature_length="64" \
--freeze_feature_extractor \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--do_train --do_eval \
--gradient_checkpointing \
--adafactor
```
and while this does **not** work on master:
```
File "/home/patrick/python_bin/transformers/trainer.py", line 1377, in train
self.scaler.step(self.optimizer)
File "/home/patrick/anaconda3/lib/python3.9/site-packages/torch/cuda/amp/grad_scaler.py", line 338, in step
retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
File "/home/patrick/anaconda3/lib/python3.9/site-packages/torch/cuda/amp/grad_scaler.py", line 285, in _maybe_opt_step
retval = optimizer.step(*args, **kwargs)
File "/home/patrick/anaconda3/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
return wrapped(*args, **kwargs)
File "/home/patrick/anaconda3/lib/python3.9/site-packages/torch/optim/optimizer.py", line 88, in wrapper
return func(*args, **kwargs)
File "/home/patrick/python_bin/transformers/optimization.py", line 577, in step
update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
File "/home/patrick/python_bin/transformers/optimization.py", line 508, in _approx_sq_grad
return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0))
RuntimeError: self must be a matrix
```
it does work with this fix as proposed by https://github.com/huggingface/transformers/issues/11536#issuecomment-958971693
In general, I think we should not have this optimizer in `transformers` at all (like @sgugger I think), but for now it's better to have something working than something broken IMO. |
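For reference, here is a minimal standalone usage sketch of the optimizer being discussed. The hyperparameters are illustrative only; passing `--adafactor` to the `Trainer` (as in the script above) wires this up automatically, so this is just to show what the external-learning-rate configuration looks like, assuming `model` is an already-built PyTorch model:
```python
from transformers.optimization import Adafactor

# `model` is assumed to be a torch.nn.Module built elsewhere
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,                 # illustrative value
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```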
transformers | 14,712 | closed | DeBERTa V3 Fast Tokenizer | # 🚀 Feature request
Fast Tokenizer for DeBERTA-V3 and mDeBERTa-V3
## Motivation
DeBERTa V3 is an improved version of DeBERTa. With the V3 version, the authors also released a multilingual model "mDeBERTa-base" that outperforms XLM-R-base. However, DeBERTa V3 currently lacks a FastTokenizer implementation which makes it impossible to use with some of the example scripts (They require a FastTokenizer).
DeBERTa-V1 and DeBERTa-V2 both have a FastTokenizer implementation, it would be great to have one for DeBERTa-V3.
| 12-10-2021 11:48:12 | 12-10-2021 11:48:12 | From my understanding DeBERTa-V3 has the same tokenizer as V2.
The problem is Transformers DeBERTa-V2 does not have a FastTokenizer implementation,
so we need a request for V2 FastTokenizer.<|||||>That would be a nice community contribution! I'll add the `Good First Issue` label, and happy to guide anyone from the community to add a DeBERTa v2 fast tokenizer with @SaulLu!<|||||>I've looked into it, but the only "problem" I see is the own SPMTokenizer implementation (for the slow tokenizer). It basically wraps spm, but does some own preprocessing steps...
<|||||>Indeed, it would be great to have fast versions for the tokenizers of the models :
- deberta-v2
- deberta-v3
- mdeberta-v3
As @stefan-it raised, I think we are indeed missing some information to be able to build a rust version of the spm tokenizer used. I tried to have a quick look at the papers corresponding to each of the models (https://arxiv.org/abs/2006.03654 and https://arxiv.org/abs/2111.09543) and unfortunately, the tokenizer modeling is not explained in them.
As far as I know, there is no way to retro-engineer the spm binaries, but maybe I'm wrong! So, at least, I think we need to know the command used by the authors to train their tokenizer, I saw that several people asked for it on issues on their [repo](https://github.com/microsoft/DeBERTa) ([issue 1](https://github.com/microsoft/DeBERTa/issues/82), [issue 2](https://github.com/microsoft/DeBERTa/issues/38)) but the answer is not in it. So, the only tracks I see are 1) to bounce on the existing issues to indicate that we would also be interested in this information or 2) to contact directly the authors. Indeed, if we know the command that was used to train this tokenizer, we should be able to assemble the right tokenizer components to have a fast tokenizer!
Moreover, concerning mdeberta-v3, in their paper they mention that :
> We denote the model as mDeBERTa-base. We use the same SentencePiece vocabulary as mT5 which has 250k tokens.
But unfortunately the binaries [spiece.model](https://huggingface.co/google/mt5-base/blob/main/spiece.model) for mT5 and [spm.model](https://huggingface.co/microsoft/mdeberta-v3-base/blob/main/spm.model) for mdeberta-v3 are not equal <|||||>Gently removing the "Good First Issue" label while solving these issues. Also pinging the author @BigBird01 :)<|||||>There's also a vocab mismatch:
mDeBERTa: 251000
mT5: 250112
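A quick way to inspect the two sentencepiece binaries mentioned above is to load them directly. This is only a sketch: the file names are the ones used in the respective Hub repos (`microsoft/mdeberta-v3-base` and `google/mt5-base`) and have to be downloaded first:
```python
import sentencepiece as spm

sp_mdeberta = spm.SentencePieceProcessor(model_file="spm.model")    # from microsoft/mdeberta-v3-base
sp_mt5 = spm.SentencePieceProcessor(model_file="spiece.model")      # from google/mt5-base

# print the number of pieces in each model to compare the vocabularies
print(sp_mdeberta.get_piece_size(), sp_mt5.get_piece_size())
```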
<|||||>I think the most comfortable solution would be to use the T5 Fast Tokenizer (using the mDeBERTa vocab file) - but there are a lot of details to be checked:
Token mapping is different (t5):
https://github.com/huggingface/transformers/blob/824fd44fc314b0d13b1ef91e8122d35d14af5ad9/src/transformers/models/t5/tokenization_t5.py#L113-L116
DeBERTa:
https://github.com/huggingface/transformers/blob/824fd44fc314b0d13b1ef91e8122d35d14af5ad9/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L108-L116
:thinking: <|||||>In DeBERTa tokenizer, we remapped [CLS]=>1, [PAD]=>0, [UNK]=>3, [SEP]=>2 while keep other pieces unchanged.
I checked [T5Converter](https://github.com/huggingface/transformers/blob/824fd44fc314b0d13b1ef91e8122d35d14af5ad9/src/transformers/convert_slow_tokenizer.py#:~:text=class%20T5Converter(SpmConverter)%3A), I think it should work by directly use T5Converter to convert deberta v2/v3 tokenizer to faster tokenizer, except for the post_processor part:
tokenizer.post_processor = processors.TemplateProcessing(
single="[CLS]:0 $A:0 [SEP]:0",
pair="[CLS]:0 $A:0 [SEP]:0 $B:0 [SEP]:0",
special_tokens=[
("[CLS]", self.original_tokenizer.convert_tokens_to_ids("[CLS]")),
("[SEP]", self.original_tokenizer.convert_tokens_to_ids("[SEP]")),
],
)
Thanks!
<|||||>Thank you so much for your super fast response @BigBird01 :heart_eyes: ! It's a great help to us!
It looks like all the information is there for this to be a good first issue :confetti_ball: !
So I'm putting the label back on and would be really happy to guide with a todo list and help the person who would like to take care of this new feature! :relaxed: <|||||>Hi all, thanks for all the information provided, I have written a Converter Class for DeBERTav2 and testing it manually myself the tokenization looks correct. However I need guidance on how could I write up a DeBERTav2TokenizerFast class so I can add tests! Really appreciate any guidance, thank you!<|||||>> In DeBERTa tokenizer, we remapped [CLS]=>1, [PAD]=>0, [UNK]=>3, [SEP]=>2 while keep other pieces unchanged.
>
> I checked [T5Converter](https://github.com/huggingface/transformers/blob/824fd44fc314b0d13b1ef91e8122d35d14af5ad9/src/transformers/convert_slow_tokenizer.py#:~:text=class%20T5Converter(SpmConverter)%3A), I think it should work by directly use T5Converter to convert deberta v2/v3 tokenizer to faster tokenizer, except for the post_processor part:
>
> tokenizer.post_processor = processors.TemplateProcessing( single="[CLS]:0 $A:0 [SEP]:0", pair="[CLS]:0 $A:0 [SEP]:0 $B:0 [SEP]:0", special_tokens=[ ("[CLS]", self.original_tokenizer.convert_tokens_to_ids("[CLS]")), ("[SEP]", self.original_tokenizer.convert_tokens_to_ids("[SEP]")), ], )
>
> Thanks!
Doing it the way you said I ran into this problem:
The class this function is called from is 'DebertaV2Tokenizer'.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
File "prepare_debertav3_data.py", line 89, in <module>
tokenizer = DebertaV2Converter(tokenizer).converted()
File "/search/odin/lida/.conda/envs/py36/lib/python3.6/site-packages/transformers/convert_slow_tokenizer.py", line 428, in __init__
with open(self.original_tokenizer.vocab_file, "rb") as f:
AttributeError: 'DebertaV2Tokenizer' object has no attribute 'vocab_file'<|||||>A possible solution is mentioned here https://www.kaggle.com/nbroad/deberta-v2-3-fast-tokenizer<|||||>> A possible solution is mentioned here https://www.kaggle.com/nbroad/deberta-v2-3-fast-tokenizer
@kpriyanshu256 , I made that and I got it from here 😉 <|||||>Adding fast tokenizer is not needed. Because this model does not even work for the core task it is supposed to work - the masked language model [MASK].<|||||>
> Adding fast tokenizer is not needed. Because this model does not even work for the core task it is supposed to work - the masked language model [MASK].
Am I right in understanding that the NER pipeline only returns character indexes for tagged spans when a fast tokenizer is used? This could be one advantage of implementing.<|||||>> Adding fast tokenizer is not needed. Because this model does not even work for the core task it is supposed to work - the masked language model [MASK].
Even if the language model weights are not included, the model can still be used for fine-tuning for tasks such as text classification, QA, NER, etc. Since a lot (all?) of the Hugging Face scripts require FastTokenizers, the fast tokenizer is very much needed.
It is very close to being merged to main: (see here https://github.com/huggingface/transformers/pull/15529)<|||||>@nbroad1881
I agree but also MLM task should be remove from the huggingface documentation or there should be warning that there are not weights for MLM task.
Are you sure these other tasks such as NER are going to be as good as in the PDF describing Deberta?
<|||||>Edit: v2 was, v3 wasn't
One other note is that v2 and v3 were not trained with MLM
>
<|||||>The fast tokenizers are now merged to main through https://github.com/huggingface/transformers/pull/15529<|||||>why it is closed? the issue is not solved! It has errors when use return_offsets_mapping, could you open it?
I am using 4.18.0:
```
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base", use_fast=True)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
>>> tokenizer.encode_plus('what it is', return_offsets_mapping=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py", line 2566, in encode_plus
**kwargs,
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py", line 639, in _encode_plus
"return_offset_mapping is not available when using Python tokenizers. "
NotImplementedError: return_offset_mapping is not available when using Python tokenizers. To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast. More information on available tokenizers at https://github.com/huggingface/transformers/pull/2674
```<|||||>@world2vec please update your transformers version, i don't think the pip has been updated to > 4.19.0 yet
```
>>> import transformers
>>> transformers.__version__
'4.21.0.dev0'
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base", use_fast=True)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
>>> tokenizer.encode_plus('what it is', return_offsets_mapping=True)
{'input_ids': [1, 339, 278, 269, 2], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1], 'offset_mapping': [(0, 0), (0, 4), (4, 7), (7, 10), (0, 0)]}```<|||||>It was added in 4.19 :hugs:
https://github.com/huggingface/transformers/releases/tag/v4.19.0 |
transformers | 14,711 | closed | Adding `Perceiver` to `AutoTokenizer`. | # What does this PR do?
Make
```python
tokenizer = AutoTokenizer.from_pretrained("deepmind/language-perceiver")
```
work.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
--> | 12-10-2021 11:35:06 | 12-10-2021 11:35:06 | |
transformers | 14,710 | closed | GPT-NEO Inconsistent inference TPU vs GPU | Problem: On GPU with the Hugging Face library I'm getting much weaker results compared to inference on TPU
Model: GPT-NEO 1.3B
Steps: 20750
Dataset: Books crawl (file ".txt" 240 Gb)
- I've prepared dataset and new Tokenizer according to [original GPT-NEO](https://github.com/EleutherAI/gpt-neo) guide.
- For 1.3B model i've requested from TFRC TPUv3-128 for 14 days.
- For inference on TPU i'm using [original GPT-NEO](https://github.com/EleutherAI/gpt-neo) repo colab code.
- Smaller models (125m and 300m parameters) outputs weaker results too.
- I've trained GPT-NEO 1.3B from scratch on my corpus
### Model on GPU initialization
```python
configuration_xl = GPTNeoConfig(vocab_size=36384)
model_xl_our = AutoModelForCausalLM.from_pretrained("/content/drive/MyDrive/gpt3_XL_weights/model.ckpt-20750", from_tf=True, config=configuration_xl)
tokenizer_xl_our = Tokenizer.from_file("/content/byte-level-bpe.tokenizer.json")
```
### Generation on GPU is done with these parameters
```python
output = model.generate(
    input_ids=input_ids,
    min_length=adjusted_min_length,
    max_length=adjusted_max_length,
    return_full_text=False,
    do_sample=True,
    # do_sample=True,
    early_stopping=True,
    num_beams=5,
    top_k=5,
    temperature=1.,
    # temperature=0.7,
    no_repeat_ngram_size=2,
    # no_repeat_ngram_size=0,
    top_p=0.98,
    pad_token_id=1,
    eos_token_id=0
)
```
### example prompt: "Современное поколение" ("The modern generation")
### TPU inference:
"Современное поколение «цивилизованных» стран вынесло на стартовые площадки фатального социального явления - «массовое жилищное строительство. Предпочтительным для индивидуально-дворовых жилищных кооперативов являются европейские жилищные районы, такие как Гаага, Манхэттен, Стамбул. В то время как в Европе, например, существовали многочисленные смешанные многоквартирные дома, владелец которых жил в гостевом домике, в России практически невозможно встретить квартиру в одноквартирном доме.
Инфраструктура в данной ситуации являет зримый образ человека как социальной единицы."
### GPU inference:
"Современное поколение.
\- А как же? - спросил он. - Я не понимаю, что это такое, - сказал он, и, как бы в шутку, но в голосе его голосе не было ни тени сомнения, ни малейшего сомнения в том, о чем-то, а не мог ли он сам по-другому?
Помогло быть. Но я не могу понять, почему он не может быть, я, если бы он был я. И все-таки не я и все же, потому что я - это я? Или, может ли я в этом мире, где-нибудь, в каком-либо месте, или в другом месте? И если да, то как это может случиться, чтобы я это был? А что же это значит для меня? Как это могло случиться то, кто же все это сделал? Что же в таком случае? Кто же ты такой, ты? Почему ты не можешь быть не ты, когда ты это сделать? Если бы ты был ты. Кто ты есть? Ты кто? Где ты и кто такой и что ты такое. Что ты делаешь? Зачем ты так говоришь. Как же так? Я же не такой ты говоришь, не так ли ты что ли. Ты? Не так же ли это ты кто ты такая. А ААА- - - не не же такой же как ты ты...
И если если так и если же если и так то что- то- это не то же. Так же и не потому, так. .. ?"
If you know a little Russian, you can see that on GPU the model outputs endless existential questions and gibberish.
For your convenience, I'll leave a link to Google Translate :)
https://translate.google.ru/
Thanks in advance! | 12-10-2021 05:15:33 | 12-10-2021 05:15:33 | cc @patil-suraj and @patrickvonplaten<|||||>Hey @sjeffry-o,
I'm not really sure what we can do here to help. Did you train a model and then when you run the model on inference on TPU it works well and the same model does not work well on GPU? Could you give us a reproducible code snippet that we can just run to reproduce the error? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,709 | closed | [WIP] add noisy-average new word embed init | # What does this PR do?
This WIP PR is intended to start a discussion around the default initialization of new word embeddings when the vocabulary of a pretrained LM is expanded.
A few existing github issues (e.g., #8472, #8039) bring up the issue that after adding new words to the `gpt2` model ("vocabulary expansion") (and before any finetuning), the model now _only_ generates the new words.
In a [blog post](https://nlp.stanford.edu//~johnhew//vocab-expansion.html), I characterize when this happens (it doesn't happen for all models) -- it's due to the initialization of the new words' embeddings to small random noise. This can change the pretrained LM's distribution to a very large extent. In the blog post I demonstrate under what conditions this is the case.
I briefly also show that this "put all the probability on only the new words" behavior can also harm adaptation performance finetuning on a domain. I speculate it's because the first few gradient steps change the whole model just to remove probability from the new words!
I also show that averaging all existing embeddings to initialize the new words' embeddings guarantees that you avoid this behavior, by guaranteeing that you deviate very little from the pre-vocabulary expansion LM.
In this WIP PR, I demonstrate what this might look like on the PyTorch side, suggesting (1) giving users an explicit choice of option for how to initialize the embeddings, and (2) making "average all the existing embeddings + add noise" the default.
If you try out the following code under this commit, you'll see the differences in generation:
```
import transformers
tok = transformers.GPT2Tokenizer.from_pretrained('gpt2')
tok.add_tokens(['Aragorn', 'Frodo', 'Lothlorien'])
sent = 'I love dogs because they'
```
This replicates the current default behavior:
```
model = transformers.AutoModelForCausalLM.from_pretrained('gpt2')
model.resize_token_embeddings(len(tok), init_strategy='as_pretrained')
tok.decode(model.generate(**tok(sent, return_tensors='pt'), do_sample=True)[0])
> "'I love dogs because they Lothlorien Lothlorien Lothlorien..."
```
And this is under the "average the embeddings + add noise" behavior:
```
model = transformers.AutoModelForCausalLM.from_pretrained('gpt2')
model.resize_token_embeddings(len(tok), init_strategy='avg_emb')
tok.decode(model.generate(**tok(sent, return_tensors='pt'), do_sample=True)[0])
> "'I love dogs because they are amazing fun and not boring."
```
Naturally, we expect or intend to finetune these embeddings anyway, but I hope my blog post demonstrates that regardless, the choice of initialization can matter.
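For readers who just want the idea in isolation, here is a standalone sketch of the "average the existing embeddings + add noise" initialization discussed above. The function name, noise scale, and wiring are illustrative, not the PR's actual API:
```python
import torch


def expand_embeddings_with_average(embedding: torch.nn.Embedding, num_new_tokens: int, noise_std: float = 1e-4):
    # existing pretrained embedding matrix, shape (old_vocab, dim)
    old_weight = embedding.weight.data
    # mean of all pretrained rows, used as the base for every new token
    mean_embedding = old_weight.mean(dim=0, keepdim=True)
    # new rows = mean + small Gaussian noise so they are not all identical
    new_rows = mean_embedding.repeat(num_new_tokens, 1) + noise_std * torch.randn(num_new_tokens, old_weight.size(1))
    new_embedding = torch.nn.Embedding(old_weight.size(0) + num_new_tokens, old_weight.size(1))
    new_embedding.weight.data = torch.cat([old_weight, new_rows], dim=0)
    return new_embedding
```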
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ WIP] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ todo if we move forward] Did you write any new necessary tests?
## Who can review?
Here's a few people who may be interested: @patrickvonplaten @n1t0
| 12-10-2021 00:53:05 | 12-10-2021 00:53:05 | Thanks for the pull request @john-hewitt - I'm inclined to add such a feature. I think it's a good addition and necessary since we've seen issues about this before.
What do you think @LysandreJik @sgugger ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,708 | open | [WIP] [performance doc] faster/leaner optimizers | documenting faster / leaner optimizers
TODO:
- add `--optim adamw_bnb_8bit` for HF Trainer.
| 12-09-2021 18:49:27 | 12-09-2021 18:49:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_14708). All of your documentation changes will be reflected on that endpoint. |
transformers | 14,707 | open | Erroneous 404 warning when using AutoTokenizer.from_pretrained | ## Environment info
- `transformers` version: 4.12.5
- Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu)
- Jax version: 0.2.24
- JaxLib version: 0.1.73
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): any model
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ x] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
`tokenizer = transformers.AutoTokenizer.from_pretrained("bigscience/tr3m-1B3-pile-checkpoints", revision="global_step1500")`
An erroneous warning is displayed: `404 Client Error: Not Found for url: https://huggingface.co/bigscience/tr3m-1B3-pile-checkpoints/resolve/main/config.json`, hinting that the AutoTokenizer is looking in the wrong place (the master branch), ignoring the revision argument, and not finding the file. However, the tokenizer is correctly initialized.
## Expected behavior
No warning. | 12-09-2021 18:12:40 | 12-09-2021 18:12:40 | Seems to be working fine for me
```
Python 3.7.13 (default, Mar 28 2022, 07:24:34)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("bigscience/tr3m-1B3-pile-checkpoints", revision="global_step1500")
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.74k/1.74k [00:00<00:00, 847kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.29M/1.29M [00:01<00:00, 944kB/s]
>>> transformers.__version__
'4.18.0.dev0'
```<|||||>Seems like the erroneous warning was redundant and removed since 0f71c2905 commit
```bash
Author: lewtun <[email protected]>
Date: Mon Feb 14 18:03:07 2022 +0100
Remove redundant error logging in from_pretrained() method (#15631)
* Remove error logging in from_pretrained() method
```
### bisect log (old -> the error msg displayed, new -> the error msg not displayed)
```bash
git bisect start
# old: [ef3cec0ca577e5950e42e8de1a2991b5dc85dfa6] Release: v4.12.5
git bisect old ef3cec0ca577e5950e42e8de1a2991b5dc85dfa6
# new: [31ec2cb2newfbdd4c1ac9c6c9b8a74e974984206] Release: v4.18.0
git bisect new 31ec2cb2newfbdd4c1ac9c6c9b8a74e974984206
# old: [62bf536631d22e28aaac4d19c4d0d901ebf015ad] Release v4.12.0
git bisect old 62bf536631d22e28aaac4d19c4d0d901ebf015ad
# old: [3385ca2582f47239efbc7fc45b044d02ff60f736] Change REALM checkpoint to new ones (#15439)
git bisect old 3385ca2582f47239efbc7fc45b044d02ff60f736
# new: [b7018abf3ce34ed9e2d7dddb5fcf3a2af27a37f8] TF: Unpack model inputs through a decorator (#15907)
git bisect new b7018abf3ce34ed9e2d7dddb5fcf3a2af27a37f8
# new: [3de12906c8e5e27b2216642f0496a3efda8c4edd] fix: hfdeepspeed config argument (#15711)
git bisect new 3de12906c8e5e27b2216642f0496a3efda8c4edd
# old: [eed3186b79d7ef9187113849f9ee7882bdd359fd] Trigger doc build
git bisect old eed3186b79d7ef9187113849f9ee7882bdd359fd
# new: [9eb7e9ba1d132eec947e95988f90ddc41e3bb65d] Fix ASR pipelines from local directories with wav2vec models that have language models attached (#15590)
git bisect new 9eb7e9ba1d132eec947e95988f90ddc41e3bb65d
# old: [8c03df101064f70e301dce54f76a9cb1f8e392aa] Rebase (#15606)
git bisect old 8c03df101064f70e301dce54f76a9cb1f8e392aa
# old: [2b8599b2df6a09f83bd8b19086f691a648af74cb] Fix a bug that ignores max_seq_len in preprocess (#15238)
git bisect old 2b8599b2df6a09f83bd8b19086f691a648af74cb
# new: [e314c19a3ff52b39f33453ab6c7f7b3c6c12413e] fix bug for the log of RNG states are not properly loaded exception. (#15638)
git bisect new e314c19a3ff52b39f33453ab6c7f7b3c6c12413e
# old: [b090b790228bbe420f1667f8b0335c8b8e5bb5eb] Make Swin work with VisionEncoderDecoderModel (#15527)
git bisect old b090b790228bbe420f1667f8b0335c8b8e5bb5eb
# new: [2e11a043374a6229ec129a4765ee4ba7517832b9] Register feature extractor (#15634)
git bisect new 2e11a043374a6229ec129a4765ee4ba7517832b9
# new: [0f71c290535672d7b0eff5858a3dc3d7bd07a983] Remove redundant error logging in from_pretrained() method (#15631)
git bisect new 0f71c290535672d7b0eff5858a3dc3d7bd07a983
# first new commit: [0f71c290535672d7b0eff5858a3dc3d7bd07a983] Remove redundant error logging in from_pretrained() method (#15631)
```
If you set logging level to DEBUG like below
```python
import transformers
import logging
logging.basicConfig(level=logging.DEBUG)
tokenizer = transformers.AutoTokenizer.from_pretrained("bigscience/tr3m-1B3-pile-checkpoints", revision="global_step1500")
```
since 0f71c290535672d7b0eff5858a3dc3d7bd07a983 commit (v4.16.0) will give
```bash
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/tokenizer_config.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/config.json HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/tokenizer_config.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/vocab.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/merges.txt HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/tokenizer.json HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/added_tokens.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/special_tokens_map.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/tokenizer_config.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/main/config.json HTTP/1.1" 404 0
```
but the previous versions give
```bash
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/tokenizer_config.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/config.json HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/tokenizer_config.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/vocab.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/merges.txt HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/tokenizer.json HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/added_tokens.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/special_tokens_map.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/tokenizer_config.json HTTP/1.1" 404 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /bigscience/tr3m-1B3-pile-checkpoints/resolve/main/config.json HTTP/1.1" 404 0
404 Client Error: Entry Not Found for url: https://huggingface.co/bigscience/tr3m-1B3-pile-checkpoints/resolve/main/config.json
```
but HTTPS connection to **/bigscience/tr3m-1B3-pile-checkpoints/resolve/global_step1500/config.json** was already successful according to the logging messages.
|
transformers | 14,706 | closed | [chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): bert-base-chinese
The problem arises when using:
* [https://github.com/huggingface/transformers/blob/master/examples/research_projects/mlm_wwm/run_mlm_wwm.py] the official example scripts: `rum_mlm_wwm.py`
The tasks I am working on is: pretraining whole word masking with my own dataset and ref.json file
I tried to follow the run_mlm_wwm.py procedure to do whole-word-masking pretraining. My file is in .txt form, where one line represents one sample, with `9,264,784` Chinese lines in total. The ref.json file also contains 9,264,784 lines of whole-word-masking reference data for my Chinese corpus. But when I try to adapt the run_mlm_wwm.py script, it shows that somehow after
`datasets["train"] = load_dataset(...`
`len(datasets["train"])` returns `9,265,365`
then, after `tokenized_datasets = datasets.map(...`
`len(tokenized_datasets["train"])` returns `9,265,279`
I'm really confused; I tried to trace the code myself, but after a week of trying I still can't figure out what happened.
I want to know what happens inside the `load_dataset()` function and `datasets.map` here, and how I ended up with more lines of data than I put in, so I'm here to ask.
## To reproduce
Sorry that I can't provide my data here, since it does not belong to me, but I'm sure I removed the blank lines.
## Expected behavior
I expect the code to run as it should, but the AssertionError on line 167 keeps being raised because the number of lines in the reference JSON and in datasets['train'] differ.
Thanks for your patient reading! | 12-09-2021 17:47:05 | 12-09-2021 17:47:05 | This script is not an actively maintained example, so you should ping the original contributor for any question on it :-)<|||||>@julien-c <|||||>@hyusterr Sorry for late, I met the same question.
The file loaded by `load_dataset()` is handled a bit differently from what you might expect, so just run the following check:
```python
for line in data:
assert len(line.splitlines()) == 1
```
It works for me, hope it helps.<|||||>Thanks for your information! I will try it.<|||||>I found that it's "^]" that makes `load_dataset` split more lines than expected, e.g.
7-ELEVEN/ 提供 分享 統一 企業 董事長 羅智先 28日 表示 , 統一 超 去年 開 無 人 商店 「 X-STORE 」 , 並 不 是 要 節省 人力 成> 本 , 「 而是 預防 未來 台灣 找 不 到 服務 人員 」 ; 外界 關心 統一 超 有 很多 沒有 24 小時 經營 的 門市 , 他 說 , 「 我們 有 部 分 商店 沒 24 小時 營業 的 條件 , 其實 一 天 16 小時 都 很 夠 了 ^] 」 。
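For anyone hitting the same mismatch, here is a rough sketch of how such lines could be pre-cleaned before building the corpus and ref.json; the file names are illustrative:
```python
# collapse characters such as "\x1d" (^]) that str.splitlines()/load_dataset treat as line breaks,
# so each sample stays on exactly one line
with open("corpus_raw.txt", encoding="utf-8") as src, open("corpus_clean.txt", "w", encoding="utf-8") as dst:
    for raw_line in src:
        pieces = raw_line.splitlines()  # splits on ^] and similar control characters, not just \n
        line = " ".join(p.strip() for p in pieces if p.strip())
        if line:
            dst.write(line + "\n")
```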
Thanks for your information!<|||||>I also ran into the same problem. Hoping you can fix the code sometime. @wlhgtc <|||||>simply `splitlines()` can handle the problem<|||||>>
Thanks for your information. |
transformers | 14,705 | closed | Fix : wrong link in the documentation (ConvBERT vs DistilBERT) | # What does this PR do?
Fixes #14667
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @mishig25
| 12-09-2021 15:49:42 | 12-09-2021 15:49:42 | |
transformers | 14,704 | closed | Fix typo in toctree | # What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-09-2021 14:23:37 | 12-09-2021 14:23:37 | regarding this conversation, every new doc page needs to be added to _toctree.yml (it serves the same purpose as this section of previous index.rst)
https://github.com/huggingface/transformers/blame/fc1d97f29d7b98e82ae17fc5ac49229e2859bcca/docs/source/index.rst#L603-L652
we can probably create some automatic script that adds a new model to toctree
@sgugger @LysandreJik <|||||>Indeed, @sgugger was mentioning the same just yesterday :)<|||||>Yes, I plan to add a failure to the doc-builder if there is a MDX generated that ends up not being in the index (like sphinx used to do). Finishing the conversion to notebook and will add that next :-) |
transformers | 14,703 | closed | Fix Perceiver tests | # What does this PR do?
This PR fixes the Perceiver tests by setting the appropriate device, and fixing a test for the tokenizer. | 12-09-2021 12:49:57 | 12-09-2021 12:49:57 | |
transformers | 14,702 | closed | MarianForCausalLM doc example not working | ## Environment info
- `transformers` version: 4.13.0.dev0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Marian: @patrickvonplaten
## Information
Model I am using: `MarianForCausalLM`
In `modeling_marian.py`, there is a doc example for `MarianForCausalLM` that is not working. See `To reproduce`.
```
>>> from transformers import MarianTokenizer, MarianForCausalLM
>>> tokenizer = MarianTokenizer.from_pretrained('facebook/bart-large')
>>> model = MarianForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
>>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
```
(Is it intended to use `facebook/bart-large` for `MarianForCausalLM`?)
## To reproduce
### Error 1
```
from transformers import MarianTokenizer, MarianForCausalLM
tokenizer = MarianTokenizer.from_pretrained('facebook/bart-large')
```
Gives error:
```
Traceback (most recent call last):
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\tmp.py", line 3, in <module>
tokenizer = MarianTokenizer.from_pretrained('facebook/bart-large')
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\src\transformers\tokenization_utils_base.py", line 1744, in from_pretrained
return cls._from_pretrained(
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\src\transformers\tokenization_utils_base.py", line 1879, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\src\transformers\models\marian\tokenization_marian.py", line 148, in __init__
assert Path(source_spm).exists(), f"cannot find spm source {source_spm}"
File "C:\Users\33611\miniconda3\envs\py39\lib\pathlib.py", line 1072, in __new__
self = cls._from_parts(args, init=False)
File "C:\Users\33611\miniconda3\envs\py39\lib\pathlib.py", line 697, in _from_parts
drv, root, parts = self._parse_args(args)
File "C:\Users\33611\miniconda3\envs\py39\lib\pathlib.py", line 681, in _parse_args
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
### Error 2
```
from transformers import MarianTokenizer, MarianForCausalLM
model = MarianForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
```
Gives error:
```
Traceback (most recent call last):
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\tmp.py", line 7, in <module>
model = MarianForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\src\transformers\modeling_utils.py", line 1453, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_state_dict_into_model(
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\src\transformers\modeling_utils.py", line 1607, in _load_state_dict_into_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for MarianForCausalLM:
size mismatch for decoder.embed_positions.weight: copying a param with shape torch.Size([1026, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
```
The same issue happened for
```
model = BlenderbotForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
```
and
```
model = BlenderbotSmallForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
```
in other doc examples.
### Checkpoint loading issue (using `Helsinki-NLP/opus-mt-en-fr`)
```
from transformers import MarianTokenizer, MarianForCausalLM
model = MarianForCausalLM.from_pretrained('Helsinki-NLP/opus-mt-en-fr', add_cross_attention=False)
```
Gives warnings:
```
Some weights of the model checkpoint at Helsinki-NLP/opus-mt-en-fr were not used when initializing MarianForCausalLM: ['model.encoder.layers.3.fc2.bias', 'model.encoder.layers.5.final_layer_norm.weight', 'model.encoder.layers.2.self_attn.q_proj.bias', 'model.encoder.layers.2.self_attn.out_proj.bias', 'model.encoder.layers.2.self_attn.k_proj.weight', 'model.encoder.layers.3.self_attn.q_proj.bias', 'model.encoder.layers.3.fc1.weight', 'model.encoder.layers.3.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.5.self_attn.v_proj.weight', ..........]
- This IS expected if you are initializing MarianForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing MarianForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of MarianForCausalLM were not initialized from the model checkpoint at Helsinki-NLP/opus-mt-en-fr and are newly initialized: ['lm_head.weight']
```
(It seems to me that something is wrong here and needs some investigation)
## Expected behavior
The doc example should work with checkpoint loaded correctly.
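For reference, here is a sketch of what a corrected docstring example might look like, using a Marian checkpoint instead of `facebook/bart-large`. This is only an illustration; whether the `lm_head` weights map cleanly for this checkpoint still needs the investigation mentioned above:
```python
from transformers import MarianTokenizer, MarianForCausalLM

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
model = MarianForCausalLM.from_pretrained("Helsinki-NLP/opus-mt-en-fr", add_cross_attention=False)
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # causal LM heads return logits, not last_hidden_state
```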
| 12-09-2021 11:36:07 | 12-09-2021 11:36:07 | Similar issue for doc example in `PegasusForCausalLM`
```
Example::
>>> from transformers import PegasusTokenizer, PegasusForCausalLM
>>> tokenizer = PegasusTokenizer.from_pretrained('facebook/bart-large')
>>> model = PegasusForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
```<|||||>Thanks for noticing this - the docstrings are incorrect. We should correct them. Feel free to open a PR - otherwise I'm happy to take a look next week :-)<|||||>WIP PR here - https://github.com/huggingface/transformers/pull/15079<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,701 | closed | Fix doc examples: ... takes no keyword arguments | # What does this PR do?
In some doc examples, there are
```
>>> y = XSoftmax.apply(x, mask, dim=-1)
```
which gives `TypeError: apply() takes no keyword arguments`.
This PR fixes it by
```
>>> # Specify the dimension to apply softmax
>>> dim = -1
>>> y = XSoftmax.apply(x, mask, dim)
```
## Who can review?
@LysandreJik | 12-09-2021 10:21:47 | 12-09-2021 10:21:47 | I tried to solve the merge conflicts but I ended up with code quality issues - do you mind taking a look when you have a minute? Sorry about that!<|||||>> I tried to solve the merge conflicts but I ended up with code quality issues - do you mind taking a look when you have a minute? Sorry about that!
Looks fine now :)<|||||>This is perfect, thanks @ydshieh! |
transformers | 14,700 | closed | Onnx enable tasks for supported models (part 2) | # What does this PR do?
This PR reapplies the reverted PR #14358, and solves the issues that caused the revert.
---
# What does this PR do?
This PR adds support for almost all the features available for already supported models.
Main contributions:
- `OnnxSeq2SeqConfigWithPast`: a new class inheriting from `OnnxConfigWithPast` designed specifically for seq2seq models, this should make things easier for the community to contribute.
- Tests refactoring and parameterization: now every (model, feature) export pair is tested, and is considered as a standalone test (compared to before when everything was considered to be one big test).
- A lot of new features (a feature is a task plus the choice or not to use `past_key_values`), that have been requested by the community (check the list of supported feautres below)
Features now supported:
- For BERT like models: default, sequence-classification, token-classification and question-answering (multiple-choice will be added later).
- For causal language models (GPT-2 and GPT-neo): default, default-with-past, causal-lm, causal-lm-with-past, sequence-classification and token-classification (only for GPT2).
- For Seq2Seq models (T5, BART, mBART):
- T5, BART, mBART: default, default-with-past, seq2seq-lm, seq2seq-lm-with-past
- BART, mBART: causal-lm, causal-lm-with-past, sequence-classification, question-answering
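For example, any of the (model, feature) pairs above can be exported from the command line; the model name here is just an illustration:
```bash
python -m transformers.onnx --model=facebook/bart-base --feature=seq2seq-lm-with-past onnx_output/
```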
| 12-09-2021 10:09:51 | 12-09-2021 10:09:51 | Gently pinging @LysandreJik for his blessing on the latest round of changes :)<|||||>Does this ONNX conversion support `beam search` automatically for BART based summarizers? <|||||>> Does this ONNX conversion support `beam search` automatically for BART based summarizers?
Hi @sorenmc, no, you'll have to implement your own `.generate()` method for the ONNX models. There is a related feature request in the `optimum` library [here](https://github.com/huggingface/optimum/issues/55). In the meantime, you might be interested in checking out BART summarization example [here](https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/summarization)<|||||>hi,
Thanks, HF team, for your great support on this. I am trying to export a BART summarization model.
transformers.__version__ == 4.19.0.dev0
onnxruntime.__version__ == 1.11.1
```python
from transformers import pipeline

model_name = 'lidiya/bart-base-samsum'
summarizer = pipeline("summarization", model=model_name, tokenizer=model_name)
```
```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM
from transformers.models.bart import BartOnnxConfig

config = AutoConfig.from_pretrained(model_name)
onnx_config = BartOnnxConfig(config, task="default")
print(onnx_config.outputs)
```
OrderedDict([('last_hidden_state', {0: 'batch', 1: 'decoder_sequence'})])
I am trying a few export options and none of them gives me the output from the decoder.
## option 1:
```bash
/Users/aavidan/envs/py39/bin/python3.9 -m transformers.onnx --model=lidiya/bart-base-samsum --feature=seq2seq-lm --atol=5e-5 onnx
```
## output 1:
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Using framework PyTorch: 1.10.2
Overriding 1 configuration item(s)
- use_cache -> False
/Users/aavidan/envs/py39/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/Users/aavidan/envs/py39/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:236: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
/Users/aavidan/envs/py39/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:267: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
/Users/aavidan/envs/py39/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:907: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'logits'})
- Validating ONNX Model output "logits":
-[✓] (2, 8, 50265) matches (2, 8, 50265)
-[✓] all values close (atol: 5e-05)
All good, model saved at: onnx/model.onnx
```python
from onnxruntime import InferenceSession, SessionOptions, GraphOptimizationLevel
options = SessionOptions()
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
session = InferenceSession(
'onnx/model.onnx',
sess_options=options, providers=["CPUExecutionProvider"]
)
session.disable_fallback()
outputs = [i.name for i in session.get_outputs()]
feed_dict = summarizer.tokenizer(text)
feed_dict['decoder_input_ids'] = feed_dict['input_ids']
feed_dict['decoder_attention_mask'] = feed_dict['attention_mask']
feed_dict = {k: np.array([v]) for k, v in feed_dict.items()}
pred = session.run(None, feed_dict)
for i, p in enumerate(pred):
print(i, outputs[i], p.shape)
```
## printout -
0 logits (1, 228, 50265)
1 1209 (1, 228, 768)
```python
summarizer.tokenizer.decode(pred[0][0].argmax(axis=-1), skip_special_tokens=True)
```
## what i get -
It gives me back the input text, which basically means the logits simply reproduce the input_ids, and I am guessing from the shape that output `1209` is the encoded vectors for all tokens in the text input. If that is in fact the case, how do I export the `base_model.decoder`?
## option 2:
```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert
convert(framework="pt", model=summarizer.model, output=Path(f"onnx/lidiya_bart1.onnx"),
opset=11, tokenizer=summarizer.tokenizer, pipeline_name="summarization")
```
## this results in the following error -
using framework PyTorch: 1.10.2
found input input_ids with shape: {0: 'batch', 1: 'sequence'}
found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
found output output_0 with shape: {0: 'batch', 1: 'sequence'}
found output output_1 with shape: {0: 'batch', 2: 'sequence'}
found output output_1 with shape: {0: 'batch', 2: 'sequence'}
found output output_1 with shape: {0: 'batch', 2: 'sequence'}
found output output_1 with shape: {0: 'batch', 2: 'sequence'}
found output output_2 with shape: {0: 'batch', 2: 'sequence'}
found output output_2 with shape: {0: 'batch', 2: 'sequence'}
found output output_2 with shape: {0: 'batch', 2: 'sequence'}
found output output_2 with shape: {0: 'batch', 2: 'sequence'}
found output output_3 with shape: {0: 'batch', 2: 'sequence'}
found output output_3 with shape: {0: 'batch', 2: 'sequence'}
found output output_3 with shape: {0: 'batch', 2: 'sequence'}
found output output_3 with shape: {0: 'batch', 2: 'sequence'}
found output output_4 with shape: {0: 'batch', 2: 'sequence'}
found output output_4 with shape: {0: 'batch', 2: 'sequence'}
found output output_4 with shape: {0: 'batch', 2: 'sequence'}
found output output_4 with shape: {0: 'batch', 2: 'sequence'}
found output output_5 with shape: {0: 'batch', 2: 'sequence'}
found output output_5 with shape: {0: 'batch', 2: 'sequence'}
found output output_5 with shape: {0: 'batch', 2: 'sequence'}
found output output_5 with shape: {0: 'batch', 2: 'sequence'}
found output output_6 with shape: {0: 'batch', 2: 'sequence'}
found output output_6 with shape: {0: 'batch', 2: 'sequence'}
found output output_6 with shape: {0: 'batch', 2: 'sequence'}
found output output_6 with shape: {0: 'batch', 2: 'sequence'}
found output output_7 with shape: {0: 'batch', 1: 'sequence'}
ensuring inputs are in correct order
decoder_input_ids is not present in the generated input list.
generated inputs order: ['input_ids', 'attention_mask']
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [10], in <cell line: 6>()
3 from transformers.convert_graph_to_onnx import convert
5
----> 6 convert(framework="pt", model=summarizer.model, output=Path(f"onnx/lidiya_bart1.onnx"),
7 opset=11, tokenizer=summarizer.tokenizer, pipeline_name="summarization")
File ~/envs/py39/lib/python3.9/site-packages/transformers/convert_graph_to_onnx.py:395, in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name, **model_kwargs)
393 # Export the graph
394 if framework == "pt":
--> 395 convert_pytorch(nlp, opset, output, use_external_format)
396 else:
397 convert_tensorflow(nlp, opset, output)
File ~/envs/py39/lib/python3.9/site-packages/transformers/convert_graph_to_onnx.py:285, in convert_pytorch(nlp, opset, output, use_external_format)
282 # PyTorch deprecated the `enable_onnx_checker` and `use_external_data_format` arguments in v1.11,
283 # so we check the torch version for backwards compatibility
284 if parse(torch.__version__) <= parse("1.10.99"):
--> 285 export(
286 nlp.model,
287 model_args,
288 f=output.as_posix(),
289 input_names=ordered_input_names,
290 output_names=output_names,
291 dynamic_axes=dynamic_axes,
292 do_constant_folding=True,
293 use_external_data_format=use_external_format,
294 enable_onnx_checker=True,
295 opset_version=opset,
296 )
297 else:
298 export(
299 nlp.model,
300 model_args,
(...)
306 opset_version=opset,
307 )
File ~/envs/py39/lib/python3.9/site-packages/torch/onnx/__init__.py:316, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
38 r"""
39 Exports a model into ONNX format. If ``model`` is not a
40 :class:`torch.jit.ScriptModule` nor a :class:`torch.jit.ScriptFunction`, this runs
(...)
312 model to the file ``f`` even if this is raised.
313 """
315 from torch.onnx import utils
--> 316 return utils.export(model, args, f, export_params, verbose, training,
317 input_names, output_names, operator_export_type, opset_version,
318 _retain_param_name, do_constant_folding, example_outputs,
319 strip_doc_string, dynamic_axes, keep_initializers_as_inputs,
320 custom_opsets, enable_onnx_checker, use_external_data_format)
File ~/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py:109, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
104 if use_external_data_format is not None:
105 warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
106 "PyTorch release. The code will work as it is False if models are not larger than 2GB, "
107 "Otherwise set to False because of size limits imposed by Protocol Buffers.")
--> 109 _export(model, args, f, export_params, verbose, training, input_names, output_names,
110 operator_export_type=operator_export_type, opset_version=opset_version,
111 do_constant_folding=do_constant_folding, example_outputs=example_outputs,
112 dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs,
113 custom_opsets=custom_opsets, use_external_data_format=use_external_data_format)
File ~/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py:728, in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, use_external_data_format, onnx_shape_inference)
726 if dynamic_axes is None:
727 dynamic_axes = {}
--> 728 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
730 graph, params_dict, torch_out = \
731 _model_to_graph(model, args, verbose, input_names,
732 output_names, operator_export_type,
(...)
735 training=training,
736 dynamic_axes=dynamic_axes)
738 # TODO: Don't allocate a in-memory string for the protobuf
File ~/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py:1314, in _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
1312 for i, x in enumerate(value):
1313 if not isinstance(x, int):
-> 1314 raise ValueError("The type of axis index is expected to be an integer")
1315 if x in value_dict:
1316 warnings.warn("Duplicate dynamic axis index {} was provided for input {}."
1317 .format(x, key))
ValueError: The type of axis index is expected to be an integer
By the way, I get the same error when trying to export only the decoder using
```python
convert(framework="pt", model=summarizer.model.base_model.decoder, output=Path(f"onnx/lidiya_dec.onnx"),
opset=11, tokenizer=summarizer.tokenizer, pipeline_name="summarization")
```
## option 3:
```python
torch.onnx.export(
summarizer.model,
(inputs['input_ids'], inputs['attention_mask']),
'onnx/lidiya_torch_onnx_exp.onnx',
opset_version=11, )
```
## what i get -
Like option 1, this successfully exports the encoder (I assume, judging by the shapes of the exported layers). I still have an issue exporting the decoder.
By the way, I saw a bunch of references on how to implement the beam search, however all the links given are broken/not reachable, so could you please re-post a link to that as well?
thanks a lot!
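P.S. In case it helps anyone else landing on this thread, below is the kind of manual greedy loop I understand we are expected to write on top of the `--feature=seq2seq-lm` export. It is only a rough sketch based on my session above (same input/output names); it assumes BART's convention of starting decoding from the EOS token, and I have not fully validated it:
```python
import numpy as np

def greedy_summarize(session, tokenizer, text, max_new_tokens=60):
    # encode the article once; this sketch re-runs the full graph (encoder included)
    # at every step, which is wasteful but keeps the loop simple
    enc = tokenizer(text, return_tensors="np")
    decoder_input_ids = np.array([[tokenizer.eos_token_id]], dtype=np.int64)
    for _ in range(max_new_tokens):
        feed = {
            "input_ids": enc["input_ids"].astype(np.int64),
            "attention_mask": enc["attention_mask"].astype(np.int64),
            "decoder_input_ids": decoder_input_ids,
            "decoder_attention_mask": np.ones_like(decoder_input_ids),
        }
        logits = session.run(["logits"], feed)[0]
        next_token = logits[:, -1].argmax(axis=-1)[:, None]  # greedy pick, shape (1, 1)
        decoder_input_ids = np.concatenate([decoder_input_ids, next_token], axis=-1)
        if next_token[0, 0] == tokenizer.eos_token_id:
            break
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# usage with the session created above:
# summary = greedy_summarize(session, summarizer.tokenizer, text)
```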
|
transformers | 14,699 | closed | Fix doc examples: KeyError | # What does this PR do?
The doc examples in `BlenderbotSmall` have
```
>>> inputs = tokenizer([UTTERANCE], return_tensors='pt')
>>> inputs.pop("token_type_ids")
```
However, `BlenderbotSmallTokenizer` doesn't return `token_type_ids`, which causes a `KeyError` here.
This PR fixes this.
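After the fix, the example boils down to something like the following sketch (the checkpoint name is the usual small BlenderbotSmall one; the `pop` line is simply dropped):
```python
>>> from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration
>>> mname = "facebook/blenderbot_small-90M"
>>> model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)
>>> UTTERANCE = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([UTTERANCE], return_tensors='pt')  # no `token_type_ids` key, so nothing to pop
>>> reply_ids = model.generate(**inputs)
```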
## Who can review?
@patrickvonplaten @patil-suraj | 12-09-2021 10:02:40 | 12-09-2021 10:02:40 | |
transformers | 14,698 | closed | Fix doc examples: cannot import name | # What does this PR do?
In the doc examples of some PT model files, there are `cannot import name` errors. This PR fixes this issue.
## Who can review?
@patrickvonplaten
## Question:
@patrickvonplaten , in `modeling_unispeech.py`, there are examples using
```
from transformers import UniSpeechSatFeatureExtractor
```
And in some docstring, there are
```
:class:`~transformers.UniSpeechSatProcessor`
```
However, these can't be imported
```
ImportError: cannot import name 'UniSpeechSatFeatureExtractor' from 'transformers' (/home/ydshieh/Desktop/ydshieh/transformers/src/transformers/__init__.py)
```
Is there any reason to make these objects invisible to `transformers`? | 12-09-2021 09:41:29 | 12-09-2021 09:41:29 | Great catch - thanks a lot for fixing it! I think we should do the exact same thing for the speech models (UniSpeech, SEW, UniSpeechSAT) actually. Feel free to open a PR if you want - I'll put it on my ToDo list as well though<|||||>@patrickvonplaten
So `UniSpeechSatFeatureExtractor` shouldn't be used by `UniSpeechSatModel` , and it is supposed to use `Wav2Vec2FeatureExtractor`?
I have seen
```
_PROCESSOR_FOR_DOC = "Wav2Vec2Processor"
_SEQ_CLASS_PROCESSOR_FOR_DOC = "Wav2Vec2FeatureExtractor"
```
and `UniSpeechSatFeatureExtractor` is not visible to `transformers`. But I don't have the full context.<|||||>> @patrickvonplaten
>
> So `UniSpeechSatFeatureExtractor` shouldn't be used by `UniSpeechSatModel` , and it is supposed to use `Wav2Vec2FeatureExtractor`?
>
> I have seen
>
> ```
> _PROCESSOR_FOR_DOC = "Wav2Vec2Processor"
> _SEQ_CLASS_PROCESSOR_FOR_DOC = "Wav2Vec2FeatureExtractor"
> ```
>
> and `UniSpeechSatFeatureExtractor` is not visible to `transformers`. But I don't have the full context.
Yeah there is no `UniSpeechFeatureExtractor` since it's the same as `Wav2Vec2FeatureExtractor` so we should instead just use `Wav2Vec2FeatureExtractor`. Happy to resolve this is another PR though!<|||||>@ydshieh - if ok for you, I would like to merge this one now and then if you want we could open a new one for `UniSpeechFeatureExtractor`<|||||>> @ydshieh - if ok for you, I would like to merge this one now and then if you want we could open a new one for `UniSpeechFeatureExtractor`
Sure, ok for me. |
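As a follow-up note for anyone reading this later, the replacement discussed above would make the doc examples look roughly like this (a sketch; the checkpoint name is illustrative):
```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatModel

# UniSpeechSAT reuses the Wav2Vec2 feature extractor, so the doc examples can import
# that class directly instead of a non-existent `UniSpeechSatFeatureExtractor`
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/unispeech-sat-base")
model = UniSpeechSatModel.from_pretrained("microsoft/unispeech-sat-base")
```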
transformers | 14,697 | closed | Fix doc examples: modify config before super().__init__ | # What does this PR do?
Some PT Causal LM models modify config after `super().__init__(config)`. This causes some doc examples to fail.
See #14672 for the details.
This PR fixes this issue: It does the same as what has been done in
```
class ProphetNetForCausalLM(ProphetNetPreTrainedModel):
def __init__(self, config):
# set config for CLM
config = copy.deepcopy(config)
config.is_decoder = True
config.is_encoder_decoder = False
super().__init__(config)
self.prophetnet = ProphetNetDecoderWrapper(config)
```
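Applying the same pattern to, for example, `BartForCausalLM` would look roughly like this (a sketch of the intended change, assuming the module's existing imports; the config tweaks simply move before `super().__init__`):
```python
class BartForCausalLM(BartPretrainedModel):
    def __init__(self, config):
        # modify the config *before* calling the parent constructor so that
        # `self.config.is_decoder` is True on the saved/loaded model as well
        config = copy.deepcopy(config)
        config.is_decoder = True
        config.is_encoder_decoder = False
        super().__init__(config)
        self.model = BartDecoderWrapper(config)
```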
Fixes #14672
## Who can review?
@patrickvonplaten
| 12-09-2021 09:19:01 | 12-09-2021 09:19:01 | This makes sense to me - thanks for outlining the problem in the issue! @sgugger @LysandreJik - can you take a quick look as well? |
transformers | 14,696 | closed | Occasional Can not load roberta-base tokenizer | **Environment info**
transformers 4.12.0.dev
**issue**
When using `tokenizer = AutoTokenizer.from_pretrained('roberta-base')`, I got the following error:
```
OSError: Can't load config for 'roberta-base'. Make sure that:
- 'roberta-base' is a correct model identifier listed on 'https://huggingface.co/models'
(make sure 'roberta-base' is not a path to a local directory with something else, in that case)
- or 'roberta-base' is the correct path to a directory containing a config.json file
```
Sometimes this bug occurs, while at other times it works well.
@LysandreJik
| 12-09-2021 08:45:53 | 12-09-2021 08:45:53 | I think this is probably due to an internet connection error, unfortunately<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,695 | closed | Improve documentation of some models | # What does this PR do?
This PR:
- migrates the docs of ImageGPT, TAPAS and TrOCR from rst to Markdown.
- adds links to my demo notebooks for ViT, BEiT, LUKE, Perceiver.
- improves the docstrings of the Perceiver model. | 12-09-2021 08:26:15 | 12-09-2021 08:26:15 | Please ping me when you're ready to merge it @NielsRogge so that I may update the stable docs with this |
transformers | 14,694 | closed | The loss function should ignore tokens whose index is set to - 100 | The loss functions in `EncoderDecoderModel` as well as in `BartForConditionalGeneration` don't ignore special tokens:
https://github.com/huggingface/transformers/blob/e9800122a6f6a68aee7dff347c8f4a6d28e345a2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L529
Does it need to be like the loss function in `T5ForConditionalGeneration`:
https://github.com/huggingface/transformers/blob/e9800122a6f6a68aee7dff347c8f4a6d28e345a2/src/transformers/models/t5/modeling_t5.py#L1644 | 12-09-2021 08:26:02 | 12-09-2021 08:26:02 | No, the ignore_index is set to -100 by default as can be seen in PyTorch' [docs](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html).<|||||>> No, the ignore_index is set to -100 by default as can be seen in PyTorch' [docs](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html).
I got it, thank you very much! |
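For completeness, a minimal check of that default behaviour in plain PyTorch (nothing transformers-specific):
```python
import torch
from torch import nn

loss_fct = nn.CrossEntropyLoss()      # ignore_index defaults to -100
logits = torch.randn(3, 10)           # 3 token positions, vocab size 10
labels = torch.tensor([1, -100, 4])   # the position labelled -100 does not contribute to the loss
loss = loss_fct(logits, labels)       # averaged over positions 0 and 2 only
```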
transformers | 14,693 | closed | Multiple answers for QA | Hi Team,
I would like to know if the current implementation for question answering tasks on the SQuAD data format considers the multiple answers that can be passed in that format. Going through the code, I think it does not, but I wanted to confirm it with you. | 12-09-2021 04:11:27 | 12-09-2021 04:11:27 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,692 | closed | LongformerTokenizer Error | ## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-5.10.68+-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: fill in
- Using distributed or parallel set-up in script?: fill in
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
Longformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
>>> from transformers import LongformerTokenizer, LongformerForSequenceClassification
>>> import torch
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> model = LongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-509e5550f58e> in <module>
----> 1 inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
TypeError: 'LongformerTokenizer' object is not callable
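(Note: on this old 2.x version the tokenizer object is not callable yet; a call that should work there is sketched below, and upgrading transformers also makes the original snippet work.)
```python
# workaround sketch for transformers 2.x, where tokenizers expose `encode_plus`
# instead of being directly callable
inputs = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
```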
| 12-09-2021 02:37:23 | 12-09-2021 02:37:23 | |
transformers | 14,691 | closed | Fix `AttributeError` from `PreTrainedTokenizerFast.decoder` | # What does this PR do?
Calling the `decoder` attribute for the T5 fast tokenizer raises an error because it tries accessing the non-existent `_tokenizer` field of `tokenizers.Tokenizer`. In `PreTrainedTokenizerFast.decoder`, `self._tokenizer._tokenizer.decoder` is executed, which contains a probably unintentional repeat of `_tokenizer`.
Before:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('t5-base')
>>> type(tokenizer)
<class 'transformers.models.t5.tokenization_t5_fast.T5TokenizerFast'>
>>> tokenizer.decoder
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "transformers/src/transformers/tokenization_utils_fast.py", line 183, in decoder
return self._tokenizer._tokenizer.decoder
AttributeError: 'tokenizers.Tokenizer' object has no attribute '_tokenizer'
```
After:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('t5-base')
>>> type(tokenizer)
<class 'transformers.models.t5.tokenization_t5_fast.T5TokenizerFast'>
>>> tokenizer.decoder
<tokenizers.decoders.Metaspace object at 0x190c817b0>
```
I'm not sure how this hasn't been caught before. The line has not been touched since June 2020, and the `decoder` property is presumably being used somewhere. It's hard to tell if it has any usages in the repository because PyCharm finds a large number of unrelated fields named `decoder` when I search.
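For reference, the change itself is essentially a one-line fix to the property (sketch):
```python
@property
def decoder(self):
    # was: return self._tokenizer._tokenizer.decoder  (one `_tokenizer` too many)
    return self._tokenizer.decoder
```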
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
tokenizers: @n1t0, @LysandreJik | 12-08-2021 23:00:25 | 12-08-2021 23:00:25 | Pinging @n1t0, @LysandreJik. |
transformers | 14,690 | closed | Make MLuke tokenizer tests slow | # What does this PR do?
To avoid the timeouts in the test suites, this PR makes all mLuke tokenizer tests slow until we figure out a way to speed them up. | 12-08-2021 20:49:22 | 12-08-2021 20:49:22 | cc @NielsRogge <|||||>Hi, I suspect that the slowness is from reading the large `entity_vocab.json` in `MLukeTokenizer.__init__()`, which has 418 MB ([studio-ousia/mluke-base](https://huggingface.co/studio-ousia/mluke-base/tree/main)).
When I tried on my local PC, only reading the entity vocab file in `__init__()` took around 20s.
If this is the problem, we can reduce the file size.
The entity vocabulary has 120M entries, but now we naively represent the same entities from different languages as separate entries, like
```
{
"en:Japan": 13,
"de:Japan": 13,
"ko:일본": 13,
"ja:日本": 13,
...
}
```
We could instead represent the entities with Wikidata QID (e.g., `Q17` for `Japan`), which would significantly reduce the file size. Using QID may look cryptic at first, but now we think this may be the right way to handle entities in multilingual settings.
Do you have any suggestions on this?<|||||>For tests we don't need a real file (apart from the integration test). All other tokenizers are tested on a vocab of 10-15 tokens, so ideally building something that just have the strict minimum to test would be great.<|||||>Sure, I will work on it. |
transformers | 14,689 | closed | Fix doc examples: unexpected keyword argument | # What does this PR do?
Fix `unexpected keyword argument` in doc examples (in PT model files)
## Who can review?
@sgugger | 12-08-2021 20:34:59 | 12-08-2021 20:34:59 | Not sure why `ProphetNetTokenizer` returns `token_type_ids` by default while `ProphetNetEncoder` can't accept it.
```
>>> tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased')
>>> model = ProphetNetEncoder.from_pretrained('patrickvonplaten/prophetnet-large-uncased-standalone')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> del inputs["token_type_ids"] # `ProphetNetEncoder` doesn't use `token_type_ids`
```<|||||>That should be fixed in the tokenizer, not the doc example.<|||||>> That should be fixed in the tokenizer, not the doc example.
OK, I will revert it.<|||||>> > That should be fixed in the tokenizer, not the doc example.
>
> OK, I will revert it.
@sgugger
May I change it to
```
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt", return_token_type_ids=False)
```
for now in this PR, and open an issue for `ProphetNetTokenizer`.<|||||>I'm afraid we will forget to re-change the documentation after the fix is done, so it should just stay as is until the tokenizer is fixed.<|||||>Thanks a lot @ydshieh!
@sgugger - feel free to merge if it's good to go from your side. |
transformers | 14,688 | closed | [trainer] support UserDict inputs (torch-nightly) | torch-nightly has changed the behavior of DataLoader. Until now its iterator was returning a `dict` structure, but with pt-11 it now returns our custom `BatchEncoding` dict structure, which broke `_prepare_inputs` in Trainer as it misses the `dict` check.
This PR fixes the problem by switching to checking whether the input is any kind of dict structure via `Mapping`, and removes the `**` unpacking since a dict can be initialized from any type of dict (`**dict` breaks with `UserDict`).
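Roughly, the new check looks like this (a sketch of the approach, not the exact diff):
```python
from collections.abc import Mapping
import torch

def _prepare_input(data, device):
    # handles dict, BatchEncoding/UserDict and nested containers alike; note that
    # there is no `**` unpacking, since UserDict subclasses cannot be rebuilt that way
    if isinstance(data, Mapping):
        return type(data)({k: _prepare_input(v, device) for k, v in data.items()})
    if isinstance(data, (list, tuple)):
        return type(data)(_prepare_input(v, device) for v in data)
    if isinstance(data, torch.Tensor):
        return data.to(device)
    return data
```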
Thanks to @sgugger for helping me to fix this one.
@sgugger | 12-08-2021 20:15:39 | 12-08-2021 20:15:39 | |
transformers | 14,687 | closed | Fix doc examples: name '...' is not defined | # What does this PR do?
For some doc examples in PT model files, we get the exception: `NameError: name '...' is not defined`
This PR fixes this issue.
In `modeling_visual_bert.py`, there are some places using an assumed-existing function `get_visual_embeddings`.
```
# Assumption: `get_visual_embeddings(image)` gets the visual embeddings of the image in the batch.
visual_embeds = get_visual_embeddings(image).unsqueeze(0)
```
These places won't pass the doc example test - so we need a way to not run against them.
(Or an example implementation for this function should be provided)
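For context, even a dummy stand-in like the one below would make the snippet runnable (purely illustrative; a real implementation would run a visual backbone / region detector over the image, and the shapes here are assumptions that must match the checkpoint's `visual_embedding_dim`):
```python
import torch

def get_visual_embeddings(image, num_regions=36, visual_embedding_dim=512):
    # returns random "visual features" of shape (num_regions, visual_embedding_dim),
    # only so that the doc example can execute end to end
    return torch.randn(num_regions, visual_embedding_dim)
```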
## Who can review?
@sgugger | 12-08-2021 20:00:34 | 12-08-2021 20:00:34 | > Thanks for fixing. For the one in visual BERT, it shouldn't be run by doctest, so the `>>>` should be removed.
I removed `>>>` and `...` as suggested.<|||||>Thanks!
transformers | 14,686 | closed | Move pyctcdecode | # What does this PR do?
This PR moves the `pyctcdecode` import to the methods of `Wav2Vec2ProcessorWithLM` so that one can always import it. The dependencies seem a bit tricky to all install (and in particular kenlm is a hard requirement but it doesn't seem to always work properly, especially in notebooks) so this makes the object easier to import for everyone. | 12-08-2021 19:16:12 | 12-08-2021 19:16:12 | |
transformers | 14,685 | closed | Fix wrong checkpoint paths in doc examples | # What does this PR do?
In some (PT) model files, there are doc examples with the wrong checkpoint paths. For example,
```
model = BigBirdPegasusForConditionalGeneration.from_pretrained('bigbird-pegasus-large-arxiv')
```
which should be `'google/bigbird-pegasus-large-arxiv'`
This PR fixes this issue.
There is
```
MegatronBertLMHeadModel.from_pretrained('nvidia/megatron-bert-cased-345m', is_decoder=True)
```
but there is no model file at all when I searched `megatron-bert`. I am not sure what to do in this case.
## Who can review?
@sgugger @LysandreJik | 12-08-2021 19:11:29 | 12-08-2021 19:11:29 | > Thanks for fixing those. For Megatron BERT, a manual download is required, see the [doc page](https://huggingface.co/docs/transformers/model_doc/megatron_bert) for more information.
OK, so I should update the doc example by using a manual download, I guess.<|||||>No need for that (as it requires some bash commands), we just shouldn't test this one. |
transformers | 14,684 | closed | Put back open in colab markers | # What does this PR do?
This PR should be merged once everything is ready on moon-landing (cc @mishig25 I will wait for your green light). | 12-08-2021 18:55:29 | 12-08-2021 18:55:29 | Making a note that this PR should be merged after https://github.com/huggingface/moon-landing/pull/1621 gets merged<|||||>@sgugger done on the moon-landing side. Please feel free to merge this PR<|||||>Thanks! |
transformers | 14,683 | closed | Revert open-in-colab and add perceiver | # What does this PR do?
I merged a bit too fast #14665 so this PR reverts the [[open-in-colab]] markers since the corresponding Svelte component has not been merged yet.
This PR also adds Perceiver to the table of contents. (cc @NielsRogge ) | 12-08-2021 18:52:24 | 12-08-2021 18:52:24 | LGTM!
transformers | 14,682 | closed | add str hub token to repository when provided else fallback to default | # What does this PR do?
This PR fixes the Repository creation when using `PushToHubCallback`. Currently, the Repository is created assuming the `HF_TOKEN` is available on the machine, but that's not the case for every scenario.
These changes check if the `hub_token` parameter is provided; if yes, it is passed into the `Repository`, and if not, the default value `True` is used.
| 12-08-2021 18:51:25 | 12-08-2021 18:51:25 | should we/can we add in the upcoming release to have it "fixed" for now and then improve on it? @Rocketknight1 @LysandreJik ? <|||||>Yes, agreed that it should be in the PR; pinging @Rocketknight1 for a quick review and will merge afterwards.<|||||>Also I'll be doing significant updates to the PushToHubCallback in the model card PR that's due Soon™, so I'll play around with it a bit more when I'm testing that. |
transformers | 14,681 | closed | Fixes in init | # What does this PR do?
Small init fixes for the `Wav2Vec2ProcessorWithLm` | 12-08-2021 18:11:29 | 12-08-2021 18:11:29 | |
transformers | 14,680 | closed | Improvements to Comet Integration | # What does this PR do?
I noticed that a lot of the `TrainingArguments` parameters were not being logged to [Comet](https://www.comet.ml/site/) with the latest version of `transformers`. The issue seems to be in this line of the integration
https://github.com/huggingface/transformers/blob/3772af49ceba348f2c9c5bbbb7f7c12e35d2a6eb/src/transformers/integrations.py#L614
The `args` variable containing the `TrainingArguments` parameters that the `CometCallback` uses is being overwritten with a dictionary.
This PR:
- fixes the issue with the parameter logging.
- adds the option of logging training assets (tf event logs, checkpoints etc) to Comet at the end of a run
- Explicitly ends an Experiment at end of training (useful when running inside a Notebook)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Regarding Testing
This affects the data being logged to Comet, so it has been tested on our side to ensure that the appropriate functionality is restored. I did not see any guidelines for testing integrations of this nature, but I am happy to include them as needed.
## Who can review?
Tagging @sgugger since this is an integration related to `TrainerCallback` that affects the `Trainer` and `TrainerArguments` object.
| 12-08-2021 17:54:13 | 12-08-2021 17:54:13 | |
transformers | 14,679 | closed | Revert "Added support for other features for already supported models" | Reverts huggingface/transformers#14358 (to hold off from the next release)
cc @LysandreJik @sgugger @michaelbenayoun | 12-08-2021 17:41:40 | 12-08-2021 17:41:40 | |
transformers | 14,678 | closed | Fix doc examples: 'CausalLMOutput...' object has no attribute 'last_hidden_state' | # What does this PR do?
In some doc examples, the line
```
last_hidden_states = outputs.last_hidden_state
```
will fail when the return value is of type `CausalLMOutput...` (e.g. `CausalLMOutputWithCrossAttentions`).
This PR fixes these lines by using
```
logits = outputs.logits
```
## Who can review
@patrickvonplaten | 12-08-2021 17:07:04 | 12-08-2021 17:07:04 | Awesome! |
transformers | 14,677 | closed | [Feature request] Doc example copy button - option to remove input prompts and outputs | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Documentation page: Make example copy buttons remove input prompts (`>>>` and `...`) before copying, as well as skip lines that don’t start with prompts (in case they are output lines). Or provide the option to toggle this functionality.
## Motivation
When I copy doc examples, the input prompts & outputs lines are also copied. It would be very convenient to remove these automatically. I have seen some websites that can do this, or even toggle the modes for copying or not copying these prompts.
You can find an example here:
[Sphinx-copybutton](https://sphinx-copybutton.readthedocs.io/en/latest/#)
## Your contribution
It's not clear to me how to contribute to this, but I will be happy to do it if it's possible and welcomed.
| 12-08-2021 14:21:12 | 12-08-2021 14:21:12 | This is a bug that will be fixed soon, see https://github.com/huggingface/doc-builder/issues/41. cc @mishig25. Thanks for reporting.<|||||>Thanks, @NielsRogge . I didn't follow that repository. I am closing this issue. |
transformers | 14,676 | closed | Fix doc builder | Add ken-lm to the doc builder. | 12-08-2021 14:14:19 | 12-08-2021 14:14:19 | |
transformers | 14,675 | closed | [AutoProcessor] Add Wav2Vec2WithLM & small fix | # What does this PR do?
Make AutoProcessor work correctly with local files and small fix
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-08-2021 12:57:11 | 12-08-2021 12:57:11 | Test failure is unrelated<|||||>Failures are unrelated - merging |
transformers | 14,674 | closed | Loading t5 from config file | Why can't I load T5 from a config file?
Python: 3.9.5
Transformer: 4.12.5
OS: ubuntu 18.04
CPU
```
from transformers import (
AutoConfig, T5Model, T5ForConditionalGeneration, T5Config,
)
config = AutoConfig.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained(config)
```
Here is the link to reproduce the issue in colab:
https://colab.research.google.com/drive/17tKE50_CGH12I-McVrob7JfFkveOH6Mc?usp=sharing
| 12-08-2021 12:22:52 | 12-08-2021 12:22:52 | This is due to the fact that the "architectures" attribute of the [config](https://huggingface.co/t5-small/blob/main/config.json#L3) of the "t5-small" checkpoint is set to `T5WithLMHeadModel`, however it was renamed to `T5ForConditionalGeneration`.
I believe that should be updated (as well as the other T5 checkpoints). Correct me if I'm wrong @patrickvonplaten.<|||||>@hadifar if you want to load the model from scratch with a specific config you can do:
```python
config = AutoConfig.from_pretrained('t5-small')
model = T5ForConditionalGeneration(config)
```
if you want to load the pretrained weights with a specific config (that matches the weights) you should do:
```python
config = AutoConfig.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', config=config)
``` |
transformers | 14,673 | closed | Output sequences are different between SummarizationPipeline and model.generate | ## Environment info
transformers version: 4.12.5
Platform: Linux
Python version: 3.8.5
PyTorch version (GPU?): 1.10
Tensorflow version (GPU?): N/A
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Bart (ConditionalGeneration)
The problem arises when using:
- [ ] the official example scripts: (give details below)
- [x] my own modified scripts: (give details below)
The tasks I am working on is:
- [ ] an official GLUE/SQUaD task: (give the name)
- [x] my own task or dataset: (give details below)
When generating summaries using [`sshleifer/distilbart-cnn-12-6`](https://huggingface.co/sshleifer/distilbart-cnn-12-6), I encountered the following phenomenon: the `SummarizationPipeline` and `model.generate` outputs are mostly different even though the same parameters are used.
Are there any solutions or tips to generate the same summaries?
## To reproduce
sample code is below.
```python
from transformers import (
BartTokenizer,
BartForConditionalGeneration,
SummarizationPipeline,
)
# sample document from https://huggingface.co/sshleifer/distilbart-cnn-12-6
doc = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
# load model & tokenizer
tokenizer = BartTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6")
model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-cnn-12-6")
input_ids = tokenizer.encode(doc, return_tensors="pt").to("cuda:0")
# pipeline
pipe = SummarizationPipeline(model=model, tokenizer=tokenizer, device=0)
# generation params
params = model.config.task_specific_params["summarization"]
print(f"params: {params}")
# several num_beams
for num_beams in [1, 5, 10]:
params["num_beams"] = num_beams
print(f"num_beams: {num_beams}")
pipeline_output = pipe(doc, **params, clean_up_tokenization_spaces=True)[0][
"summary_text"
]
output = model.generate(input_ids, **params)
generate_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
print(f"SummarizationPipeline:\n{pipeline_output}")
print(f"model.generate:\n{generate_output}")
print(
f"SummarizationPipeline == model.generate: {pipeline_output == generate_output}\n"
)
print("=" * 50)
params["num_beams"] = 5
params["min_length"] = 10
# several max_length
for max_length in [20, 50, 100]:
params["max_length"] = max_length
print(f"max_length: {max_length}")
pipeline_output = pipe(doc, **params, clean_up_tokenization_spaces=True)[0][
"summary_text"
]
output = model.generate(input_ids, **params)
generate_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
print(f"SummarizationPipeline:\n{pipeline_output}")
print(f"model.generate:\n{generate_output}")
print(
f"SummarizationPipeline == model.generate: {pipeline_output == generate_output}\n"
)
```
output is below.
```
params: {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'no_repeat_ngram_size': 3, 'num_beams': 4}
num_beams: 1
SummarizationPipeline:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It is the tallest structure in Paris and is the second tallest free-standing structure in France. It was the first structure to reach a height of 300 metres.
model.generate:
The Eiffel Tower is 324 metres tall, about the same height as an 81-storey building. It is the tallest structure in Paris and is the second tallest free-standing structure in France. It was the first structure to reach a height of 300 metres.
SummarizationPipeline == model.generate: False
num_beams: 5
SummarizationPipeline:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It was the first structure to reach a height of 300 metres. It is now taller than the Chrysler Building by 5.2 metres (17 ft) Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.
model.generate:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It was the first structure to reach a height of 300 metres. Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.
SummarizationPipeline == model.generate: False
num_beams: 10
SummarizationPipeline:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It is now taller than the Chrysler Building by 5.2 metres (17 ft) Excluding transmitters, it is the second tallest free-standing structure in France after the Millau Viaduct.
model.generate:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft) Excluding transmitters, it is the second tallest free-standing structure in France after the Millau Viaduct.
SummarizationPipeline == model.generate: False
==================================================
max_length: 20
SummarizationPipeline:
The tower is 324 metres (1,063 ft) tall, about the same
model.generate:
The Eiffel Tower is 324 metres (1,063 ft) tall,
SummarizationPipeline == model.generate: False
max_length: 50
SummarizationPipeline:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It was the first structure to reach a height of 300 metres. It is now taller than the Chrysler Building
model.generate:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It was the first structure to reach a height of 300 metres. It is now taller than the Chrysler Building
SummarizationPipeline == model.generate: True
max_length: 100
SummarizationPipeline:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It was the first structure to reach a height of 300 metres. It is now taller than the Chrysler Building by 5.2 metres (17 ft) Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France.
model.generate:
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It was the first structure to reach a height of 300 metres. Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France.
SummarizationPipeline == model.generate: False
```
## Expected behavior
Same summaries are generated regardless of `SummarizationPipeline` or `model.generate`. | 12-08-2021 09:41:24 | 12-08-2021 09:41:24 | Hey @tagucci,
Good question! It took me some time to find the difference :D . The pipeline automatically takes the `model.config.prefix` parameter into account as well. So if you change your script as follows you should get the same results:
```diff
from transformers import (
BartTokenizer,
BartForConditionalGeneration,
SummarizationPipeline,
)
# sample document from https://huggingface.co/sshleifer/distilbart-cnn-12-6
doc = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
# load model & tokenizer
tokenizer = BartTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6")
model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-cnn-12-6")
-input_ids = tokenizer.encode(doc, return_tensors="pt").to("cuda:0")
+input_ids = tokenizer.encode(model.config.prefix + doc, return_tensors="pt").to("cuda:0")
# pipeline
pipe = SummarizationPipeline(model=model, tokenizer=tokenizer, device=0)
# generation params
params = model.config.task_specific_params["summarization"]
print(f"params: {params}")
# several num_beams
for num_beams in [1, 5, 10]:
params["num_beams"] = num_beams
print(f"num_beams: {num_beams}")
pipeline_output = pipe(doc, **params, clean_up_tokenization_spaces=True)[0][
"summary_text"
]
output = model.generate(input_ids, **params)
generate_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
print(f"SummarizationPipeline:\n{pipeline_output}")
print(f"model.generate:\n{generate_output}")
print(
f"SummarizationPipeline == model.generate: {pipeline_output == generate_output}\n"
)
print("=" * 50)
params["num_beams"] = 5
params["min_length"] = 10
# several max_length
for max_length in [20, 50, 100]:
params["max_length"] = max_length
print(f"max_length: {max_length}")
pipeline_output = pipe(doc, **params, clean_up_tokenization_spaces=True)[0][
"summary_text"
]
output = model.generate(input_ids, **params)
generate_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
print(f"SummarizationPipeline:\n{pipeline_output}")
print(f"model.generate:\n{generate_output}")
print(
f"SummarizationPipeline == model.generate: {pipeline_output == generate_output}\n"
)
```<|||||>That's the line in question: https://github.com/huggingface/transformers/blob/8395f14de6068012787d83989c3627c3df6a252b/src/transformers/pipelines/text2text_generation.py#L88<|||||>@patrickvonplaten
In my understanding, BART does not use prefix token unlike T5. So I had no idea to append prefix token.
I could generate same outputs both model.generate and pipeline:D
Thanks! |
transformers | 14,672 | closed | PT CausalLM models config issue | ## Environment info
- `transformers` version: 4.13.0.dev0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @sgugger
## Information
Several PT causal LM models set `config.is_decoder = True` for a deeply copied `config` after `super().__init__(config)`.
Yet for doc examples in those model files, there are
```
model = XXXForCausalLM.from_pretrained('...', add_cross_attention=False)
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
```
which fail, since `XXXForCausalLM.config.is_decoder` is `False`.
For example, `BartForCausalLM`:
```
class BartForCausalLM(BartPretrainedModel):
def __init__(self, config):
super().__init__(config)
config = copy.deepcopy(config)
config.is_decoder = True
config.is_encoder_decoder = False
self.model = BartDecoderWrapper(config)
```
And this example will fail
```
Example::
>>> from transformers import BartTokenizer, BartForCausalLM
>>> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
>>> model = BartForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
>>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
```
## To reproduce
Just run the above example:
```
from transformers import BartTokenizer, BartForCausalLM
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForCausalLM.from_pretrained('facebook/bart-large', add_cross_attention=False)
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
Error:
```
Traceback (most recent call last):
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\check_cross.py", line 5, in <module>
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
AssertionError: <class 'transformers.models.bart.modeling_bart.BartForCausalLM'> has to be configured as a decoder.
```
## Expected behavior
The example should work.
- Either `config.is_decoder` and `config.is_encoder_decoder` should be set before `super().__init__(config)`
- or we should remove `assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."`
I think the first is what is intended to be.
## Other
I can open a PR for this once having feedback. | 12-08-2021 08:55:19 | 12-08-2021 08:55:19 | |
transformers | 14,671 | closed | fix deprecated tf method | tf.matrix_band_part -> tf.linalg.band_part
Fixes #14670 | 12-08-2021 07:31:52 | 12-08-2021 07:31:52 | |
transformers | 14,670 | closed | tf.matrix_band_part have been deprecated | https://github.com/huggingface/transformers/blob/fae0b9faefc371977d139b60202a63dcedbd8ea8/src/transformers/models/xlnet/modeling_tf_xlnet.py#L500
This should be replaced by `tf.linalg.band_part` here?
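For reference, the replacement is a drop-in rename (sketch):
```python
import tensorflow as tf

x = tf.ones((4, 4))
# old, removed from the TF 2.x API: tf.matrix_band_part(x, 0, -1)
upper = tf.linalg.band_part(x, 0, -1)  # same semantics, current name
```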
| 12-08-2021 06:53:37 | 12-08-2021 06:53:37 | |
transformers | 14,669 | closed | [logging] implement warning_advice / TRANSFORMERS_NO_ADVISORY_WARNINGS | This PR implements the idea of advisory logger warnings as discussed at https://github.com/huggingface/transformers/issues/14455
The main idea is to enable users to turn some logger warnings off when they re-run the same program again and again and either don't care or can't do something about those warnings. Additionally, this is essential for multi-node setups where the same warning gets replicated for each process and with 512 processes it's a lot of identical warnings.
This PR:
- adds a new logger method `logger.warning_advice` which works just like `logger.warning` but which is a no-op if the env var `TRANSFORMERS_NO_ADVISORY_WARNINGS=1` is set (see the usage sketch after this list)
- adds tests
- updates docs
- replaces one warning for now - need to replace more.
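A minimal usage sketch of the new `warning_advice` method (assuming the library's standard logger setup; the message text is just an example):
```python
from transformers.utils import logging

logger = logging.get_logger(__name__)

# behaves exactly like logger.warning, unless the user exported
# TRANSFORMERS_NO_ADVISORY_WARNINGS=1, in which case it is a no-op
logger.warning_advice("Consider passing `max_length` explicitly instead of relying on the model default.")
```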
TODO:
- decide which warnings should always remain warnings and which can be switched to advisory ones, this can also be a gradual process. We can also invite input from the community.
- Once completed, I'd also propose to replicate this in `datasets`, as usage there is mostly repeat runs and the warnings are just as non-informative and ignored there as in `transformers`.
Fixes: https://github.com/huggingface/transformers/issues/14455
@sgugger, @LysandreJik, @patrickvonplaten, @patil-suraj | 12-08-2021 00:38:28 | 12-08-2021 00:38:28 | Looks good to me! I'm wondering whether we actually need a new logger function `logger.warning_advice` or whether it might make sense to simply overwrite `logger.warning` since 100% backward compatibility is kept <|||||>> Looks good to me! I'm wondering whether we actually need a new logger function `logger.warning_advice` or whether it might make sense to simply overwrite `logger.warning` since 100% backward compatibility is kept
You're thinking about having a mechanism of turning all warnings off, aren't you?
If so this is already available via logger's set verbosity.
If we are going to change all warnings to advisory warnings then this feature is pointless.
My thinking was that some warnings are a must whereas others are either border-line `info`-level or they are not close to `error` level.
Perhaps we need to have a closer look at the types of warnings we have right now. If there is no clear sub-division of which warnings are essential and which aren't, then this feature won't help.<|||||>I'm personally ok with the `warning_advice` which is supposed to be different to `warning` - for me it's good to merge<|||||>Ok for me!
transformers | 14,668 | closed | Generate: Passing encoder_hidden_states in with num_beams > 1 raises an error | ## Environment info
- `transformers` version: both 4.12.5 and 4.13.0.dev0
- Platform: Linux
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10, no GPU
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): GPT2 (LM head)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I have a non-LM encoder (a Set Transformer, to be precise) whose last dimension is fed into a GPT-2 LM head. During training this seems to work, as does calling generate, apart from the case when more than 1 beam is specified.
This seems to be resolved by stacking the passed hidden states along the batch dim, but I am unsure whether this is a correct fix.
Also if there's a better way of approaching this, I'd be happy to hear.
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import GPT2Config, GPT2LMHeadModel
import numpy as np
import torch
class CustomGPT2LMHeadModel(GPT2LMHeadModel):
def prepare_inputs_for_generation(self, input_ids: torch.LongTensor, past=None, **kwargs):
res = super().prepare_inputs_for_generation(input_ids, past=past, **kwargs)
res["encoder_hidden_states"] = kwargs["encoder_hidden_states"]
return res
config = GPT2Config()
config.add_cross_attention = True
config.n_embd = 120
model = CustomGPT2LMHeadModel(config)
batch_size = 32
n_beams = 3
input_ids = torch.ones((batch_size, 1)).long() * config.bos_token_id
ehs = torch.from_numpy(np.random.normal(size=(batch_size, 16, config.n_embd))).float()
ehs_stacked = torch.cat([ehs] * n_beams, axis=0)
output = model.generate(input_ids=input_ids,
encoder_hidden_states=ehs, # works with ehs_stacked or n_beams=1
num_beams=n_beams)
```
Traceback:
<details>
```pytb
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_151115/3265814751.py in <module>
----> 1 output = model.generate(input_ids=input_ids,
2 encoder_hidden_states=ehs,
3 num_beams=n_beams)
~/.miniconda3/envs/cellrank/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
26 def decorate_context(*args, **kwargs):
27 with self.__class__():
---> 28 return func(*args, **kwargs)
29 return cast(F, decorate_context)
30
~/.miniconda3/envs/nsode/lib/python3.8/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
1068 input_ids, expand_size=num_beams, is_encoder_decoder=self.config.is_encoder_decoder, **model_kwargs
1069 )
-> 1070 return self.beam_search(
1071 input_ids,
1072 beam_scorer,
~/.miniconda3/envs/nsode/lib/python3.8/site-packages/transformers/generation_utils.py in beam_search(self, input_ids, beam_scorer, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
1805 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
1806
-> 1807 outputs = self(
1808 **model_inputs,
1809 return_dict=True,
~/.miniconda3/envs/cellrank/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/.miniconda3/envs/nsode/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1042 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1043
-> 1044 transformer_outputs = self.transformer(
1045 input_ids,
1046 past_key_values=past_key_values,
~/.miniconda3/envs/cellrank/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/.miniconda3/envs/nsode/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions, output_hidden_states, return_dict)
885 )
886 else:
--> 887 outputs = block(
888 hidden_states,
889 layer_past=layer_past,
~/.miniconda3/envs/cellrank/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/.miniconda3/envs/nsode/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py in forward(self, hidden_states, layer_past, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions)
415 residual = hidden_states
416 hidden_states = self.ln_cross_attn(hidden_states)
--> 417 cross_attn_outputs = self.crossattention(
418 hidden_states,
419 attention_mask=attention_mask,
~/.miniconda3/envs/cellrank/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/.miniconda3/envs/nsode/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py in forward(self, hidden_states, layer_past, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions)
334 attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask)
335 else:
--> 336 attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
337
338 attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim)
~/.miniconda3/envs/nsode/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py in _attn(self, query, key, value, attention_mask, head_mask)
191
192 def _attn(self, query, key, value, attention_mask=None, head_mask=None):
--> 193 attn_weights = torch.matmul(query, key.transpose(-1, -2))
194
195 if self.scale_attn_weights:
RuntimeError: The size of tensor a (96) must match the size of tensor b (32) at non-singleton dimension 0
```
</details>
## Expected behavior
No error is raised. | 12-08-2021 00:29:29 | 12-08-2021 00:29:29 | Hey @michalk8,
In general, we try not to use the issue tracker too much for questions about non-official, customized models. Could you maybe use the forum: https://discuss.huggingface.co/ instead next time?
To answer your question, the approach of doing:
```python
ehs_stacked = torch.cat([ehs] * n_beams, axis=0)
```
is definitely in the right direction! The idea is that if the user passes `encoder_hidden_states`, she/he is also responsible for making sure the dimensions are correct. We sadly can't wrap complex use cases such as this one into `generate`, as it would quickly make the function unreadable (it already is a bit, tbh).
I think instead of stacking it, you actually have to interleave it between the batch size. The dimension order in generate is:
- batch_size
- beam_size
- sequence_length
- ...
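To make the difference between stacking and interleaving concrete, here is a small sketch (my own illustration, not code from this thread) using the shapes from the reproduction above:
```python
import torch

batch_size, num_beams, enc_len, hidden = 32, 3, 16, 120
ehs = torch.randn(batch_size, enc_len, hidden)

# torch.cat([ehs] * num_beams) orders the rows as [A, B, C, ..., A, B, C, ...],
# while beam search expects each example's beams to be contiguous: [A, A, A, B, B, B, ...]
ehs_expanded = ehs.repeat_interleave(num_beams, dim=0)
print(ehs_expanded.shape)  # torch.Size([96, 16, 120])
```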
In your case, you have batch size 32 and you will need to expand the encoder_hidden_states between each batch for the first dimension. You could probably use this code here:
https://github.com/huggingface/transformers/blob/8395f14de6068012787d83989c3627c3df6a252b/src/transformers/generation_utils.py#L479<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, I want to ask whether there is any other way to pass the `encoder_hidden_states` parameter when using the `generate()` function, instead of creating a new class inheriting from `GPT2LMHeadModel`?
Actually, if I directly use `GPT2LMHeadModel.generate(inputs=..., encoder_hidden_states=...)`, `encoder_hidden_states` does not get passed through. |
transformers | 14,667 | closed | Wrong link in the documentation (ConvBERT, BART, Fnet) | In the documentation on transformer-based models [0], the link that is supposed to point to the **ConvBERT** paper [1, 2] points instead to the **DistilBERT** paper [3].
[0] https://huggingface.co/docs/transformers
[1] https://huggingface.co/docs/transformers/model_summary#convbert
[2] https://arxiv.org/pdf/2008.02496.pdf
[3] https://arxiv.org/pdf/1910.01108.pdf | 12-07-2021 22:31:43 | 12-07-2021 22:31:43 | Nice catch! Would you like to submit a PR to amend this?<|||||>Sure ! As soon as I finish reading the documentation, to avoid making two PRs in case I find another error.<|||||>Going through the documentation I found an error and a warning. I'll make a PR about it. I just want to note that I didn't do a documentation review, I stumbled upon these errors by coincidence: my remarks should therefore not be considered an absolute documentation review.
# Fnet [1]
In the model documentation, the label corresponding to FNet reads FlauBERT instead (see screenshot below).

# BART [2]
In the model documentation, the link that points to the BART authors' code points to a GitHub branch that has since been renamed [3, 4].
[1] https://huggingface.co/docs/transformers/model_doc/fnet
[2] https://huggingface.co/docs/transformers/model_doc/bart#overview
[3] https://github.com/pytorch/fairseq/tree/master/examples/bart
[4] https://github.com/pytorch/fairseq/tree/main/examples/bart<|||||>@mishig25 do you know where this could come from?<|||||>@LysandreJik https://github.com/huggingface/transformers/pull/14704 |
transformers | 14,666 | closed | Code parrot minor fixes/niceties | # What does this PR do?
Hi there, really excited by HF expanding into Deep Learning for Software Engineering! This PR just adds a few fixes/niceties I found while testing out the scripts. Specifically:
1. Move the training code to a main function to allow using TPUs (TPU testing still needs to be done, but GPU training still works) with accelerate since it expects some entry function
2. Add some additional flags to human eval script for specifying which device to run the `text-generation` pipeline on and to specify how many tasks to evaluate against as it was previously hard-coded to 4
3. Update documentation to have the correct batch size flag for the validation set (was `eval_batch_size`, but should be `valid_batch_size`), add command for human eval script to use flag `--HF_ALLOW_CODE_EVAL="1"` since the program will throw an exception after generating all the samples, and add a note to make sure to install git-lfs for initializing/training the model.
4. Fix one of the requirements (`huggingface-hub`) to be `0.1.0` so that it does not conflict with the other libraries.
Since I couldn't get TPU training to work even with moving everything into the main function, I can separate out that commit if y'all would prefer into a separate WIP PR while I work on it. Just lemme know :nerd_face:!
You can verify my changes with this colab! https://colab.research.google.com/drive/1Tn0wsqqNbAEpxDn2AFs13Ssqwb0r3o-x?usp=sharing
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- research_projects/codeparrot: @lvwerra | 12-07-2021 21:18:57 | 12-07-2021 21:18:57 | Alrighty @lvwerra, I think this PR should be (:crossed_fingers:) good to go :rocket:! Lemme know if anything else needs to be done! |
transformers | 14,665 | closed | Convert tutorials | # What does this PR do?
Fresh version of #14655 | 12-07-2021 21:06:32 | 12-07-2021 21:06:32 | |
transformers | 14,664 | closed | Small fix for GPT2OnnxConfig | # What does this PR do?
This PR fixes a small bug in `GPT2OnnxConfig()`.
Since `GPT2OnnxConfig.input` currently lacks a `dynamic_axes` entry for the input sequence length, we cannot feed inputs of arbitrary length into the exported GPT-2 ONNX model when doing inference. (I used [this guide](https://huggingface.co/docs/transformers/serialization) to export the ONNX model.)
I checked other supported models' `OnnxConfig`; they don't seem to have this issue.
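For context, the fix amounts to marking the sequence dimension as dynamic. A hedged sketch of what such an inputs mapping can look like (the key names here are illustrative assumptions, not necessarily the exact ones used in the library):
```python
from collections import OrderedDict

# each input maps axis index -> symbolic name; symbolic axes stay dynamic in the exported ONNX graph
inputs = OrderedDict(
    [
        ("input_ids", {0: "batch", 1: "sequence"}),
        ("attention_mask", {0: "batch", 1: "sequence"}),
    ]
)
print(inputs)
```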
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @LysandreJik
| 12-07-2021 20:27:19 | 12-07-2021 20:27:19 | cc @mfuntowicz @lewtun @michaelbenayoun <|||||>Oh, great to know about it! Never mind, I am going to close this PR.
Hope to see the new update soon and thank you all very much for your great contribution to the community! |
transformers | 14,663 | closed | [trainer] conditional ctx managers into one wrapper | This PR simplifies the code by taking 3 conditionals that are repeated in 3 places in trainer.py and makes use of a wrapper method.
Let me know if you prefer I make this helper method private and add a leading `_`.
If this looks good, we could repeat this in the other places where we have similar situations. While this makes the code simpler, it doesn't necessarily make it easier to understand, since it makes some parts of the logic somewhat hidden.
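For illustration, a rough sketch of the kind of wrapper meant here (an assumption on my part; the flag name and the PR's actual helper may differ):
```python
import contextlib
import torch

def autocast_smart_context_manager(use_amp: bool):
    # choose the context manager once, instead of repeating the same
    # if/else block at every call site in trainer.py
    if use_amp:
        return torch.cuda.amp.autocast()
    return contextlib.nullcontext()

# at a call site:
# with autocast_smart_context_manager(use_amp=True):
#     loss = compute_loss(model, inputs)
```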
@sgugger
| 12-07-2021 20:13:35 | 12-07-2021 20:13:35 | |
transformers | 14,662 | closed | Add a job to test doc building (for realsies this time) | # What does this PR do?
This PR takes over from #14645 to do it properly this time, meaning:
- we don't rely on secrets anymore, which doesn't work for pull requests
- we don't need the secrets as we are not pushing anywhere
- we actually checkout the content after the PR merge, and not just master 🤦
| 12-07-2021 18:12:45 | 12-07-2021 18:12:45 | Just rebased on `master` and force-pushed.<|||||>Checking that it fails as it's missing the `kenlm` dependency, which I will add right after I confirm it fails.<|||||>Yes the object and its docstrings exist whether or not `kenlm` is installed. |
transformers | 14,661 | closed | fix: verify jsonlines file in run_translation (#14660) | # What does this PR do?
Fixes #14660
The documentation and the code everywhere mention jsonlines files, except that the actual verification in the code checked for a json file instead.
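A hedged sketch of the relaxed check (illustrative only; the helper name is made up and the real script checks the extension inline):
```python
def validate_extension(path: str) -> None:
    extension = path.split(".")[-1]
    assert extension in ("json", "jsonl"), "`train_file` should be a json or jsonl file."

validate_extension("train.jsonl")  # accepted
validate_extension("train.json")   # also accepted, as suggested in the review comment below
```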
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| 12-07-2021 16:27:33 | 12-07-2021 16:27:33 | Thanks for your PR! You should accept both extensions as there are many jsonline files that actually have the .json extension.<|||||>Thanks @sgugger , it is fixed now |
transformers | 14,660 | closed | Do we use JSON lines or JSON only for run_translation.py in PyTorch? | ### Who can help
Models:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
The run_translation.py file requires that we pass `train_file` and `validation_file` as **json** files only, see:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py#L219
However, the [README](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) states that
> The task of translation supports only custom JSONLINES files, with each line being a dictionary with a key "translation" and its value another dictionary whose keys is the language pair
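For concreteness, one record of such a JSON Lines file would look roughly like this (an invented example, not taken from the repository):
```python
import json

# a single line of the expected JSON Lines training file
line = {"translation": {"en": "The weather is nice today.", "ro": "Vremea este frumoasă astăzi."}}
print(json.dumps(line, ensure_ascii=False))
```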
Also in the example just above this line, it references jsonlines:
```bash
python examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-small \
...
--train_file path_to_jsonlines_file \
--validation_file path_to_jsonlines_file \
...
```
Similarly, from the argument description:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py#L115
I have made a merge request to address what I believe is an issue. Please correct me if I'm wrong. Thanks. | 12-07-2021 16:24:02 | 12-07-2021 16:24:02 | |
transformers | 14,659 | closed | Add Nystromformer | # What does this PR do?
This PR adds the Nystromformer transformer model to the repository.
Paper: [https://arxiv.org/abs/2102.03902](https://arxiv.org/abs/2102.03902)
Code: [https://github.com/mlpen/Nystromformer](https://github.com/mlpen/Nystromformer)
Checkpoint: [Nystromformer sequence length 512](https://www.dropbox.com/s/8uv4f6q52oaqwkh/Nystromformer.model?dl=0)
## Who can review?
@NielsRogge
| 12-07-2021 16:23:04 | 12-07-2021 16:23:04 | Hey @novice03 , really cool PR, can't wait to fine-tune models with it :)
I have some questions about the tokenization part: it seems that the ALBERT tokenizer (and an albert spm model) is used and uploaded to the model hub, so I think in `config.json` the entry `"tokenizer_class": "AlbertTokenizer"` should be added so that `AutoTokenizer` is working.
But I've looked at the original pre-processing and [pre-training code](https://github.com/mlpen/Nystromformer/blob/main/code/run_pretrain.py#L42) and it seems that the RoBERTa tokenizer is used for pre-training a Nyströmformer model, why is an albert spm model used here :thinking:
<|||||>Hello, thanks for taking a look at this PR. I've added the tokenizer entry to the config.json file. It is true that the code seems to use RoBERTa tokenizer. However, the model checkpoint released is of a model with `vocab_size=30,000`. Furthermore, they've also used albert tokenizer in their [code](https://github.com/mlpen/Nystromformer/blob/6539b895fa5f798ea0509d19f336d4be787b5708/reorganized_code/BERT/dataset.py#L25). I've tried using `BertTokenizer` and `RobertaTokenizer`, but they give errors or incorrect results. The checkpoint released by the author only works with this specific albert tokenizer configuration.
<|||||>The test_masked_lm_end_to_end fails since AutoTokenizer doesn't encode the mask token correctly. The test passed when I use AlbertTokenizer instead. The other 2 slow tests pass.<|||||>Very cool to have this new model @novice03 :hugs: .
Let me share with you a little information about the tokenizers of `uw-madison/nystromformer-512` on this PR.
@NielsRogge showed me that there is currently an inconsistency in the treatment of special tokens with the slow and fast versions of `uw-madison/nystromformer-512` tokenizers.
I think that to generate into the `slow_and_fast` folder the files for the fast tokenizer, we should do:
```python
from transformers import AlbertTokenizer, AlbertTokenizerFast
tokenizer = AlbertTokenizer.from_pretrained("uw-madison/nystromformer-512")
tokenizer.save_pretrained("slow_tokenizer")
fast_tokenizer = AlbertTokenizerFast.from_pretrained(
"slow_tokenizer",
bos_token="[CLS]",
eos_token="[SEP]",
unk_token="<unk>",
sep_token="[SEP]",
pad_token="<pad>",
cls_token="[CLS]",
mask_token="[MASK]"
)
fast_tokenizer.save_pretrained("slow_and_fast")
```
I hope this is helpful!
|
transformers | 14,658 | closed | Fixing Dataset for TQA + token-classification. | # What does this PR do?
Fixes #14301
Fixes TQA tests to actually decouple TF and PT (some TF tests need PT, but not the other way around; when all requirements are installed, basically no test ever gets run by the automated tests).
Enable Dataset with TQA too.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-07-2021 13:24:01 | 12-07-2021 13:24:01 |
transformers | 14,657 | closed | [WIP] Check doc examples | # What does this PR do?
Check doc examples
| 12-07-2021 07:57:50 | 12-07-2021 07:57:50 | |
transformers | 14,656 | closed | [t5/t0/mt5 models] faster/leaner custom layer norm | This PR dynamically replaces `T5LayerNorm` with `apex.normalization.FusedRMSNorm` when the latter is available (it was just [merged into `apex@master`](https://github.com/NVIDIA/apex/pull/1274)). This is similar to how we used to replace the slower at that time `torch.nn.LayerNorm` with `apex.normalization.FusedLayerNorm` until about a year ago for most models.
Unlike the highly optimized `torch.nn.LayerNorm`, the current "manual" implementation of `T5LayerNorm`, which is known as RMSNorm, is slow under mixed precision, mainly due to the explicit up- and down-casting needed to meet the requirements of fp32 accumulation.
I first tried to rewrite the code to use the optimized pytorch functions but I didn't succeed to make things faster and requested the NVIDIA team to create `apex.normalization.FusedRMSNorm` fused kernel for us, which they kindly did. Huge thanks to @eqy and his team!
Note: For now just 1 model class was modified and once the reviewers are happy, I will replicate this to the rest of the `model_type=t5` models.
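A rough sketch of what such a dynamic swap can look like (my illustration; the exact code in the PR may differ):
```python
# keep the pure-PyTorch T5LayerNorm as the default, but use the fused apex kernel when present
try:
    from apex.normalization import FusedRMSNorm

    T5LayerNorm = FusedRMSNorm  # noqa
    print("Using apex.normalization.FusedRMSNorm instead of the manual T5LayerNorm")
except ImportError:
    # apex is not installed: fall back to the existing manual implementation
    pass
```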
### Background info:
- Root MeanSquare Layer Normalization paper: https://arxiv.org/abs/1910.07467
- Here is a good reference for RMSNorm https://github.com/bzhangGo/rmsnorm/blob/master/rmsnorm_torch.py
### Benchmarks
Here are the benchmarks of before and after for 3 different sizes of t5 - it's a nice 6-11% speedup in the ensemble - the larger the model the smaller the speed up.
I didn't compare `T5LayerNorm` vs `apex.normalization.FusedRMSNorm` directly as it's quite obvious that the latter is much faster to have such an impact on the throughput of the ensemble.
(RTX-3090, apex@master, pt-1.10.2, transformers@master as of 2022-02-04)
```
## t5-small
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 32 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2 --bf16
# w/o apex.normalization.FusedRMSNorm
train_loss = 2.52
train_samples = 20000
train_samples_per_second = 420.204
# w/ apex.normalization.FusedRMSNorm
train_loss = 2.5205
train_samples = 20000
train_samples_per_second = 470.624
Result: 11% speed up!
## t5-base
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 16 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 10000 --dataloader_num_workers 2 \
--bf16
# w/o apex.normalization.FusedRMSNorm
train_loss = 2.2083
train_samples = 10000
train_samples_per_second = 138.157
# w/ apex.normalization.FusedRMSNorm
train_loss = 2.2086
train_samples = 10000
train_samples_per_second = 149.883
Result: 8% speed up!
## t5-large
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-large \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 8 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 5000 --dataloader_num_workers 2 --bf16
# w/o apex.normalization.FusedRMSNorm
train_loss = 2.0407
train_samples = 5000
train_samples_per_second = 35.374
# w/ apex.normalization.FusedRMSNorm
train_loss = 2.0409
train_samples = 5000
train_samples_per_second = 37.392
Result: 6% speed up
```
### Questions:
- docs - how should we document that installing apex should speed up any t5-based model
- the loss is ~2e-4 worse than the original for these short runs - is there a reason to be concerned?
@sgugger, @LysandreJik, @patil-suraj, @patrickvonplaten | 12-07-2021 03:26:16 | 12-07-2021 03:26:16 | Thanks for diving into it! Alongside Sylvain's comments, this LGTM, <|||||>@patil-suraj, thank you for addressing my questions.
I suppose we then should replicate the doc snippet to all t5-based models, right? I'm thinking of adding a new Performance section - not sure if it's better to have it before the classes start so that it's visible or towards the end. I have added one - please have a look. (position in the doc and content please)
Also do we want to do that copy-syncing mechanism where we designate one model the source and others the copy, but I think it only works for classes, right? i.e. no way to do the same here. I am asking so that the newly added t5-based models won't miss it.<|||||>PR looks good to me as well - that's a very nice speed-up @stas00 !<|||||>@LysandreJik, could you please have a quick look before I replicate this to many other t5-based models? Thank you!<|||||>Yes, please go ahead! Thank you, @stas00!<|||||>Actually, there is nothing to replicate, I thought there were derivatives/copy-cats but it appears that there are none.
So just going to merge this once CI completes. |
transformers | 14,655 | closed | Convert tutorials | # What does this PR do?
This PR converts the rst files in the documentation that currently have a corresponding notebook to Markdown so I can then work on updating the script that auto-generates the notebooks form the documentation. | 12-07-2021 01:15:01 | 12-07-2021 01:15:01 | You need to follow this format for the ColabDropdown to appear:
https://github.com/huggingface/doc-builder/blame/92bc1f3833fc825bd772455df17505d1f322ee88/build/transformers/master/en/quicktour.mdx#L43-L51
1. Put `ColabDropdown` svelte component after the title
2. And change `options` attribute
Also, for this change to take effect, we need to merge various PRs in a specific order:
1. https://github.com/huggingface/moon-landing/pull/1621
2. https://github.com/huggingface/doc-builder/pull/42
3. https://github.com/huggingface/transformers/pull/14655<|||||>Thanks for the syntax! Will amend this PR but I think we can make it a bit easier on the documentation writer since all the links are automatically generated. I'll add a syntax `[[colab]]` (like we use `[[autodoc]]`) to the `doc-builder` and it will replace it with the component you created.<|||||>@sgugger sounds great to me! could you add the parsing side for `[[colab]]` on doc-builder?<|||||>Yes, I will work on that this morning. Feel free to merge the two PRs in order once they are ready, I want @LysandreJik to review this one anyway.<|||||>Arg, rebase messed up the diff :-( Closing an re-opening. |
transformers | 14,654 | closed | [Beam Search] Correct returned beam scores | # What does this PR do?
<!-- Remove if not applicable -->
**🚨 🚨 🚨 BREAKING CHANGE: This PR is a breaking change to the values of the generate output `scores` 🚨 🚨 🚨**
It concerns the following use cases:
```python
num_beams = 4  # >= 2 enables beam search, beam sample or group beam search
generate_outputs = model.generate(..., num_beams=num_beams, return_dict_in_generate=True, output_scores=True)
scores = generate_outputs.scores
```
After multiple issues and discussions, which are best summarized here:
- https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/15?u=patrickvonplaten
- https://github.com/huggingface/transformers/issues/14612
- https://github.com/huggingface/transformers/issues/14065
- https://github.com/huggingface/transformers/issues/14086
it seems like the best solution here is to do a breaking change in that the output beam scores of generate now correspond better to the transition scores.
The added tests to this PR should explain nicely how one can retrieve all the relevant information from the beam scores in combination with newly added beam_indices.
Very curious to also hear your thoughts on this PR:
- @felix-schneider
- @hacobe
- @qqaatw
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-07-2021 01:06:32 | 12-07-2021 01:06:32 | Other related PR maybe : https://github.com/huggingface/transformers/issues/14921<|||||>Thanks a lot for your feedback here guys and sorry to only pick up the PR again.
@Narsil @felix-schneider,
I see that you guys would like to make it easier for the user to directly get the transition probabilities of the generated tokens.
First, let's clarify a couple of things:
@Narsil, this PR is breaking in that the `scores` output is now different in terms of its values (the shape stays the same). We have only ever tested that the shape is correct, but never whether the output values are "correct" (which is a bit weird, as it would be similar to testing whether the values of the attention matrices correspond to some hardcoded values). Nevertheless, we verify this now in the test added here.
So this is indeed a big breaking change (it's somewhat equal to changing the output values of the attention matrices when setting `output_attentions=True`), but given that `scores` output has been an experimental feature and is quite complex for generation, I think we can make an exception here, given all the issues above. Also cc @LysandreJik @sgugger @patil-suraj here - no need to understand the change in-detail, but maybe just so that you're aware that this PR is a big breaking change for people doing `model.generate(...., return_dict_in_generate=True, output_scores=True)`.
Second, @felix-schneider, I do understand your point of view in that outputting the raw `scores` doesn't add much value. However, it is important to keep the following in mind:
1. On of the philosophies of `transformers` is the following:
```
Expose the models’ internals as consistently as possible:
We give access, using a single API, to the full hidden-states and attention weights.
Tokenizer and base model’s API are standardized to easily switch between mode
```
(see https://huggingface.co/docs/transformers/philosophy)
This does very well apply here as well. We are simply outputting the raw scores that are used to sample the next token in generation which is in my opinion quite consistent with "expose the models' internals".
2. The library is heavily used by researchers who do whatsoever with the model internals for analysis, complex algorithms etc...in my experience, there are always a lot of use cases that one doesn't take into account when trying to design an API that "makes most sense for the end user" meaning that often one thinks there is only one use case and that only this use case should be implemented.
So far the approach of exposing the model's internals worked well IMO (e.g. many people like the fact that the attention mask is that easily available), but as you pointed out it does have the downside to make it more complex and potentially more difficult (especially for development/production oriented users). So it's always a trade-off when deciding what values should be exposed and how.
3. It's **very very** important that we keep backward compatibility. While this PR already breaks Backward comp in some sense, we should definitely not fully remove the `scores` output - all users that have used `scores` before would then have no way to use `transformers` for their use case anymore -> so this cannot be done.
4. We have to be careful to not make maintenance explode here. The less "bare bone" the output is, *i.e.* the more specific, the more prone it is to have errors and the costlier is the maintenance
As a conclusion, I do like @Narsil's idea of adding the functionality of computing the transition probabilities as a function on the output classes (or on the model itself). This way, it's very easy for the user to compute the transition scores (just pass `scores` and `beam_indices` into `model.compute_transition_probabilities(scores, beam_indices)`); adding a single line would be enough for the user.
I agree with @felix-schneider that we should not sum up the probs and we should probably also allow the user to pass an `attention_mask` so that the final values are correctly masked.
I will adapt the PR according to this. It would be great if you could then take a second look whether it fits your needs <|||||>I see your point, @patrickvonplaten. My suggestion though would be to have the `compute_transition_probabilities` method live with the object returned by `generate`, rather than the model, as this object already has access to all required values: `scores`, `beam_indices` and `attention_mask`.
Putting it with the model implies the ability for different models to modify this behaviour. I'm not sure that this is actually possible as getting transition scores from model outputs is done by beam search and not customizable by models.<|||||>> I see your point, @patrickvonplaten. My suggestion though would be to have the `compute_transition_probabilities` method live with the object returned by `generate`, rather than the model, as this object already has access to all required values: `scores`, `beam_indices` and `attention_mask`.
>
> Putting it with the model implies the ability for different models to modify this behaviour. I'm not sure that this is actually possible as getting transition scores from model outputs is done by beam search and not customizable by models.
I see your point! We could add the function (also as @Narsil suggested) to the `BeamSearchEncoderDecoderOutput` class as you rightly say it's pretty "generation function"-specific. I'm still leaning against it though because, we've never done this kind of design in `transformers` and I'm not sure it's very intuitive for our users as the workflow would look as follows:
```python
sequence_outputs = model.generate(..., output_scores=True, return_dict_in_generate=True)
transition_scores = sequence_outputs.compute_transition_scores(vocab_size=model.config.vocab_size)  # note that it's also a bit tedious to get the vocab size here (we might actually retrieve that from the shape of some outputs, though it's not trivial)
```
I think there are three possible designs we could do here:
- 1. The one shown above
- 2. What's implemented now:
```python
sequence_outputs = model.generate(..., output_scores=True, return_dict_in_generate=True)
transition_scores = model.compute_transition_scores(sequence_outputs.sequences, sequence_outputs.scores, sequence_outputs.beam_indices)
```
- 3. Just compute it when scores should be returned:
```python
sequence_outputs = model.generate(..., output_scores=True, return_dict_in_generate=True)
sequence_outputs.transition_scores
```
While 3 is the nicest for the end-user, it is a **very** specific (it creates the transition scores of only the generated sequence ids). Also, it creates quite some maintenance which I would like to avoid a bit.
To summarize, I think 2 is the best way to go because it's general in a sense that people can create the transition probs of any possible sequence of ids (not just the generated ones).
Keen to hear your final thoughts<|||||>I agree with `2.` It's easier to upgrade later, than downgrade `3.` later (if maintenance becomes hard).<|||||>I like the `3.` more than `2.` if the maintenance cost would not become too heavy as IMO it looks more intuitive to users to get the transition scores that most users are looking for (I think). We could also keep `model.compute_transition_scores` in case users want to calculate their own scores.<|||||>Ok seems like we need some votes here for 2. vs 3. haha.
@patil-suraj @LysandreJik @sgugger what do you think?<|||||>I don't have enough context in terms of what "the burden of maintenance" entails for option 3 there, but it would be the most elegant in my opinion. But as a propriety to only be computed when asked for, not as a regular attribute that would be computed all the time (if that's possible, if not we should fall back to option 2).
I'm also okay with option 2.<|||||>Taking everything into account, I'm going for option 2. now (which is the one implemented here). Given that:
- we can always go from 2. to 3. (but not the other way around because of backward comp),
- 2. is more general than 3. at lower maintenance - I see many use cases of computing the scores of not generated ids, but almost generated ids
- and the fact that this is a very researchy feature in general where some knowledge about `generate` can be expected
2 is the better option here. Let's see in the coming month how much this feature is used. I'll also try to make an updated blog post about `generate` in general where I'll include this feature<|||||>hi @patrickvonplaten,
It seems that your PR is not available yet in transformers =4.18.0 . My first attempt as many people :) is to try to match between the `scores` and the sequnces using softmax until I see your discussion. Is this will cover `T5ForConditionalGeneration` ? I am interested to get the `logit` or `score` for each token in the `sequences` |
transformers | 14,653 | closed | Missing weight parameter when loading from deepspeed stage-2 | Converting deepspeed_zero_stage_2 to fp32 misses some parameters. I was finetuning bart-large using deepspeed_stage_2. I converted to fp32 checkpoints(using zero_to_fp32.py) and got a .bin file.
When I try to load this model using model.load_state_dict(path_to_bin_file) I get the following keys missing error.
"model.embed.tokens.weight" ?
I am using pytorch-lightning for training. | 12-06-2021 22:57:51 | 12-06-2021 22:57:51 | Maybe @stas00 has encountered this error in the past<|||||>Since you were using PL, you probably want to ask that question on their Issues, since I don't know how they did the Deepspeed integration.
i.e. this question is unrelated to `transformers` |
transformers | 14,652 | closed | [deepspeed] fix --load_best_model_at_end | This PR is making `--load_best_model_at_end` reported in https://github.com/huggingface/transformers/issues/14628 work.
There is a problem with resuming from a checkpoint after the deepspeed engine was used https://github.com/microsoft/DeepSpeed/issues/1394 - I made a new Issue there to stress that this is a problem. https://github.com/microsoft/DeepSpeed/issues/1612
This PR:
- adds a workaround that re-creates the deepspeed engine when resuming from `--load_best_model_at_end`
- fixes the timing when deepspeed checkpoint loading kicks in
- adds a test
- adds a commented out test that requires fixing on the deepspeed side
Additionally:
- I added a saving of the DS checkpoint on model_save - I think that would be useful to some users - it's a very quick operation.
Fixes: https://github.com/huggingface/transformers/issues/14628
@sgugger | 12-06-2021 22:07:19 | 12-06-2021 22:07:19 | |
transformers | 14,651 | closed | Transformer-XL -100 label padding | Transformers version: 4.12.5
model: Transformer-XL / transfo_xl
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/transfo_xl/modeling_transfo_xl.py#L1011
issue:
In the docstring it is specified that: All labels set to ``-100`` are ignored (masked).
However when passed to the model the error: index -100 is out of bounds for dimension 1 with size {vocab size} is thrown
I assume this is because of the different softmax function used by this model
| 12-06-2021 21:46:44 | 12-06-2021 21:46:44 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,650 | closed | Make `CLIPFeatureExtractor` accept batch of images as `torch.Tensor`. | # 🚀 Feature request
Make `CLIPFeatureExtractor` (or any FeatureExtractor in general) accept batch of images as `torch.Tensor`.
## Motivation
Currently, a batch of images passed as a single `torch.Tensor` is not treated as a batch; it has to be a `List[torch.Tensor]`, which is not what a native PyTorch DataLoader produces. Can we update this line so that it accepts batches as `torch.Tensor`? Maybe we can check whether the tensor has 4 dimensions and then assume it is a batch?
https://github.com/huggingface/transformers/blob/75ae287aecf20a37c232a41e25443a3421a8b5e2/src/transformers/models/clip/feature_extraction_clip.py#L136
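Until that is supported, a small sketch of the `ndim` idea used as a workaround on the caller side (illustrative, not library code):
```python
import torch
from transformers import CLIPFeatureExtractor

feature_extractor = CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32")

def as_image_list(images):
    # treat a 4D (batch, channels, height, width) tensor as a batch of single images
    if isinstance(images, torch.Tensor) and images.ndim == 4:
        return list(images.unbind(dim=0))
    return images

batch = torch.rand(8, 3, 224, 224)  # e.g. what a PyTorch DataLoader yields
inputs = feature_extractor(images=as_image_list(batch), return_tensors="pt")
print(inputs["pixel_values"].shape)
```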
@patil-suraj @TobiasNorlund | 12-06-2021 21:20:53 | 12-06-2021 21:20:53 | @fcakyon, why was I mentioned here? The only time I edited this file was to add a missing space in an error message.<|||||>My apologies @aphedges. Saw your name on PR.<|||||>Thanks for the issue!
Since a batch of images is a very common input I think we should support this. Pinging @NielsRogge, @sgugger
@fcakyon Also note that when resizing, the feature extractors convert the tensors to PIL images, so the most efficient way to use these is to pass a list of PIL images.
<|||||>@patil-suraj I was using as:
`PIL Images` > `Pytorch DataLoader` > `batch as torch.Tensor` > `CLIPFeatureExtractor` > `CLIPModel`
But in this workflow `CLIPFeatureExtractor` treats `batch as torch.Tensor` as a single image tensor and as a result `CLIPModel` gives error.
Is the intended usage different?<|||||>We could support batch as tensors but as explained in the above comment it won't be efficient. So I would recommend passing a list of PIL images.<|||||>@patil-suraj Do you still see supporting batched tensors as necessary/beneficial? It seems pretty straightforward (e.g. checking `ndim`), but I thought I'd ask for confirmation before jumping into a draft PR.<|||||>I found a way to input a batched tensor by first converting it into a list of tensors. However, it is still not properly:
I think that supporting batched tensors is necessary. In my case, I want to use a feature extractor (ViTFeatureExtractor) during a forward pass, where I may have my tensor ready to use (with gradients). What also happens is that I can't use the feature extractors, because they first convert tensors into a PIL image, hence losing the gradients.
The support should be similar to the one used in Pytorch. For instance in https://pytorch.org/vision/main/models/generated/torchvision.models.vgg16_bn.html#torchvision.models.VGG16_BN_Weights.
Please correct me if something is wrong, and let me know if there are other possibilities.<|||||>Hi, I see that the issue is still up for grabs.
Is it possible that I can take it up?
Pinging @NielsRogge and @patil-suraj <|||||>@amyeroberts do you think we should support this?<|||||>Hi, I fixed the problem by directly using ViTModel for that same task and ignoring the FeatureExtractor.<|||||>A batch of images can now be passed into an image processor after the merging of #21144.
As @patil-suraj points out, it won't be efficient, as the batch is transformed to a list of images but it is now an accepted input. @joanrod the gradients will still be lost unfortunately too. A related issue #21064 has been raised and @ErwannMillon will be contributing a research example with a custom image processor (I imagine with a similar solution to the one you found). |
transformers | 14,649 | closed | Finetuning T5 for 2 tasks with 2 different prefixes and different data gives the same result. | Hello,
I am able to finetune T5 on a paraphrasing task ... and the results are quite good. So, for example, when I train modelA on dataA and modelB on dataB, they produce different results; but when I train modelC on prefixA + dataA plus prefixB + dataB, the results with prefixA and prefixB are almost the same.
What could be the problem? Is this a usual thing? T5 should be able to handle multiple tasks at the same time.
I can train 2 models, but I do not have enough memory for inference.
| 12-06-2021 21:13:26 | 12-06-2021 21:13:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It is too bad that a model that is advertised as doing multiple tasks is actually not capable of it. |
transformers | 14,648 | closed | LEDTokenizer doesn't pad `global_attention_mask` | ## Environment info
- `transformers` version: 4.13.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script: False
### Who can help
Models: @patrickvonplaten
## Information
Model I am using (LEDTokenizer, LEDSeq2SeqLM):
The problem arises when using:
[* ] my own modified scripts:
- I'm trying to finetune a LEDSeq2SeqLM model for abstractive summarization of long-speech-transcripts.
- I followed the code [here](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing)
- I modified it to use dynamic padding by first not padding and then by using DataCollatorForSeq2Seq for dynamic padding
```python
import numpy as np
from transformers import AutoTokenizer, DataCollatorForSeq2Seq, Seq2SeqTrainer, Seq2SeqTrainingArguments

# `model`, `compute_metrics` and `dataset_dict` are defined elsewhere in the notebook
model_name = "allenai/led-base-16384"
tokenizer = AutoTokenizer.from_pretrained(model_name)
def tokenize_examples(examples):
inputs = ["\n".join(document) for document in examples["document"]]
targets = ["\n".join(document) for document in examples["summary"]]
model_inputs = tokenizer(inputs, max_length=tokenizer.model_max_length, padding=False, truncation=True)
model_inputs["global_attention_mask"] = [np.zeros_like(input).tolist() for input in model_inputs["input_ids"]]
# put global attention on <s> token
for input in model_inputs["global_attention_mask"][:]:
input[0] = 1
model_inputs["global_attention_mask"] = model_inputs["global_attention_mask"]
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(targets, max_length=512, padding=False, truncation=True,)
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
def preprocess_dataset(dataset):
return (dataset
.map(tokenize_examples, batched=True, batch_size=5, num_proc=2,)
.shuffle())
batch_size=2
data_collator = DataCollatorForSeq2Seq(
tokenizer=tokenizer,
model=model,
max_length=tokenizer.model_max_length,
pad_to_multiple_of=8,
label_pad_token_id = -100,)
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
dataloader_drop_last=True,
group_by_length=True,
# fp16=True,
output_dir="./models/led-16k",
logging_steps=5,
eval_steps=10,
save_steps=10,
save_total_limit=2,
gradient_accumulation_steps=4,
num_train_epochs=1,
)
# instantiate trainer
trainer = Seq2SeqTrainer(
model=model,
tokenizer=tokenizer,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=dataset_dict["train"],
eval_dataset=dataset_dict["valid"],
)
```
## To reproduce
Steps to reproduce the behavior:
1. Try dynamic padding using the `DataCollatorForSeq2Seq` as shown above.
2. Get the following error log.
```
ValueError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis)
704 if not is_tensor(value):
--> 705 tensor = as_tensor(value)
706
ValueError: expected sequence of length 4096 at dim 1 (got 3157)
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
8 frames
<ipython-input-26-c3d8f1eba49d> in <module>()
1 # start training
2 torch.cuda.empty_cache()
----> 3 trainer.train()
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1288 self.control = self.callback_handler.on_epoch_begin(args, self.state, self.control)
1289
-> 1290 for step, inputs in enumerate(epoch_iterator):
1291
1292 # Skip past any already trained steps if resuming training
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self)
519 if self._sampler_iter is None:
520 self._reset()
--> 521 data = self._next_data()
522 self._num_yielded += 1
523 if self._dataset_kind == _DatasetKind.Iterable and \
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
559 def _next_data(self):
560 index = self._next_index() # may raise StopIteration
--> 561 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
562 if self._pin_memory:
563 data = _utils.pin_memory.pin_memory(data)
/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
50 else:
51 data = self.dataset[possibly_batched_index]
---> 52 return self.collate_fn(data)
/usr/local/lib/python3.7/dist-packages/transformers/data/data_collator.py in __call__(self, features, return_tensors)
564 max_length=self.max_length,
565 pad_to_multiple_of=self.pad_to_multiple_of,
--> 566 return_tensors=return_tensors,
567 )
568
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose)
2794 batch_outputs[key].append(value)
2795
-> 2796 return BatchEncoding(batch_outputs, tensor_type=return_tensors)
2797
2798 def create_token_type_ids_from_sequences(
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __init__(self, data, encoding, tensor_type, prepend_batch_axis, n_sequences)
208 self._n_sequences = n_sequences
209
--> 210 self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
211
212 @property
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis)
720 )
721 raise ValueError(
--> 722 "Unable to create tensor, you should probably activate truncation and/or padding "
723 "with 'padding=True' 'truncation=True' to have batched tensors with the same length."
724 )
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
The error was not very helpful, since the tokenizer is already called with `padding`, `truncation`, and `max_length` parameters.
The problem was with the `global_attention_mask` not being padded in the `DataLoader` [here](https://github.com/huggingface/transformers/blob/75ae287aecf20a37c232a41e25443a3421a8b5e2/src/transformers/data/data_collator.py#L585)
I was able to pinpoint the problem to the padding logic in the tokenizer, where the model inputs to be padded don't include `global_attention_mask`:
https://github.com/huggingface/transformers/blob/75ae287aecf20a37c232a41e25443a3421a8b5e2/src/transformers/tokenization_utils_base.py#L3121-L3150
I changed lines L3128-L3131 (https://github.com/huggingface/transformers/blob/75ae287aecf20a37c232a41e25443a3421a8b5e2/src/transformers/tokenization_utils_base.py#L3128-L3131) to the following, and everything worked.
```python
if self.padding_side == "right":
    if return_attention_mask:
        encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
        encoded_inputs["global_attention_mask"] = (
            encoded_inputs["global_attention_mask"] + [0] * difference
        )
    if "token_type_ids" in encoded_inputs:
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
My fix was a hack for sure. We could perhaps override the `_pad` method in the [LEDTokenizer](https://github.com/huggingface/transformers/blob/75ae287aecf20a37c232a41e25443a3421a8b5e2/src/transformers/models/led/tokenization_led.py#L39) so that it also pads LED's `global_attention_mask`,
something like the following.
```python
from typing import Dict, List, Optional, Union

from transformers import LEDTokenizer
from transformers.file_utils import PaddingStrategy
from transformers.tokenization_utils_base import BatchEncoding, EncodedInput

# EncodedInput = List[int]


class LEDTokenizerFixed(LEDTokenizer):
def _pad(
self,
encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
max_length: Optional[int] = None,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
pad_to_multiple_of: Optional[int] = None,
return_attention_mask: Optional[bool] = None,
) -> dict:
"""
Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
Args:
encoded_inputs: Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
max_length: maximum length of the returned list and optionally padding length (see below).
Will truncate by taking into account the special tokens.
padding_strategy: PaddingStrategy to use for padding.
- PaddingStrategy.LONGEST Pad to the longest sequence in the batch
- PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
- PaddingStrategy.DO_NOT_PAD: Do not pad
The tokenizer padding sides are defined in self.padding_side:
- 'left': pads on the left of the sequences
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_attention_mask: (optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
# Load from model defaults
if return_attention_mask is None:
return_attention_mask = "attention_mask" in self.model_input_names
required_input = encoded_inputs[self.model_input_names[0]]
if padding_strategy == PaddingStrategy.LONGEST:
max_length = len(required_input)
if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length
# Initialize attention mask if not present.
if return_attention_mask and "attention_mask" not in encoded_inputs:
encoded_inputs["attention_mask"] = [1] * len(required_input)
if needs_to_be_padded:
difference = max_length - len(required_input)
if self.padding_side == "right":
if return_attention_mask:
encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
encoded_inputs["global_attention_mask"] = (
encoded_inputs["global_attention_mask"] + [0] * difference
)
if "token_type_ids" in encoded_inputs:
encoded_inputs["token_type_ids"] = (
encoded_inputs["token_type_ids"] + [self.pad_token_type_id] * difference
)
if "special_tokens_mask" in encoded_inputs:
encoded_inputs["special_tokens_mask"] = encoded_inputs["special_tokens_mask"] + [1] * difference
encoded_inputs[self.model_input_names[0]] = required_input + [self.pad_token_id] * difference
elif self.padding_side == "left":
if return_attention_mask:
encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"]
encoded_inputs["global_attention_mask"] = [0] * difference + encoded_inputs["global_attention_mask"]
if "token_type_ids" in encoded_inputs:
encoded_inputs["token_type_ids"] = [self.pad_token_type_id] * difference + encoded_inputs[
"token_type_ids"
]
if "special_tokens_mask" in encoded_inputs:
encoded_inputs["special_tokens_mask"] = [1] * difference + encoded_inputs["special_tokens_mask"]
encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input
else:
raise ValueError("Invalid padding strategy:" + str(self.padding_side))
return encoded_inputs
```
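With an override like this, `tokenizer.pad` also pads `global_attention_mask`; a rough usage sketch (illustrative only):

```python
tokenizer = LEDTokenizerFixed.from_pretrained("allenai/led-base-16384")
enc = tokenizer(["a short document", "a somewhat longer document about something"], padding=False)
# global attention on the first (<s>) token of each example
enc["global_attention_mask"] = [[1] + [0] * (len(ids) - 1) for ids in enc["input_ids"]]
batch = tokenizer.pad(enc, padding=True, return_tensors="pt")
print(batch["global_attention_mask"].shape)  # now padded to the same length as input_ids
```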
| 12-06-2021 20:32:12 | 12-06-2021 20:32:12 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten : is this something I can contribute towards ?<|||||>Hey @parambharat,
It would be great if you could try to fix the problem by opening a PR! More than happy to take a look :-) <|||||>@ydshieh think you've worked with LED quite a bit recently. Could you take a look here? :-)<|||||>Sure, I will try it 😎<|||||>Hi @parambharat , @patrickvonplaten
I looked into this issue and think @parambharat 's suggestion makes sense, but it needs to be refined:
In the method `_pad`, just like `attention_mask`, we need to deal with the case `"global_attention_mask" not in encoded_inputs`:
1. either provide a default value
2. or just do nothing, and not include `global_attention_mask` to the outputs
If we go for option 1, I think it is more logical to include `global_attention_mask` in the output of `encode` and other tokenizer methods. But I prefer not to override many methods. So I will go for option 2 (i.e. not returning `global_attention_mask` if it is not provided by the user).
<|||||>I understand the issue now much better - thanks for clarifying! IMO it's a good idea to overwrite the `_pad` method in the tokenizer, and I agree with @ydshieh that option 2.) is simpler and makes more sense here! @parambharat would you be interested in opening a PR here, or @ydshieh maybe? :-)<|||||>Let's see if @parambharat would like (or has time) to contribute first. Otherwise, I can work on it.<|||||>@parambharat
This issue is finally fixed in #15940. |
transformers | 14,647 | closed | Reproducibility issue with Trainer | Hi,
I am trying to fine-tune a model using the PyTorch backend and the `Trainer` class, but when I train twice with exactly the same parameters, I still get different results. I have supplied the seed and also used PyTorch-specific commands:
```py
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
```
However, after all of these failed I went to Trainer's source code and noticed these lines (https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L566)
```py
if self.args.world_size <= 1 and _is_torch_generator_available:
    generator = torch.Generator()
    generator.manual_seed(int(torch.empty((), dtype=torch.int64).random_().item()))
```
Is there any reason to actually sample the seed here? Wouldn't it hurt reproducibility between the runs? I'm not sure if this is a bug or not, which is why I didn't open this issue as a bug.
Also, do you have any more suggestions on where to look for sources of randomness when using the PyTorch backend?
Thank you very much in advance!
Best regards,
Dmytro | 12-06-2021 19:34:13 | 12-06-2021 19:34:13 | Hi,
Googling "reproducibility Huggingface trainer" returns this helpful post on our forum: https://discuss.huggingface.co/t/fixing-the-random-seed-in-the-trainer-does-not-produce-the-same-results-across-runs/3442<|||||>Hi Niels!
Thanks for your swift response! I've seen the post, but it seemed people still couldn't make it work, judging from the last message in the thread. I've now tested setting the seed before instantiating the model and fixed a couple of other bugs of my own, and I now get reproducible results (roughly the setup sketched below).
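A sketch of the setup that works for me (the checkpoint and dataset are placeholders):

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments, set_seed

set_seed(42)  # seed python/numpy/torch *before* instantiating the model
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
training_args = TrainingArguments(output_dir="out", seed=42)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)  # train_dataset: placeholder
trainer.train()
```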
However, I still can't grasp why the original Trainer sets the seed to a random value in the code snippet provided in the original post of this thread, since this generator is used to instantiate a sampler, which in turn is passed to a DataLoader. Won't that hurt reproducibility?
Best,
Dmytro<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,646 | closed | fix flax examples tests | # What does this PR do?
This PR
- Makes `tensorboard` optional in the Flax QA and GLUE scripts to remove the dependency on `tf` (see the sketch after this list).
- Updates `test_fetcher.py` for the Flax examples.
- Marks all but one Flax example test as slow, since these tests take a lot of time. | 12-06-2021 18:18:15 | 12-06-2021 18:18:15 |
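As a rough illustration of the optional-`tensorboard` guard mentioned in the first bullet above (a sketch, not the exact change in this PR):

```python
# sketch: only create the Flax tensorboard writer when its dependencies are installed
try:
    from flax.metrics.tensorboard import SummaryWriter

    has_tensorboard = True
except ImportError:
    has_tensorboard = False

summary_writer = SummaryWriter(log_dir="runs") if has_tensorboard else None

def write_metric(name, value, step):
    if summary_writer is not None:
        summary_writer.scalar(name, value, step)
```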
transformers | 14,645 | closed | Add a job to test the documentation build | # What does this PR do?
This PR adds a GitHub action to test the documentation build. While we will eventually have something to check the built documentation itself, it's important for now to at least test that the build does not fail, even if we can't see the result. | 12-06-2021 17:56:02 | 12-06-2021 17:56:02 | The caching only gets us one minute back in the setup :-(
Once the dummies have the docstrings of their real counterparts, we can stop installing all dependencies and have better timing though!
transformers | 14,644 | closed | Fix syntax for class references | # What does this PR do?
This PR fixes the documentation links written in the form :obj:`~transformers.ObjectName`. Those are not detected by `doc-builder` (and support for them can't really be added, since we use `:obj:` very often), so I searched for and fixed the docstrings instead.
See the [TextClassificationPipeline](https://huggingface.co/docs/transformers/master/en/main_classes/pipelines#transformers.TextClassificationPipeline) in the current doc for an example of the problem:

cc @mishig25 @LysandreJik (merging to fix but will address any comment you have on your return Lysandre). | 12-06-2021 17:44:19 | 12-06-2021 17:44:19 | LGTM, thanks for the ping |
transformers | 14,643 | closed | fix flax example tests | # What does this PR do?
The `conftest.py` file was missing from the Flax examples directory, so the Flax example tests are failing on master.
(cf [here](https://app.circleci.com/pipelines/github/huggingface/transformers/30924/workflows/7c0b3b8b-d664-4dc2-9c7e-69520347f0d5/jobs/317301))
This PR adds the `conftest.py` file to the Flax examples directory. | 12-06-2021 17:38:01 | 12-06-2021 17:38:01 |
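For reference, a hypothetical minimal `conftest.py` of this kind (the actual file added in the PR may differ):

```python
# conftest.py -- hypothetical sketch: make the repository's `src` importable for the example tests
import sys
from os.path import abspath, dirname, join

git_repo_path = abspath(join(dirname(dirname(dirname(__file__))), "src"))
sys.path.insert(1, git_repo_path)
```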
transformers | 14,642 | closed | Encapsulate all forward passes of integration tests with "with torch.no_grad()" | Each model in the Transformers library has one or more corresponding integration tests, which make sure the implementation in Transformers returns the same exact logits/hidden states as the original implementation on the same input data.
e.g. the integration test for the Vision Transformer (ViT) can be found [here](https://github.com/huggingface/transformers/blob/df085d8ea8b823b85ecfe2e1968b5741983f98f3/tests/test_modeling_vit.py#L336). It verifies the shape of the logits as well as the first values on a cats image.
However, a lot of these integration tests don't leverage `with torch.no_grad()`, which makes sure no gradients are computed (computing gradients isn't necessary when doing inference).
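For illustration, a sketch of what a wrapped forward pass could look like in such a test (using ViT as an example; `prepare_img()` stands in for the test's image-loading helper):

```python
import unittest

import torch
from transformers import ViTFeatureExtractor, ViTForImageClassification

class ViTModelIntegrationTest(unittest.TestCase):
    def test_inference_image_classification_head(self):
        model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
        feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
        inputs = feature_extractor(images=prepare_img(), return_tensors="pt")  # prepare_img(): test helper

        with torch.no_grad():  # inference only, so no gradients are needed
            outputs = model(**inputs)

        self.assertEqual(outputs.logits.shape, torch.Size((1, 1000)))
```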
Hence, it would be great if someone could encapsulate the forward passes of the integration tests with `with torch.no_grad()`. | 12-06-2021 17:03:27 | 12-06-2021 17:03:27 | Hey, I'd love to work on this!<|||||>Sounds good @itsTurner, feel free to open a PR :)<|||||>@itsTurner Are you still working on this issue?. If not then I would love to contribute to this issue. <|||||>Feel free to open per-model PRs so that you may both contribute at the same time! (and so can other members of the community!)<|||||>@LysandreJik I have created this pr. Please let me know if this was the expected behaviour<|||||>Hey @NielsRogge, thanks for opening this issue and making it accessible for everyone!I have a few questions about this issue.
1. Is `with torch.no_grad()` preferable over the `@torch.no_grad()` decorator? Personally I find the latter cleaner, and it removes the need to hunt for forward passes and open a `with` statement each time, especially when a single test case has multiple forward passes.
2. PyTorch 1.9 introduced [`torch.inference_mode`](https://pytorch.org/docs/stable/generated/torch.inference_mode.html), which provides further optimizations on top of `torch.no_grad()`. Would this be preferable, or would it introduce backward compatibility issues with testing? I'm wondering if we can also create a wrapper that decides which context manager to use depending on the user's `torch` version.
Appreciate your input in advance!<|||||>@LysandreJik Do you guys still need help with the tests ? I would love to open up a PR and work on this. Please let me know. <|||||>Found some integration tests not leveraging `with torch.no_grad()`,
I'll take on those and open PR :)<|||||>Hi @jaketae, that's a very good point. I'll cc @LysandreJik and @patrickvonplaten to decide on which option we use best for integration tests. In any case, an annotator is more clean than using a context manager imo.<|||||>No opposition from my side to use `@torch.no_grad()`, I find it clean as well, and this way we ensure that the whole method does not use gradients.<|||||>Ok for me too! <|||||>@NielsRogge @LysandreJik I believe everyone has agreed. Is this issue still open ? Can I work on this ?<|||||>cc'ing @ydshieh here as I'm not sure we can just add `torch.no_grad()`, due to the fact that torch needs to be available for that<|||||>This will fail if torch is not available. Even if I place `require_torch` in difference places and in different order with `torch.no_grad`. We can make a custom `no_grad` decorator if this is OK for everyone.<|||||>Hi @LysandreJik do you still need help with this issue? I'd love to open a PR on the RemBert and Camembert models! |
transformers | 14,641 | closed | doc: mismatch between pooler/d_output | The model outputs a pooler_output whereas the doctype examples were using a pooled_output.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger
| 12-06-2021 15:11:52 | 12-06-2021 15:11:52 | |
transformers | 14,640 | closed | Add mLUKE | # What does this PR do?
Reopen the following PR:
https://github.com/huggingface/transformers/pull/14570
I have addressed the comments from the previous PR (added `# Copied from ...` comments, replaced `assert` statements with `ValueError` exceptions).
I have also added a few tests to make sure `LukeTokenizer` and `MLukeTokenizer` correctly handle padding with entity inputs.
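A rough sketch of the kind of check involved (the checkpoint and spans are illustrative, not the exact test code):

```python
from transformers import MLukeTokenizer

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
text = "Tokyo is the capital of Japan."
entity_spans = [
    (text.index("Tokyo"), text.index("Tokyo") + len("Tokyo")),
    (text.index("Japan"), text.index("Japan") + len("Japan")),
]

enc = tokenizer(text, entity_spans=entity_spans, padding="max_length", max_length=32)
assert len(enc["input_ids"]) == 32
assert len(enc["entity_ids"]) == len(enc["entity_attention_mask"])  # entity inputs padded consistently
```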
## Who can review?
@NielsRogge @LysandreJik @sgugger | 12-06-2021 14:41:11 | 12-06-2021 14:41:11 | It looks like there are quite a few failures in the Luke tests. You can see them locally with:
```
pytest tests/test_modeling_luke.py
```
from the root repo.<|||||>Thanks, I’ve fixed that.
I still see some tests failed in CI (`run_tests_tf`), but I guess it is not from the changes in this PR?<|||||>Yes, they're timing out for some reason, it's not a failure due to your PR.
Thanks a lot for adding this new model! |