repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 9,611 | closed | [bugs]: class DataCollatorForWholeWordMask: e["input_ids"] does not have .size(), change to len() |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
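The PR body was left as the template, so here is a rough sketch of what the title seems to describe (my own illustration, not the actual diff): the collator can presumably receive examples whose `"input_ids"` are plain Python lists, which have no `.size()`, while `len()` works for both lists and tensors.
```python
import torch

example = {"input_ids": [101, 2023, 2003, 102]}   # examples may arrive as plain lists
# example["input_ids"].size(0)                    # AttributeError: 'list' object has no attribute 'size'
print(len(example["input_ids"]))                  # 4 -- works for a list
print(len(torch.tensor(example["input_ids"])))    # 4 -- and for a tensor as well
```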
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-15-2021 05:46:42 | 01-15-2021 05:46:42 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,610 | closed | [DeepSpeed docs] new information | As I bombarded DeepSpeed with multiple issues, the answers are starting to percolate back, so I will gather them in this PR. I will let it sit for a while collecting updates, unless users need those answers sooner.
* [x] how to run DeepSpeed with a 1 gpu which is not GPU 0 (`CUDA_VISIBLE_DEVICES` can't be used)
* [x] add a newly published paper to resources
* [x] various small additions/improvements | 01-15-2021 03:48:39 | 01-15-2021 03:48:39 | |
transformers | 9,609 | closed | change masked_bias to -inf | # What does this PR do?
change masked_bias to -inf
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 01-15-2021 03:31:40 | 01-15-2021 03:31:40 | Hi, thank you for opening a PR!
Our goal is to stay as close to the initial implementation as possible. The original implementation by OpenAI uses -1e4, so we will keep it this way.<|||||>I find the initial implementation is `-1e10` in https://github.com/openai/gpt-2/blob/master/src/model.py#L88
```py
w = w*b - tf.cast(1e10, w.dtype)*(1-b)
```
related issue #9594
I am not quite sure, but I guess `-1e10` is not compatible with `fp16`; that may be the reason behind the Hugging Face implementation.
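As a quick sanity check of that guess (my own illustration, not from the thread): fp16 cannot represent magnitudes around `1e10` (its maximum is about `65504`), so such a mask value overflows to `-inf`, while `-1e4` is still representable:
```python
import torch

print(torch.tensor(-1e10).to(torch.float16))  # tensor(-inf, dtype=torch.float16)
print(torch.tensor(-1e4).to(torch.float16))   # tensor(-10000., dtype=torch.float16)
print(torch.finfo(torch.float16).max)         # 65504.0
```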
|
transformers | 9,608 | closed | Convert ckpt from TFTrainer to huggingface format. | I trained models with ```Trainer(pytorch)``` and ```TFTrainer(tensorflow)```, respectively.
With ```Trainer```, everything is ok. Saved models are directly applicable to huggingface pipeline (eg, AutoModel('model_name')).
But with saved models from ```TFTrainer``` (ckpt format), I cannot do that with ```AutoModel``` or ```TFAutoModel```.
I can restart the training, so files do not have a problem.
I guess the problem is that the ckpt file contains both the weights and other parameters related to the optimizer.
How can I transform my ckpt file to huggingface-applicable format like ```tf_model.h5``` or convert to pytorch?
@jplu | 01-15-2021 03:14:01 | 01-15-2021 03:14:01 | Hello!
You have to use the `save_model` method of the trainer.<|||||>ok. thanks! |
transformers | 9,607 | closed | [run_ner.py]You need to instantiate RobertaTokenizerFast with add_prefix_space=True to use it with pretokenized inputs | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-5.4.0-53-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
examples/token-classification: @stefan-it
tokenizers: @mfuntowicz
## Information
Model I am using Roberta:
The problem arises when using:
* The official example scripts: `transformers/examples/token-classification/run_ner.py`
The tasks I am working on is:
* an official task: Named Entity Recognition on `CoNLL 2003`
## To reproduce
Steps to reproduce the behavior:
run this command:
`python ./transformers/examples/token-classification/run_ner.py --model_name_or_path roberta-base --dataset_name conll2003 --output_dir ./roberta_base_cased_conll2003 --do_train --do_eval`
I am using the `run_ner.py` of a very recent commit: `126fd281`
```
$ md5sum run_ner.py
cb6401e787266812f791a1e3052465d3 run_ner.py
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I got this error:
```
AssertionError: You need to instantiate RobertaTokenizerFast with add_prefix_space=True to use it with pretokenized inputs.
```
I tested other models, such as `bert-base-cased`, `bert-large-cased`, `xlm-roberta-base`, `xlnet-base-cased`. All of these worked. But `roberta-base` and `roberta-large` have this error.
This is the full output on screen:
```
01/14/2021 20:34:28 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 2distributed training: False, 16-bits training: False
01/14/2021 20:34:28 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=./roberta_base_cased_conll2003, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Jan14_20-34-28_ubuntu18, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=./roberta_base_cased_conll2003, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, _n_gpu=2)
Reusing dataset conll2003 (/home/fangli/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/63ba56944e35c1943434322a07ceefd79864672041b7834583709af4a5de4664)
[INFO|configuration_utils.py:445] 2021-01-14 20:34:29,366 >> loading configuration file https://huggingface.co/roberta-base/resolve/main/config.json from cache at /home/fangli/.cache/huggingface/transformers/733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b
[INFO|configuration_utils.py:481] 2021-01-14 20:34:29,366 >> Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "ner",
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.2.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|configuration_utils.py:445] 2021-01-14 20:34:29,405 >> loading configuration file https://huggingface.co/roberta-base/resolve/main/config.json from cache at /home/fangli/.cache/huggingface/transformers/733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b
[INFO|configuration_utils.py:481] 2021-01-14 20:34:29,405 >> Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.2.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1760] 2021-01-14 20:34:29,584 >> loading file https://huggingface.co/roberta-base/resolve/main/vocab.json from cache at /home/fangli/.cache/huggingface/transformers/d3ccdbfeb9aaa747ef20432d4976c32ee3fa69663b379deb253ccfce2bb1fdc5.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
[INFO|tokenization_utils_base.py:1760] 2021-01-14 20:34:29,585 >> loading file https://huggingface.co/roberta-base/resolve/main/merges.txt from cache at /home/fangli/.cache/huggingface/transformers/cafdecc90fcab17011e12ac813dd574b4b3fea39da6dd817813efa010262ff3f.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1760] 2021-01-14 20:34:29,585 >> loading file https://huggingface.co/roberta-base/resolve/main/tokenizer.json from cache at /home/fangli/.cache/huggingface/transformers/d53fc0fa09b8342651efd4073d75e19617b3e51287c2a535becda5808a8db287.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
[INFO|modeling_utils.py:1027] 2021-01-14 20:34:29,701 >> loading weights file https://huggingface.co/roberta-base/resolve/main/pytorch_model.bin from cache at /home/fangli/.cache/huggingface/transformers/51ba668f7ff34e7cdfa9561e8361747738113878850a7d717dbc69de8683aaad.c7efaa30a0d80b2958b876969faa180e485944a849deee4ad482332de65365a7
[WARNING|modeling_utils.py:1135] 2021-01-14 20:34:32,134 >> Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForTokenClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
- This IS expected if you are initializing RobertaForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:1146] 2021-01-14 20:34:32,134 >> Some weights of RobertaForTokenClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "/home/fangli/github/transformers/examples/token-classification/run_ner.py", line 428, in <module>
main()
File "/home/fangli/github/transformers/examples/token-classification/run_ner.py", line 319, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1240, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1211, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/home/fangli/github/transformers/examples/token-classification/run_ner.py", line 290, in tokenize_and_align_labels
is_split_into_words=True,
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2329, in __call__
**kwargs,
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2514, in batch_encode_plus
**kwargs,
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 155, in _batch_encode_plus
f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True "
AssertionError: You need to instantiate RobertaTokenizerFast with add_prefix_space=True to use it with pretokenized inputs.
```
Thanks for help!
Best,
Li | 01-15-2021 01:55:08 | 01-15-2021 01:55:08 | Hi,
I would like to report the same problem. I see this problem only with RoBERTa base or large, and I am also using transformers 4.2.2.
Any suggestions or help would be appreciated.
Thanks.<|||||>Hi,
I had the same issue. I solved it by adding add_prefix_space=True to the tokenizer.
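For example (a minimal sketch of that workaround; the checkpoint name is just an illustration):
```python
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
# pretokenized input is now accepted without the assertion error
encoding = tokenizer(["My", "name", "is", "Li"], is_split_into_words=True)
```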
Best<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
I am having the same issue.
I am loading from json -
`python $SCRATCH/transformers/examples/token-classification/run_ner.py \
--model_name_or_path roberta-base \
--train_file dict_structure/trivia_training.json \
--validation_file dict_structure/trivia_val.json \
--output_dir roberta_base_on_MITMovieNER/ \
--do_train \
--do_eval \
--per_device_train_batch_size 64 \
--per_device_eval_batch_size 20 \
--num_train_epochs 40 \
--overwrite_output_dir \
--evaluation_strategy steps \
--save_steps 1000 \
--eval_steps 500 \
--logging_first_step \`
Sorry, not sure if this is an issue on my end. @stefan-it <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This remains an issue using the official example and official task; it would be great to see this addressed. |
transformers | 9,606 | open | [DeepSpeed] Features to integrate / Optimizations to add / Experiments to do | # 🚀 Feature request
While we have support for the main DeepSpeed features integrated, there are other powerful features that haven't been explored yet and which can provide even more performance boosts. Some will probably require no changes on our side, while others require changes in the model and/or trainer.
This issue is to track what's possible and the priorities if any.
## Features to integrate
* [ ] [1-bit Adam](https://www.deepspeed.ai/tutorials/onebit-adam/) - Up to 5x less communication volume and up to 2x faster training
* [ ] [Progressive Layer Dropping](https://www.deepspeed.ai/tutorials/progressive_layer_dropping/) - Accelerating Training of Transformer-Based Language Models
* [ ] [DeepSpeed Sparse Attention](https://www.deepspeed.ai/tutorials/sparse-attention/) (Seems to be limited only to NVIDIA V100 )
* [ ] [DeepSpeed Transformer Kernel](https://www.deepspeed.ai/tutorials/transformer_kernel/) [api](https://deepspeed.readthedocs.io/en/latest/kernel.html)
Irrelevant to `transformers`:
* [ ] [DeepSpeed Activation Checkpointing](https://www.deepspeed.ai/docs/config-json/#activation-checkpointing) and extra discussion [here](https://github.com/microsoft/DeepSpeed/issues/665#issuecomment-760512582) - reduce the activation memory during model parallel training by partitioning activation checkpoints across model parallel GPUs, or offloading them to CPU. Since we don't use DS's PP there is no use for it.
## Experiments
Things to experiment with as well:
* [ ] try to profile model performance with DeepSpeed's `FlopsProfiler`
## Optimizations
* [ ] the new zero3 has a special requirement for inference with `--predict_with_generate` that all gpus run all `forward` calls even if they finished completing the predicted sequence early in `generate` - otherwise other gpus will hang waiting for the one that finished early. So currently the workaround is to simply always run till `max_length` in the `while` loop is reached. Which might be inefficient if we have a lot of short sequences, so need to use a synchronization trick to simultaneously quit the `while` loop when all gpus know it's safe to do so. @samyam posted a proof-of-concept for how to do that:
> We could maybe simplify by doing a single all_reduce, where gpus that are done will use a tensor with 0.0 and those that are not done will use 1.0. If the result of all reduce is 0.0 then everyone can stop, otherwise gpus that are done will do fake forward.
```
sync = torch.tensor(1.0)  # start in the "not done" state so every rank enters the loop
while sync.item() > 0.0:
    p = model.forward(fake_input if am_i_done() else real_input)
    sync = torch.tensor(0.0 if am_i_done() else 1.0)
    torch.distributed.all_reduce(sync)  # sums the flags across ranks; 0.0 means every rank is done
```
At the moment this needs to be done in 5 places in the various search functions that `generate` may call.
For the full context please see: [this thread](https://github.com/microsoft/DeepSpeed/issues/860#issuecomment-799936583).
-------------------
If anybody would like to work on any of these items please open a dedicated issue so it'd be easier to track and please tag @stas00 to it.
| 01-14-2021 22:35:35 | 01-14-2021 22:35:35 | Hi, we noticed the DeepSpeed transformer kernel is much faster than the original PyTorch version with less memory consumption. I would like to know if you have any future plans to integrate the DeepSpeed transformer kernel into Hugging Face.
Thanks!<|||||>Personally my focus at the moment is to enable fitting big models on small hardware, because if we can do such training slowly it's better than not being able to do so.
Next come the speed optimizations.
I added `Deepspeed transformer kernel` to the list above. Thank you for the recommendation.
But if you'd like to do some experimentation and get some good results and submit a PR that would be fantastic. It doesn't have to be perfect, just good enough that it can be seen the speed up improvement the docs are alluding to.<|||||>> Personally my focus at the moment is to enable fitting big models on small hardware, because if we can do such training slowly it's better than not being able to do so.
>
> Next come the speed optimizations.
>
> I added `Deepspeed transformer kernel` to the list above. Thank you for the recommendation.
>
> But if you'd like to do some experimentation and get some good results and submit a PR that would be fantastic. It doesn't have to be perfect, just good enough that it can be seen the speed up improvement the docs are alluding to.
Hi, I did a simple test with the bert-large model. The following are the test results:

<|||||>Thank you for sharing the benchmarks, @gongjingcs
That's a nice speed up.
I assume you also tested deepspeed w/o "Deepspeed transformer kernel" as a baseline, to know that it's that feature that gave the speed up and not DeepSpeed's other features.
I encourage you to try to make a PR to integrate this aspect of Deepspeed if you are inspired to do so.<|||||>Hi @stas00,
Thank you for sharing those awesome topics. Are the features still requested/up-to-date ? I would like to follow the point made by @gongjingcs about the Deepspeed Transformer Kernel. <|||||>Hi Simon,
re: up-to-date, I'm sure Deepspeed came up with new advancements since this was last updated, if that's what you're asking about. And the list in the OP is still outstanding.
So wrt Deepspeed Transformer Kernel. How would you envision us integrating it - i.e. which components of HF transformers do you want? HF models have a lot of features inside the transformer layers, so swapping in a different Transformer block won't work easily. pytorch too has a Transformer block in its arsenal.
In other words, I'm seeking to understand how you see those replacements being used?
Additionally are you after inference or training? For inference we will soon have fast fused kernels via:
https://github.com/huggingface/transformers/pull/14426 and @hyunwoongko has just announced https://github.com/tunib-ai/oslo https://github.com/huggingface/transformers/issues/13690#issuecomment-998492192 which does kernel fusion, though we haven't done any benchmarking yet, but check it out.
Thank you!<|||||>Thank you for your answer @stas00
> re: up-to-date, I'm sure Deepspeed came up with new advancements since this was last updated, if that's what you're asking about. And the list in the OP is still outstanding.
I was looking at the features you provided in the list and wondered if they were still requested or if anyone was already working on it.
> So wrt Deepspeed Transformer Kernel. How would you envision us integrating it - i.e. which components of HF transformers do you want? HF models have a lot of features inside the transformer layers, so swapping in a different Transformer block won't work easily. pytorch too has a Transformer block in its arsenal.
>
> In other words, I'm seeking to understand how you see those replacements being used?
I just finished benchmarking the Transformer Kernel with the models provided in the DeepSpeedExamples repo, so I don't have a clear plan on how to do this yet. I was wondering if we could first do an in-place operation to swap out the Transformer layer in the Trainer so that we can keep the HF component code unchanged while taking advantage of the throughput speed-up and the batch size improvement provided. But I don't know if it will impact other features.
> Additionally are you after inference or training? For inference we will soon have fast fused kernels via:
> #14426 and @hyunwoongko has just announced https://github.com/tunib-ai/oslo #13690 (comment) which does kernel fusion, though we haven't done any benchmarking yet, but check it out.
I have been focusing on training: pre-training and fine-tuning. I haven't looked at the DeepSpeed pre-training yet. OSLO seems really nice; do you think it's still worth looking at the DeepSpeed Transformer Kernel?
Thank you <|||||>The problem is that the weight names will be different and any custom features that HF Transformers model expects will not be provided by an external implementation. You can try to import the "normal" model and then monkeypatching the transformers layer to the deepspeed version and see if you get anywhere with it.
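A rough sketch of what such a monkeypatch could look like (everything below is hypothetical: `MyFusedLayer` is a placeholder for whichever fused layer implementation you want to try, and the attribute path assumes a BERT-like model):
```python
import torch.nn as nn
from transformers import AutoModel

class MyFusedLayer(nn.Module):
    """Hypothetical stand-in for a fused transformer layer (e.g. an external kernel)."""
    def __init__(self, config):
        super().__init__()
        self.config = config  # a real replacement would build its fused kernels here

    def forward(self, hidden_states, *args, **kwargs):
        # HF encoder layers return a tuple whose first element is the hidden states
        return (hidden_states,)

model = AutoModel.from_pretrained("bert-base-uncased")
for i, hf_layer in enumerate(model.encoder.layer):
    fused = MyFusedLayer(model.config)
    # a real swap would also need to map/copy hf_layer's weights into `fused`
    model.encoder.layer[i] = fused
```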
And which architecture are you trying to speed up?
I'm yet to try OSLO myself, so can't give any first hand experience, but since it suggests that it can fuse the model, perhaps it can do much better already than the plain pytorch version. I'd make a request at https://github.com/tunib-ai/oslo to support the arch you want and compare the performance. That would probably be the low hanging fruit.
Then you can also try to compile the model into ONNX as described here https://huggingface.co/docs/transformers/serialization and use one of the optimized runtimes. But I don't have any experience with that tech yet; I'm hoping to fill the gap in the new year.
<|||||>OSLO only fuses certain parts, just like Megatron-LM. (scale+mask+softmax, bias+gelu, bias+dropout) Therefore, it is slower than the fully fusable kernels like DeepSpeed. I also reviewed DeepSpeed's transformer kernel (not the inference kernel), but I gave up because it is a structure that is difficult to apply to various architectures and cannot do tensor model parallelization.<|||||>On the other hand, DeepSpeed inference is a much more scalable structure. It can also perform tensor model parallelization. However, no backward kernel is provided. It would be nice if @RezaYazdaniAminabadi could provide a backward kernels. (If the backward kernels are available, I will also add them to OSLO)<|||||>Note that there are also lightseq kernels by bytedance which improve DeepSpeed transformer kernels.
https://github.com/bytedance/lightseq The speed of the kernels is similar, but various kernels have been added (embedding, cross-entropy, etc...) and It provides a little more flexible Pybind API.<|||||>Hi, @stas00 , could you please confirm that [DeepSpeed Activation Checkpointing] is working properly?
I was seeing some issues with activation partitioning feature (I need it to reduce activation memory usage)
Also, where are the code changes located for this feature?
Thanks!<|||||>we currently don't use Deepspeed's Activation Checkpointing as it'd be very difficult to integrate into transformers (it'd require massively changing all models). The normal pytorch activation available in most models works just fine. To activate it use this API:
https://huggingface.co/docs/transformers/main_classes/model#transformers.PreTrainedModel.gradient_checkpointing_enable
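For example (a minimal sketch, assuming a recent transformers version and a model that supports gradient checkpointing):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.gradient_checkpointing_enable()  # trades extra compute for lower activation memory
model.train()
```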
Deepspeed's Activation Checkpointing however has additional features that pytorch implementation lacks.
|
transformers | 9,605 | closed | New run_seq2seq script | # What does this PR do?
This PR adds a new version of `finetune_trainer` using datasets and completely self-contained (not using anything in the `utils` module or any other python script of the seq2seq folder). I renamed a few args from the old script, mainly:
- `n_train` -> `max_train_samples`
- `n_val` -> `max_val_samples`
- `src_lang` -> `source_lang`
- `tgt_lang` -> `target_lang`
because they were really too short and uninformative. I didn't touch the other ones for backward compatibility (but since the name of the script will change, we can change more if we feel like it). In any case, the way the main dataset arguments are passed is a breaking change compared to the old script.
The following are the features from the old script not implemented yet and will follow either in this PR or in follow-up PRs:
- [x] Add a small test on some dummy data (Do not merge before this one is ticked)
- [ ] Ability to freeze the encoder / embeddings
- [ ] Pass a test set for predictions | 01-14-2021 22:29:15 | 01-14-2021 22:29:15 | Could we discuss the naming of this script and others?
The description goes:
> Fine-tuning the library models for sequence to sequence.
`run_seq2seq.py` is much less descriptive or intuitive than `finetune_trainer.py` - why not go back to `finetune.py` to replace the script that was PL-based and moved to experiments?
1. If we are cleaning up the naming, just as well we could drop any `run_` prefixes that we now have in many `examples/*/run_*.py` - they are all scripts, they all get to **run**. The names are great when they are focused on their purpose and not how they are executed.
2. This script is already inside `seq2seq` - So in `examples/seq2seq/run_seq2seq.py` - how does it help to repeat it twice? I can see where you'd want to uniquely identify each script if they are taken out of context of their `examples/*/` subdir - perhaps this is the intention? perhaps if you open them all in the editor and end up with 10 `finetune.py`? If that's the case, then I can see your point of repeating the "domain" in the name of the script.
If the 2nd item is trying to solve the uniqueness issue, then repetition works just fine, but I strongly recommend replacing `run_` with `finetune_` to at the very least have some mnemonics about what it does.
<|||||>Also, since you will need to update README.md to show users how to run the new script - could we have some of it in this PR? Even just the basic command lines - that would help testing this PR and not needing to figure out the new args?
If possible that is?
Thank you!<|||||>The scripts are all named `run_xxx` precisely for reason 2, the same way we didn't rename `modeling_xxx` files to just `modeling.py` when restructuring the repo. I have no strong objection to changing `run` to `finetune` but it will break lots of links in the documentation and may confuse users, so not sure if it's worth it. I'll let @LysandreJik and @patrickvonplaten chime in on that subject.
I'll add command examples in the README (this PR is not quite ready to be merged yet, there is also the small test to add), first I wanted to grab comments on the actual script before finishing :-) I don't expect it to work fully (though if it's magically the case I'll be happy :-) ) which is why this PR does not delete the old script, so we can make some tests and make sure there is no regression, then progressively fix this new script as needed.<|||||>> The scripts are all named run_xxx precisely for reason 2 [...]
I understand. Thank you for clarifying that part. Easy editing is a strong pro for sure.
Thinking more about it perhaps `finetune` isn't the right name either because it does finetuning plus prediction, so perhaps `run` actually is somewhat of a better choice, as it's less committing to anything ;)
I think what I'm experiencing here is the pain of pattern breaking. First I was using "finetune.py", then I switched to "finetune_trainer.py" and now "run_seq2seq.py" - say, what? :)
> I'll add command examples in the README [...]
I'd like to contribute with the review, but I need context to do such things and there is neither diff nor a way to run it, I'm just not sure how to approach such type of review. So perhaps I will be able to do that at a later stage when I can test the new script, or if you'd like me to look at a particular part of it I'm game too.<|||||>I am merging as a first step. @stas00 I know it's missing examples of use and that there still is the memory regression, I plan to address those in follow-up PRs (also anyone should feel free to suggest improvements to the new scripts).<|||||>It's a good plan, @sgugger! I know you won't forget these. Thank you for considering my concerns.
The only thing I am not sure about is that nobody commented on the new script's naming. |
transformers | 9,604 | closed | Mistake in the "Summary of the tasks" article | Two first points of the translation process are duplicating two first points of summarization process:
[https://huggingface.co/transformers/task_summary.html#translation](https://huggingface.co/transformers/task_summary.html#translation)
> 1. Instantiate a tokenizer and a model from the checkpoint name. Summarization is usually done using an encoder-decoder model, such as Bart or T5.
> 2. Define the article that should be summarized. | 01-14-2021 19:27:01 | 01-14-2021 19:27:01 | Indeed! Do you want to open a PR with a doc fix?<|||||>I would like to, but I couldn't find where that exact doc in the repo is<|||||>Also, a built-in tool for pointing out mistakes in the docs would be usable (like those when you highlight an error and press Ctrl+Enter). I notice a few mistakes and typos from time to time.
I am speaking not about the docs themselves, but about the guides and tutorials which are more community-oriented<|||||>> Indeed! Do you want to open a PR with a doc fix?
@LysandreJik, could you please point out the doc I could fix?<|||||>Here it is: https://github.com/huggingface/transformers/blob/master/docs/source/task_summary.rst<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,603 | closed | TypeError: on_init_end() got an unexpected keyword argument 'model' | ## Environment info
- `transformers` version: 4.0.0
- Python 3.6.10
- Pytorch version: 1.6.0
- pytorch-lightning version: 1.0.3
I'm using aws_neuron_pytorch_p36 virtual environment (on p3 ec2 instance). Regarding pytorch-lightning version, the above version is the highest one I can currently use (higher versions are not supported in my framework)
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using RobertaForSequenceClassification
## To reproduce
The code I'm running:
```
from transformers import RobertaForSequenceClassification
from transformers import Trainer, TrainingArguments
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning import metrics
model = RobertaForSequenceClassification.from_pretrained(
self.training_configuration.hyper_params.pretrained_model_path, num_labels=2)
model_checkpoint = ModelCheckpoint(filepath = model_path,
verbose=1,
save_top_k=1,
save_weights_only=True,
monitor=self.training_configuration.monitor,
mode=self.training_configuration.monitor_mode,
period=1)
early_stopping = EarlyStopping(monitor=self.training_configuration.monitor,
patience=self.training_configuration.patience,
mode=self.training_configuration.monitor_mode)
training_args = TrainingArguments(
output_dir=os.path.dirname(model_path), # output directory
evaluation_strategy="epoch", # Evaluation is done at the end of each epoch.
num_train_epochs=self.training_configuration.epoch, # total number of training epochs
per_device_train_batch_size=self.training_configuration.batch_size, # batch size per device during training
per_device_eval_batch_size=self.training_configuration.batch_size, # batch size for evaluation
warmup_steps=warmup_steps, # number of warmup steps for learning rate scheduler
weight_decay=self.training_configuration.hyper_params.weight_decay, # strength of weight decay
save_total_limit=1, # limit the total amount of checkpoints. Deletes the older checkpoints.
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=training_data, # training dataset
eval_dataset=validation_data, # evaluation dataset
callbacks=[early_stopping, model_checkpoint],
compute_metrics = metrics.classification.Accuracy()
)
trainer.train()
```
The error I'm getting:
```
compute_metrics = metrics.classification.Accuracy()
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py", line 305, in __init__
self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer_callback.py", line 331, in on_init_end
return self.call_event("on_init_end", args, state, control)
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer_callback.py", line 382, in call_event
**kwargs,
TypeError: on_init_end() got an unexpected keyword argument 'model'
```
| 01-14-2021 18:55:51 | 01-14-2021 18:55:51 | You are using a pytorch lightning callback instead of a Hugging Face `TrainerCallback`, I'm unsure of why you would think this will work. If you want to use pytorch lightning, you will have to use their `Trainer` as well.<|||||>thanks @sgugger . The reason I used a pytorch lightning callback is because I couldn't find in transformers something that saves only the best checkpoint. Is there something like that? (which I can use instead of pytorch-lightning's ModelCheckpoint)<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>> You are using a pytorch lightning callback instead of a Hugging Face `TrainerCallback`, I'm unsure of why you would think this will work. If you want to use pytorch lightning, you will have to use their `Trainer` as well.
Super Helpful |
transformers | 9,602 | closed | TypeError: on_init_end() got an unexpected keyword argument 'model' | ## Environment info
- `transformers` version: 4.0.0
- Python 3.6.10
- Python version: 1.6.0
I'm using aws_neuron_pytorch_p36 virtual environment (on p3 ec2 instance)
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using RobertaForSequenceClassification
## To reproduce
The code I'm running:
```
from transformers import RobertaForSequenceClassification
from transformers import Trainer, TrainingArguments
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning import metrics
model = RobertaForSequenceClassification.from_pretrained(
self.training_configuration.hyper_params.pretrained_model_path, num_labels=2)
model_checkpoint = ModelCheckpoint(filepath = model_path,
verbose=1,
save_top_k=1,
save_weights_only=True,
monitor=self.training_configuration.monitor,
mode=self.training_configuration.monitor_mode,
period=1)
early_stopping = EarlyStopping(monitor=self.training_configuration.monitor,
patience=self.training_configuration.patience,
mode=self.training_configuration.monitor_mode)
training_args = TrainingArguments(
output_dir=os.path.dirname(model_path), # output directory
evaluation_strategy="epoch", # Evaluation is done at the end of each epoch.
num_train_epochs=self.training_configuration.epoch, # total number of training epochs
per_device_train_batch_size=self.training_configuration.batch_size, # batch size per device during training
per_device_eval_batch_size=self.training_configuration.batch_size, # batch size for evaluation
warmup_steps=warmup_steps, # number of warmup steps for learning rate scheduler
weight_decay=self.training_configuration.hyper_params.weight_decay, # strength of weight decay
save_total_limit=1, # limit the total amount of checkpoints. Deletes the older checkpoints.
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=training_data, # training dataset
eval_dataset=validation_data, # evaluation dataset
callbacks=[early_stopping, model_checkpoint],
compute_metrics = metrics.classification.Accuracy()
)
trainer.train()
```
The error I'm getting:
```
compute_metrics = metrics.classification.Accuracy()
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py", line 305, in __init__
self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer_callback.py", line 331, in on_init_end
return self.call_event("on_init_end", args, state, control)
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer_callback.py", line 382, in call_event
**kwargs,
TypeError: on_init_end() got an unexpected keyword argument 'model'
```
| 01-14-2021 18:54:52 | 01-14-2021 18:54:52 | |
transformers | 9,601 | closed | [TF Led] Fix wrong decoder attention mask behavior | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes TF LED. I wrongly added some lines to TFLed that automatically change the attention mask. However, this is incorrect behavior and not present in the PT version of the model. Sadly, I discovered this now after the release yesterday. @LysandreJik do you think we can patch this fix to circumvent breaking backward compatibility (but it's a bug IMO anyway).
This consequently also fixes the flaky `let_pt_tf_equivalence` test. I ran the test 40 times and it does not fail anymore.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-14-2021 17:32:25 | 01-14-2021 17:32:25 | |
transformers | 9,600 | closed | Speed up RepetitionPenaltyLogitsProcessor (pytorch) | # What does this PR do?
Speeds up RepetitionPenaltyLogitsProcessor using torch gather-scatter functions. Tested on pytorch 1.4.0.
Here's a minimal example to reproduce the slow behavior (and test speed of improvements):
```
import torch
from transformers import RepetitionPenaltyLogitsProcessor, LogitsProcessor
import timeit
import sys
class RepetitionPenaltyLogitsProcessorNew(LogitsProcessor):
r"""
:class:`transformers.LogitsProcessor` enforcing an exponential penalty on repeated sequences.
Args:
repetition_penalty (:obj:`float`):
The parameter for repetition penalty. 1.0 means no penalty. See `this paper
<https://arxiv.org/pdf/1909.05858.pdf>`__ for more details.
"""
def __init__(self, penalty: float):
if not isinstance(penalty, float) or not (penalty > 0):
raise ValueError(f"`penalty` has to be a strictly positive float, but is {penalty}")
self.penalty = penalty
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
score = torch.gather(scores, 1, input_ids) # changed here
# if score < 0 then repetition penalty has to be multiplied to reduce the previous token probability
score = torch.where(score < 0, score * self.penalty, score / self.penalty)
scores.scatter_(1, input_ids, score) # changed here
return scores
input_ids = torch.randint(0, 10000, (256, 256))
scores = torch.randn(256, 10000)
rep_proc = RepetitionPenaltyLogitsProcessor(1.3)
rep_proc_new = RepetitionPenaltyLogitsProcessorNew(1.3)
assert torch.eq(rep_proc(input_ids, scores), rep_proc_new(input_ids, scores)).all().item(), "Should be equal"
print("Python version:", sys.version)
print("Pytorch version:", torch.__version__, "\n")
print(f"Existing rep_proc impl time for 100 iterations on CPU = {timeit.timeit(lambda: rep_proc(input_ids, scores), number=100)}")
print(f"Proposed rep_proc impl time for 100 iterations on CPU = {timeit.timeit(lambda: rep_proc_new(input_ids, scores), number=100)}\n")
if torch.cuda.is_available():
input_ids = input_ids.cuda()
scores = scores.cuda()
print(f"Existing rep_proc impl time for 100 iterations on GPU = {timeit.timeit(lambda: rep_proc(input_ids, scores), number=100)}")
print(f"Proposed rep_proc impl time for 100 iterations on GPU = {timeit.timeit(lambda: rep_proc_new(input_ids, scores), number=100)}")
```
Timings reported:
```
Python version: 3.7.9 (default, Aug 31 2020, 12:42:55)
[GCC 7.3.0]
Pytorch version: 1.4.0
Existing rep_proc impl time for 100 iterations on CPU = 0.0807734300001357
Proposed rep_proc impl time for 100 iterations on CPU = 0.044223628000054305
Existing rep_proc impl time for 100 iterations on GPU = 0.017542457000217837
Proposed rep_proc impl time for 100 iterations on GPU = 0.00720681400025569
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik, @patrickvonplaten
| 01-14-2021 17:15:33 | 01-14-2021 17:15:33 | |
transformers | 9,599 | closed | saving the model during run_mlm | Hi friends-
I am trying to train a Roberta on a large corpus with a server with time limitation.
Is there any way to save the model, say every 3000 steps, to keep a record of the training and resume it later?
Really need it with the project…Thanks for helping. | 01-14-2021 15:42:41 | 01-14-2021 15:42:41 | Please avoid spamming the repository with multiple duplicate issues.
Also, those questions should go in the [forums](https://discuss.huggingface.co/), the issues are kept for bugs and feature requests only.<|||||>sorry...I created two streams by mistake.
|
transformers | 9,598 | closed | saving the model during run_mlm.py | Hi friends-
I am trying to train a Roberta on a large corpus with a server with time limitation.
Is there any way to save the model, say every 3000 steps, to keep a record of the training and resume it later?
Really need it with the project…Thanks for helping.
| 01-14-2021 15:42:19 | 01-14-2021 15:42:19 | Please avoid spamming the repository with multiple duplicate issues.
Also, those questions should go in the [forums](https://discuss.huggingface.co/), the issues are kept for bugs and feature requests only. |
transformers | 9,597 | closed | [Model Exporting] How to export a fine tuned model to a single pytorch or tensorflow model file? | Apologies if this is a very basic question, but I just cant seem to find any help or documentation for this online.
I want to use google cloud to generate text from a trained model and the maximum size for a model there is ``500MB``. Currently when finetuning a model the checkpoints folder has the ``model.bin`` file and an ``optimizer.pt`` file. These both are used when loading from pretrained.
Even when using ``distilgpt2`` the combined size of this folder is ~900MB. How do I export this model at its actual documented size of ~400MB? I assume the ``optimizer.pt`` file is the weights.
So please, can someone help: how do I export a checkpoint to either a tensorflow model or a pytorch model that I can then use to generate text?
I know the latest release 4.2.0 has the function ``model.save_pretrained()``, but I am using ``transformers==2.8.0`` can a model fine tuned using ``2.8.0`` be exported using the new function?
Thanks | 01-14-2021 15:38:46 | 01-14-2021 15:38:46 | _Note:_ that'd be a better question for the forums at discuss.huggingface.co
The `optimizer.pt` is a snapshot of the optimizer's internal state, for inference you can delete it and only keep your `model.bin` (= the weights)<|||||>Thank you, that is very helpful and for directing me to the forums, I did not even know it existed :)
As regards to my other question, is it possible to do the fine tuning in tensorflow? or even export to a tensorflow model?
or would this discussion be better suited to the forum?
Thanks |
transformers | 9,596 | closed | Update `past_key_values` in GPT-2 | # What does this PR do?
It seems GPT-2 and BartDecoder have different styles of `past_key_values`.
Advised by @patrickvonplaten,
I opened this PR to change GPT-2's cache format from a single tensor to a tuple of 2 tensors.
Once this problem is solved, it is expected that `past_key_values` in GPT-2 will be handled in the same way as in Bart.
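For reference, a quick way to inspect the cache format (the shapes in the comments are my own assumption about the two layouts, not taken from this PR):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello there", return_tensors="pt")
outputs = model(**inputs, use_cache=True)

# old GPT-2 layout: each element is one stacked tensor, roughly
#   (2, batch, num_heads, seq_len, head_dim)
# Bart-style layout targeted here: each element is a (key, value) tuple,
#   each roughly (batch, num_heads, seq_len, head_dim)
print(type(outputs.past_key_values[0]), len(outputs.past_key_values))
```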
Sorry there remain some errors. This PR is [WIP].
I would appreciate your advice on how to update `generation_utils.py`.
Can I modify `_reorder_cache` so that past is replaced from Tuple[torch.Tensor] to Tuple[Tuple[torch.Tensor]],
or should I consider other output variations, output.mem and outputs.past_buckets_states?
Fixes #9391
From patrickvonplaten:
This PR cleans up the `_reorder_cache` logic. Now `_reorder_cache` defaults to raising a `NotImplementedError` in `generation_utils.py`, forcing the model to implement its corresponding `_reorder_cache` in the `modeling_...py` file itself. This is cleaner as `_reorder_cache` strongly differs from model to model. In addition, this PR makes sure that `gradient_checkpointing` can only be used if the model is in training mode, and makes sure that `use_cache` is disabled when training with `gradient_checkpointing` enabled, to prevent errors.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
GPT2: @LysandreJik, @patrickvonplaten
| 01-14-2021 14:54:53 | 01-14-2021 14:54:53 | CircleCI error messages says as below.
In `run_tests_torch`:
```
-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_sample_generate
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate_dict_outputs_use_cache
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_group_beam_search_generate
==== 5 failed, 4202 passed, 1775 skipped, 744 warnings in 216.47s (0:03:36) ====
Exited with code exit status 1
CircleCI received exit code 1
```
In `run_tests_flax`:
```
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_sample_generate
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate_dict_outputs_use_cache
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_group_beam_search_generate
==== 5 failed, 4172 passed, 1805 skipped, 751 warnings in 282.27s (0:04:42) ====
Exited with code exit status 1
CircleCI received exit code 1
```
<|||||>Is there a difference between `past_key_value` and `layer_past`? I understand that they both represent the contents of `past_key_values`, the past of each layer, but are they different?
I first thought it might be a difference between the Causal language model and the Seq2Seq language model, but it seems that both `past_key_value` and `layer_past` are used in `modeling_bart.py`.
And as for the contents of `layer_past`, should it be named `past_state`, as the following part of `modeling_bart.py` shows?
https://github.com/huggingface/transformers/blob/236cc365aff2512ef773c6b1786555dab6fb182f/src/transformers/models/bart/modeling_bart.py#L1236-L1244<|||||>I've updated `generation_utils.py`, and it seems `mems` in transfo_xl and xlnet causes a new error.
```
=========================== short test summary info ============================
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing
FAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_beam_sample_generate
FAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_beam_sample_generate_dict_output
FAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_beam_search_generate
FAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_beam_search_generate_dict_output
FAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_group_beam_search_generate
FAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_group_beam_search_generate_dict_output
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_beam_sample_generate
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_beam_sample_generate_dict_output
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_beam_search_generate
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_beam_search_generate_dict_output
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_group_beam_search_generate
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_group_beam_search_generate_dict_output
=== 13 failed, 4194 passed, 1775 skipped, 743 warnings in 205.38s (0:03:25) ====
Exited with code exit status 1
CircleCI received exit code 1
```
https://github.com/huggingface/transformers/blob/236cc365aff2512ef773c6b1786555dab6fb182f/src/transformers/models/xlnet/modeling_xlnet.py#L581-L607
It seems `mems` is something similar to `past_key_values`.
Is there any difference between these two elements with different names?
Also, is it safe to change `mems` from `List[torch.Tensor]` to `Tuple[Tuple[torch.Tensor]]`?<|||||>Hey @forest1988,
Your PR looks very nice! Yes, it is expected that `XLNet` and `TransfoXL` actually fail, since they have also been using the "default" `_reorder_cache` function of `modeling_utils.py`. Could you make the following changes to correct this:
1) Copy that old `_reorder_cache` (the one before you did your changes) function that was in `generation_utils.py` to both `modeling_xlnet.py` and `modeling_transfo_xl.py` file so that those have the same function as before?
2) Copy the current `_reorder_cache` function of `generation_utils.py` into `modeling_gpt2.py`?
3) Add a default `_reorder_cache` function to `generation_utils.py` that looks as follows:
```python
def _reorder_cache(self, past, beam_idx):
raise NotImplementedError(...)
```<|||||>I've just updated the `torch.utils.checkpoint.checkpoint` check in `modeling_gpt2.py`, referring to `modeling_bart.py`.<|||||>This way it's much cleaner and correct :-) The reason I'm proposing this change is that the `_reorder_cache` function is so different for each model that there should be **no** default function. A default function could mislead people who want to add a new model into thinking it works out of the box, when in most cases it just doesn't. A clear error message such as:
```python
def _reorder_cache(self, past, beam_idx):
raise NotImplementedError(f"Make sure that a `_reorder_cache` function is correctly implemented in {self.__class__.__module__} to enable beam search for {self.__class__}")
```
<|||||>I think this should solve the problems, let me know if you need more help :-) <|||||>Thank you for your advice! I'll update `_reorder_cache` soon and commit it.<|||||>Hi @patrickvonplaten,
Thanks to your kind advice, I could solve the problem of `_reorder_cache` in `GPT-2`, `XLNet`, `TransfoXL` (and `CTRL`).
Referring to `modeling_bart.py`, in which `_reorder_cache` is placed in the `ConditionalGeneration` model, I added `_reorder_cache` to the `LMHead` model of each causal language model.
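For reference, the GPT-2-style `_reorder_cache` I added looks roughly like this (a sketch of the pattern, not the verbatim file):
```python
@staticmethod
def _reorder_cache(past, beam_idx):
    # `past` is a tuple (one entry per layer) of tuples of tensors whose first
    # dimension is the batch; re-order that dimension according to `beam_idx`.
    return tuple(
        tuple(past_state.index_select(0, beam_idx) for past_state in layer_past)
        for layer_past in past
    )
```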
The last remaining failing test is:
```
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing
```
I think I should modify `test_gpt2_gradient_checkpointing` so that it has `use_cache=False`, or reconsider my previous update and re-modify the usage of `checkpoint` in modeling_gpt2.
> I've just updated torch.utils.checkpoint.checkpoint check in modeling_gpt2.py, referring to modeling_bart.py.
>
<|||||>All checks have passed!
I appreciate all your help.
However, in the documentation of `_reorder_cache`, there are references to both `past_key_values` and `mems` regardless of which object is used.
I think we can fix that and only mention the one we use, or we can leave the reference to both to show that the aim of the function is the same.
If there is a need to modify it, please let me know.
<|||||>Hi @patrickvonplaten,
> I hope it's fine for you that I went into the PR to do some final fixes. Thanks a lot for cleaning this up :-)
Of course! Thank you for adding fixes to make this PR more valuable!<|||||>Awesome, merging - great job @forest1988 !<|||||>Thank you for your advice and encouraging comments!
It’s my pleasure to have opened this PR! |
transformers | 9,595 | closed | Order of inputs (difference between doc and output) | Hey,
when using a dictionary as model input, does the order matter? E.g.:
`model({"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": attention_mask})
`
and
`model({"input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids})
`
When using the tokenizer, I get an order different from the docstring and argument order:
```
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
input_query = tokenizer(input_query,max_length=MAX_SEQ_lEN,padding="max_length",truncation=True,return_tensors="tf")
-> {"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": attention_mask}
```
v3.4
| 01-14-2021 14:29:15 | 01-14-2021 14:29:15 | Hi! Order doesn’t matter in a dictionary.
It only matters if you use the arguments as positional arguments, which is not recommended.<|||||>@LysandreJik
So, order does matter when using lists? What is now the right order?
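For illustration, a minimal sketch (hypothetical tensors, not from this thread): dictionary order is irrelevant, while positional arguments (and plain lists of tensors) are matched to the call signature by position:
```python
# Equivalent calls -- dict order does not matter:
out1 = model({"input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids})
out2 = model({"token_type_ids": token_type_ids, "input_ids": input_ids, "attention_mask": attention_mask})

# Positional usage must follow the signature order (input_ids, attention_mask, token_type_ids),
# which is why keyword/dict usage is recommended:
# out3 = model(input_ids, attention_mask, token_type_ids)
```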
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,594 | closed | why set masked_bias as -10000 in GPT2 | ## Information
`masked_bias` is set as `-10000` in GPT2, why not `-inf`?
https://github.com/huggingface/transformers/blob/e43f3b6190cfd98a38912411b8bc8ecbb6629280/src/transformers/models/gpt2/modeling_gpt2.py#L133
## openai/gpt-2
In [openai/gpt2](https://github.com/openai/gpt-2/blob/a74da5d99abaaba920de8131d64da2862a8f213b/src/model.py#L88), the bias is set as `-1e10`
```py
w = w*b - tf.cast(1e10, w.dtype)*(1-b)
```
## Other implementation, such as bert, transformer
https://github.com/huggingface/transformers/blob/82498cbc37d5c15520c7bddde5d804c804eee498/src/transformers/models/bart/modeling_bart.py#L81
| 01-14-2021 14:20:19 | 01-14-2021 14:20:19 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,593 | closed | Difference in decoded strings between a tokenizer and the corresponding fast tokenizer | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-4.15.0-130-generic-x86_64-with-debian-10.5
- Python version: 3.7.8
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
## Information
I want to feed a word-based sequence to a tokenizer and get a word-based output decoded from logits.
To leave spaces before punctuation marks, I specified `tokenizer.decode(ids, clean_up_tokenization_spaces=False)`, but a fast tokenizer removes such spaces while the corresponding non-fast tokenizer preserves them.
## To reproduce
```py
from transformers import BertTokenizer, BertTokenizerFast
seq = ['Cheerfully', ',', 'Hello', 'World', '!']
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
ids = tokenizer(seq, is_split_into_words=True).input_ids
print(ids) # => [101, 20394, 8284, 5834, 117, 8667, 1291, 106, 102]
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False)) # => [CLS] Cheerfully , Hello World ! [SEP]
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
ids = tokenizer(seq, is_split_into_words=True).input_ids
print(ids) # => [101, 20394, 8284, 5834, 117, 8667, 1291, 106, 102]
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False)) # => [CLS] Cheerfully, Hello World! [SEP]
```
This happens because the underlying tokenizer ([huggingface/tokenizers](https://github.com/huggingface/tokenizers/)) removes them at the [transformers/tokenization_utils_fast.py#L495](https://github.com/huggingface/transformers/blob/v4.2.0/src/transformers/tokenization_utils_fast.py#L495), whether `clean_up_tokenization_spaces` is `True` or `False`.
To avoid this issue, I tried to use `tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(ids))`, but this also did not work.
## Expected behavior
A tokenizer and its corresponding fast tokenizer must return the same decoded string.
| 01-14-2021 12:53:08 | 01-14-2021 12:53:08 | For the `WordPiece` decoder, which is used in `BertTokenizerFast`, it seems that `cleanup` cannot be changed after initialization.
https://github.com/huggingface/tokenizers/blob/python-v0.10.0/tokenizers/src/tokenizer/mod.rs#L762
https://github.com/huggingface/tokenizers/blob/python-v0.10.0/tokenizers/src/decoders/wordpiece.rs#L35
https://github.com/huggingface/tokenizers/blob/python-v0.10.0/bindings/python/py_src/tokenizers/decoders/__init__.pyi#L113
https://github.com/huggingface/transformers/blob/v4.2.0/src/transformers/convert_slow_tokenizer.py#L106
I confirmed that a tokenizer and the fast tokenizer return the same string when they are based on SentencePiece because it treats whitespace as a symbol and can reconstruct the original sentence.
So when specifying `clean_up_tokenization_spaces=False`, spaces before punctuation depend on `ids`, but there are no differences in the decoded string between a tokenizer (e.g. `T5Tokenizer`) and the fast tokenizer (e.g. `T5TokenizerFast`).<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,592 | closed | disable message "Some layers from the model checkpoint ..." | I wonder how can I disable this message? v3.4
```
Some layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForSequenceClassification: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['dropout_113', 'classifier']
```
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. | 01-14-2021 11:53:50 | 01-14-2021 11:53:50 | You can change the logging level:
```py
from transformers import logging as hf_logging
hf_logging.set_verbosity_error()
``` |
transformers | 9,591 | closed | disable message "Some layers from the model checkpoint at bert-base-cased were not used when initializing" | I wonder how can I disable this message? v3.4
```
Some layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForSequenceClassification: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['dropout_113', 'classifier']
```
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. | 01-14-2021 11:53:12 | 01-14-2021 11:53:12 | You can change the logging level:
```
from transformers import logging as hf_logging
hf_logging.set_verbosity_error()
``` |
transformers | 9,590 | closed | WARNING:tensorflow:AutoGraph | Since v4.2 I get those strange outputs while finetuning a TFBert Model:
Using
`bert_model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')`
```
WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb902ce88d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: <cyfunction Socket.send at 0x7fb920685d90> is not a module, class, method, function, traceback, frame, or code object
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb902ce88d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: <cyfunction Socket.send at 0x7fb920685d90> is not a module, class, method, function, traceback, frame, or code object
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`
```
I saw that the outputs are now different with return_dict=True in the new version. But I can still use model.predict (using TFBert within a Keras model) to get the scores (it seems to work, though)? So I wonder how this relates to returning a dict when just using the model output via `model(input)`. Does using TFBert with predict still give the old behaviour?
I still get the normal TFSequenceClassifierOutput with the TFBertForSequenceClassification model? What exactly changed in v4?
Would the training results actually be different with version 4?
Also, how can I disable the above messages?
| 01-14-2021 10:56:47 | 01-14-2021 10:56:47 | Hello !
You can safely ignore those warnings, no worries.<|||||>@jplu Thanks. Might the results of training a model be different in the new version (e.g. because of the new kind of tokenizer)?
But "why" do I get the same output as in v3, since the doc states that the output structure somehow changed and you cannot do unpacking like
`a, b, c = model(input)
`
But it still works.
How can I ignore (disable) these messages? I tried a lot, but nothing worked!
<|||||>In graph mode you cannot get tuples anymore, the dict output is forced, and you cannot disable this message for now. This will be possible in a future release, as the message will be displayed only when you set `output_attentions`, `output_hidden_states` or `return_dict` yourself in the method call while running your model in graph mode.<|||||>@jplu Thanks. What do you mean by graph mode? As stated above, I still get the tuples as output?<|||||>You are not getting tuples; by doing:
```
a, b, c = model(input)
```
You are getting the keys of the dict.
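A minimal illustration of that point (a sketch; the key names depend on the model and inputs):
```python
outputs = model(inputs)        # a dict-like ModelOutput in eager mode
print(list(outputs))           # iterating yields the keys, e.g. ['logits']
logits = outputs["logits"]     # access the values explicitly (or outputs.logits)
```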
By graph mode, I mean TensorFlow graph mode, and not eager mode.<|||||>Yes, eager mode is activated by default, and no, you don't get tuples; you get a dict because `return_dict` is set to `True` in all the configs by default.<|||||>OK, but then I am confused why the above unpacking worked, although return_dict is true?<|||||>Because you can unpack a dict, and you get the keys of the dict.<|||||>This issue has been stale for 1 month. |
transformers | 9,589 | closed | Fix conda build | Conda build started failing when using `conda build`, using `conda-build` fixed this issue. | 01-14-2021 10:51:32 | 01-14-2021 10:51:32 | |
transformers | 9,588 | closed | Longformer version of RoBERTa error | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Longformer/Reformer: @patrickvonplaten
## Information
Model I am using script to initialize Longformer starting from [HerBERT](https://huggingface.co/allegro/herbert-klej-cased-v1)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install dependencies: `python3 -m pip install -r requirements.txt`.
2. Install apex according to [official documentation](https://github.com/NVIDIA/apex).
3. Run command `CUDA_VISIBLE_DEVICES=0 python3 convert_model_to_longformer.py --finetune_dataset conllu`.
We are using a dataset in `.jsonl` format; each line contains one CoNLL-U entry. It is converted to line-by-line format using a custom `LineByLineTextDataset` class taken from the current version of `transformers`. I've added this class to be able to use it in the older version (v3.0.2).
Following the author's suggestion on [allenai/longformer](https://github.com/allenai/longformer), I've used `transformers` version `3.0.2` and it works fine. But I would like to use recent models and convert them to a Long* version, and I can't make the conversion script work.
## Result
As a result of running command above with `transformers` in version `4.2.0` I've got:
```bash
Traceback (most recent call last):
File "convert_model_to_longformer.py", line 277, in <module>
pretrain_and_evaluate(
File "convert_model_to_longformer.py", line 165, in pretrain_and_evaluate
eval_loss = trainer.evaluate()
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1442, in evaluate
output = self.prediction_loop(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1566, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1670, in prediction_step
outputs = model(**inputs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 1032, in forward
outputs = self.roberta(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 798, in forward
encoder_outputs = self.encoder(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 498, in forward
layer_outputs = layer_module(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 393, in forward
self_attention_outputs = self.attention(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 321, in forward
self_outputs = self.self(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "convert_model_to_longformer.py", line 63, in forward
return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions) # v4.2.0
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 600, in forward
diagonal_mask = self._sliding_chunks_query_key_matmul(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 789, in _sliding_chunks_query_key_matmul
batch_size, seq_len, num_heads, head_dim = query.size()
ValueError: too many values to unpack (expected 4)
```
I've changed function `/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py` up to line 789:
```bash
def forward(
self,
hidden_states,
attention_mask=None,
is_index_masked=None,
is_index_global_attn=None,
is_global_attn=None,
output_attentions=False,
):
"""
:class:`LongformerSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to
`attention_window` happens in :meth:`LongformerModel.forward` to avoid redoing the padding on each layer.
The `attention_mask` is changed in :meth:`LongformerModel.forward` from 0, 1, 2 to:
* -10000: no attention
* 0: local attention
* +10000: global attention
"""
hidden_states = hidden_states.transpose(0, 1)
# project hidden states
query_vectors = self.query(hidden_states)
key_vectors = self.key(hidden_states)
value_vectors = self.value(hidden_states)
print(f"query_vectors: {query_vectors.shape}")
print(f"key_vectors: {key_vectors.shape}")
print(f"value_vectors: {value_vectors.shape}")
print(f"attention_mask: {attention_mask.shape}")
seq_len, batch_size, embed_dim = hidden_states.size()
assert (
embed_dim == self.embed_dim
), f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}"
# normalize query
query_vectors /= math.sqrt(self.head_dim)
query_vectors = query_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)
key_vectors = key_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)
attn_scores = self._sliding_chunks_query_key_matmul(
query_vectors, key_vectors, self.one_sided_attn_window_size
)
# values to pad for attention probs
remove_from_windowed_attention_mask = (attention_mask != 0)[:, :, None, None]
# cast to fp32/fp16 then replace 1's with -inf
float_mask = remove_from_windowed_attention_mask.type_as(query_vectors).masked_fill(
remove_from_windowed_attention_mask, -10000.0
)
print(f"attn_scores: {attn_scores.shape}")
print(f"remove_from_windowed_attention_mask: {remove_from_windowed_attention_mask.shape}")
print(f"float_mask: {float_mask.shape}")
# diagonal mask with zeros everywhere and -inf inplace of padding
diagonal_mask = self._sliding_chunks_query_key_matmul(
float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size
)
```
And as a result I've got:
```bash
attention_mask: torch.Size([2, 1, 1, 1024])
query_vectors: torch.Size([1024, 2, 768])
key_vectors: torch.Size([1024, 2, 768])
value_vectors: torch.Size([1024, 2, 768])
attn_scores: torch.Size([2, 1024, 12, 513])
remove_from_windowed_attention_mask: torch.Size([2, 1, 1, 1, 1, 1024])
float_mask: torch.Size([2, 1, 1, 1, 1, 1024])
```
And after changing version to `3.0.2` and adding print statements I've got:
```bash
attention_mask: torch.Size([2, 1024])
query_vectors: torch.Size([1024, 2, 768])
key_vectors: torch.Size([1024, 2, 768])
value_vectors: torch.Size([1024, 2, 768])
attn_scores: torch.Size([2, 1024, 12, 513])
remove_from_windowed_attention_mask: torch.Size([2, 1024, 1, 1])
float_mask: torch.Size([2, 1024, 1, 1])
```
So maybe it's a problem with the `_sliding_chunks_query_key_matmul` function?
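(A note on the shapes printed above: in 4.2.0 the mask arrives as the extended 4D tensor `[2, 1, 1, 1024]`, while in 3.0.2 it was the 2D `[2, 1024]` that `LongformerSelfAttention` expects. A sketch of one possible adaptation in the wrapper, assuming the mask is the additive extended mask:)
```python
# Inside RobertaLongSelfAttention.forward, before delegating to LongformerSelfAttention
if attention_mask is not None and attention_mask.dim() == 4:
    # [batch, 1, 1, seq_len] -> [batch, seq_len]
    attention_mask = attention_mask.squeeze(dim=2).squeeze(dim=1)
```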
## Files:
convert_model_to_longformer.py, based on [allenai/longformer/scripts/convert_model_to_long.ipynb](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb):
```python3
import logging
import os
import math
import copy
import torch
import argparse
from dataclasses import dataclass, field
from transformers import RobertaForMaskedLM, XLMTokenizer, TextDataset, DataCollatorForLanguageModeling, Trainer, XLMTokenizer, PreTrainedTokenizer
from transformers import TrainingArguments, HfArgumentParser, XLMTokenizer, RobertaModel, XLMTokenizer
from transformers import LongformerSelfAttention # v4.2.0
# from transformers.modeling_longformer import LongformerSelfAttention # v3.0.2
from conllu import load_conllu_dataset, save_conllu_dataset_in_linebyline_format
from torch.utils.data.dataset import Dataset
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
class LineByLineTextDataset(Dataset):
"""
This will be superseded by a framework-agnostic approach
soon.
"""
def __init__(self, tokenizer: PreTrainedTokenizer, file_path: str, block_size: int):
assert os.path.isfile(file_path)
# Here, we do not cache the features, operating under the assumption
# that we will soon use fast multithreaded tokenizers from the
# `tokenizers` repo everywhere =)
logger.info("Creating features from dataset file at %s", file_path)
with open(file_path, encoding="utf-8") as f:
lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]
batch_encoding = tokenizer(
lines,
add_special_tokens=True,
truncation=True,
padding="max_length",
max_length=block_size,
pad_to_multiple_of=512)
self.examples = batch_encoding["input_ids"]
def __len__(self):
return len(self.examples)
def __getitem__(self, i) -> torch.Tensor:
return torch.tensor(self.examples[i], dtype=torch.long)
class RobertaLongSelfAttention(LongformerSelfAttention):
def forward(
self,
hidden_states,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
past_key_value=None,
output_attentions=False,
):
return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions)
class RobertaLongForMaskedLM(RobertaForMaskedLM):
def __init__(self, config):
super().__init__(config)
for i, layer in enumerate(self.roberta.encoder.layer):
# replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention`
layer.attention.self = RobertaLongSelfAttention(config, layer_id=i)
class RobertaLongModel(RobertaModel):
def __init__(self, config):
super().__init__(config)
for i, layer in enumerate(self.encoder.layer):
# replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention`
layer.attention.self = RobertaLongSelfAttention(config, layer_id=i)
def create_long_model(initialization_model, initialization_tokenizer, save_model_to, attention_window, max_pos):
model = RobertaForMaskedLM.from_pretrained(initialization_model)
tokenizer = XLMTokenizer.from_pretrained(initialization_tokenizer, model_max_length=max_pos)
config = model.config
# extend position embeddings
tokenizer.model_max_length = max_pos
tokenizer.init_kwargs['model_max_length'] = max_pos
current_max_pos, embed_size = model.roberta.embeddings.position_embeddings.weight.shape
max_pos += 2 # NOTE: RoBERTa has positions 0,1 reserved, so embedding size is max position + 2
config.max_position_embeddings = max_pos
assert max_pos > current_max_pos
# allocate a larger position embedding matrix
new_pos_embed = model.roberta.embeddings.position_embeddings.weight.new_empty(max_pos, embed_size)
# copy position embeddings over and over to initialize the new position embeddings
k = 2
step = current_max_pos - 2
while k < max_pos - 1:
new_pos_embed[k:(k + step)] = model.roberta.embeddings.position_embeddings.weight[2:]
k += step
model.roberta.embeddings.position_embeddings.weight.data = new_pos_embed
model.roberta.embeddings.position_ids.data = torch.tensor([i for i in range(max_pos)]).reshape(1, max_pos) # v4.2.0
# model.roberta.embeddings.position_ids = torch.tensor([i for i in range(max_pos)]).reshape(1, max_pos) # v3.0.2
# replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention`
config.attention_window = [attention_window] * config.num_hidden_layers
for i, layer in enumerate(model.roberta.encoder.layer):
longformer_self_attn = LongformerSelfAttention(config, layer_id=i)
longformer_self_attn.query = copy.deepcopy(layer.attention.self.query)
longformer_self_attn.key = copy.deepcopy(layer.attention.self.key)
longformer_self_attn.value = copy.deepcopy(layer.attention.self.value)
longformer_self_attn.query_global = copy.deepcopy(layer.attention.self.query)
longformer_self_attn.key_global = copy.deepcopy(layer.attention.self.key)
longformer_self_attn.value_global = copy.deepcopy(layer.attention.self.value)
layer.attention.self = longformer_self_attn
logger.info(f'saving model to {save_model_to}')
model.save_pretrained(save_model_to)
tokenizer.save_pretrained(save_model_to)
return model, tokenizer
def copy_proj_layers(model):
for i, layer in enumerate(model.roberta.encoder.layer):
layer.attention.self.query_global = copy.deepcopy(layer.attention.self.query)
layer.attention.self.key_global = copy.deepcopy(layer.attention.self.key)
layer.attention.self.value_global = copy.deepcopy(layer.attention.self.value)
return model
def pretrain_and_evaluate(args, model, tokenizer, eval_only, model_path, max_size):
val_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path=args.val_datapath,
block_size=max_size,
)
if eval_only:
train_dataset = val_dataset
else:
logger.info(f'Loading and tokenizing training data is usually slow: {args.train_datapath}')
train_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path=args.train_datapath,
block_size=max_size,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=True,
mlm_probability=0.15,
)
trainer = Trainer(
model=model,
args=args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
# prediction_loss_only=True,
)
eval_loss = trainer.evaluate()
eval_loss = eval_loss['eval_loss']
logger.info(f'Initial eval bpc: {eval_loss/math.log(2)}')
exit(0)
if not eval_only:
trainer = Trainer(
model=model,
args=args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
prediction_loss_only=False,
)
trainer.train(model_path=model_path)
trainer.save_model()
eval_loss = trainer.evaluate()
eval_loss = eval_loss['eval_loss']
logger.info(f'Eval bpc after pretraining: {eval_loss/math.log(2)}')
@dataclass
class ModelArgs:
attention_window: int = field(default=512, metadata={"help": "Size of attention window"})
max_pos: int = field(default=1024, metadata={"help": "Maximum position"})
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--finetune_dataset", required=True, choices=["conllu"], help="Name of dataset to finetune")
return parser.parse_args()
if __name__ == "__main__":
parser = HfArgumentParser((TrainingArguments, ModelArgs,))
args = parse_args()
training_args, model_args = parser.parse_args_into_dataclasses(look_for_args_file=False, args=[
'--output_dir', 'tmp_4.2.0',
'--warmup_steps', '500',
'--learning_rate', '0.00003',
'--weight_decay', '0.01',
'--adam_epsilon', '1e-6',
'--max_steps', '3000',
'--logging_steps', '500',
'--save_steps', '500',
'--max_grad_norm', '5.0',
'--per_device_eval_batch_size', '2',
'--per_device_train_batch_size', '2',
'--gradient_accumulation_steps', '4',
# '--evaluate_during_training',
'--do_train',
'--do_eval',
'--fp16',
'--fp16_opt_level', 'O2',
])
if args.finetune_dataset == "conllu":
saved_dataset = '/server/server_1/user/longformer_summary/conllu/'
if not os.path.exists(saved_dataset):
os.makedirs(saved_dataset)
dataset = load_conllu_dataset('/server/server_1/user/conllu_dataset/')
save_conllu_dataset_in_linebyline_format(dataset, saved_dataset)
training_args.val_datapath = os.path.join(saved_dataset, 'validation.txt')
training_args.train_datapath = os.path.join(saved_dataset, 'train.txt')
initialization_model = 'allegro/herbert-klej-cased-v1'
initialization_tokenizer = 'allegro/herbert-klej-cased-tokenizer-v1'
roberta_base = RobertaForMaskedLM.from_pretrained(initialization_model)
roberta_base_tokenizer = XLMTokenizer.from_pretrained(initialization_tokenizer, model_max_length=512)
model_path = f'{training_args.output_dir}/{initialization_model}-{model_args.max_pos}'
if not os.path.exists(model_path):
os.makedirs(model_path)
logger.info(f'Converting roberta-base into {initialization_model}-{model_args.max_pos}')
model, tokenizer = create_long_model(
initialization_model=initialization_model,
initialization_tokenizer=initialization_tokenizer,
save_model_to=model_path,
attention_window=model_args.attention_window,
max_pos=model_args.max_pos,
)
logger.info(f'Loading the model from {model_path}')
tokenizer = XLMTokenizer.from_pretrained(model_path)
model = RobertaLongForMaskedLM.from_pretrained(model_path)
logger.info(f'Pretraining {initialization_model}-{model_args.max_pos} ... ')
pretrain_and_evaluate(
training_args,
model,
tokenizer,
eval_only=False,
model_path=training_args.output_dir,
max_size=model_args.max_pos,
)
logger.info(f'Copying local projection layers into global projection layers... ')
model = copy_proj_layers(model)
logger.info(f'Saving model to {model_path}')
model.save_pretrained(model_path)
logger.info(f'Loading the model from {model_path}')
tokenizer = XLMTokenizer.from_pretrained(model_path)
model = RobertaLongModel.from_pretrained(model_path)
```
conllu.py
```python3
import re
import glob
import torch
from torch.utils.data import Dataset
import time
import os
import json
from xml.etree.ElementTree import ParseError
import xml.etree.ElementTree as ET
from typing import List, Dict
from sklearn.model_selection import train_test_split
def load_conllu_jsonl(
path: str,
) -> List[Dict[str, str]]:
dataset: List[Dict[str, str]] = list()
with open(path, 'r') as f:
for jsonl in f.readlines():
json_file = json.loads(jsonl)
conllu = json_file['conllu'].split('\n')
doc_text: str = ""
utterance: Dict[str, str] = dict()
for line in conllu:
try:
if line[0].isdigit():
if utterance:
masked_text = utterance["text"]
doc_text = f"{doc_text} {masked_text}.".strip()
utterance = dict()
elif line[0] == '#':
text = line[1:].strip()
key = text.split('=')[0].strip()
value = text.split('=')[1].strip()
utterance[key] = value
except IndexError:
pass
dataset.append({"text": doc_text})
return dataset
def load_conllu_dataset(
path: str,
train_test_val_ratio: float = 0.1,
) -> Dict[str, List[Dict[str, str]]]:
dataset: Dict[str, List[Dict[str, str]]] = dict()
data_dict: Dict[str, List[str]] = dict()
filepath_list = glob.glob(os.path.join(path, '*.jsonl'))
train = filepath_list[:int(len(filepath_list)*0.8)]
test = filepath_list[int(len(filepath_list)*0.8):int(len(filepath_list)*0.9)]
val = filepath_list[int(len(filepath_list)*0.9):]
data_dict["test"] = test
data_dict["train"] = train
data_dict["validation"] = val
for key, value in data_dict.items():
dataset_list: List[Dict[str, str]] = list()
for filepath in value:
data = load_conllu_jsonl(path=filepath)
if data:
dataset_list.extend(data)
dataset[key] = dataset_list
return dataset
def save_conllu_dataset_in_linebyline_format(
dataset: Dict[str, List[Dict[str, str]]],
save_dir: str,
) -> None:
for key, value in dataset.items():
with open(os.path.join(save_dir, f'{key}.txt'), 'w') as f:
for line in value:
# print(line["full"])
f.write(f'{line["text"]}\n')
```
requirements.txt:
```bash
apex @ file:///server/server_1/user/apex
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
datasets==1.2.0
dill==0.3.3
filelock==3.0.12
idna==2.10
joblib==1.0.0
multiprocess==0.70.11.1
numpy==1.19.4
packaging==20.8
pandas==1.2.0
pyarrow==2.0.0
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.5
regex==2020.11.13
requests==2.25.1
sacremoses==0.0.43
sentencepiece==0.1.94
six==1.15.0
tokenizers==0.8.1rc1
torch==1.7.1
tqdm==4.49.0
transformers==3.0.2
typing-extensions==3.7.4.3
urllib3==1.26.2
xxhash==2.0.0
```
## Expected behavior
Model should be converted, saved and loaded. After that it should be properly fine-tuned and saved on disk.
| 01-14-2021 10:08:40 | 01-14-2021 10:08:40 | Comparing the codebases of versions `3.0.2` and `4.2.0`, I have noticed that the `forward` function differs. I have re-added the removed lines right at the beginning of the function:
```python
def forward(
self,
hidden_states,
attention_mask=None,
is_index_masked=None,
is_index_global_attn=None,
is_global_attn=None,
output_attentions=False,
):
"""
:class:`LongformerSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to
`attention_window` happens in :meth:`LongformerModel.forward` to avoid redoing the padding on each layer.
The `attention_mask` is changed in :meth:`LongformerModel.forward` from 0, 1, 2 to:
* -10000: no attention
* 0: local attention
* +10000: global attention
"""
attention_mask = attention_mask.squeeze(dim=2).squeeze(dim=1)
# is index masked or global attention
is_index_masked = attention_mask < 0
is_index_global_attn = attention_mask > 0
is_global_attn = any(is_index_global_attn.flatten())
```
and now the model seems to be working, but returns:
```bash
{'eval_loss': nan, 'eval_runtime': 20.6319, 'eval_samples_per_second': 1.939}
```
Below you can find the results of consecutive steps in the `forward` function. Can you see something wrong here?
```bash
diagonal_mask: tensor([[[[-inf, -inf, -inf, ..., 0., 0., 0.]],
[[-inf, -inf, -inf, ..., 0., 0., 0.]],
[[-inf, -inf, -inf, ..., 0., 0., 0.]],
...,
[[0., 0., 0., ..., -inf, -inf, -inf]],
[[0., 0., 0., ..., -inf, -inf, -inf]],
[[0., 0., 0., ..., -inf, -inf, -inf]]],
[[[-inf, -inf, -inf, ..., 0., 0., 0.]],
[[-inf, -inf, -inf, ..., 0., 0., 0.]],
[[-inf, -inf, -inf, ..., 0., 0., 0.]],
...,
[[0., 0., 0., ..., -inf, -inf, -inf]],
[[0., 0., 0., ..., -inf, -inf, -inf]],
[[0., 0., 0., ..., -inf, -inf, -inf]]]], device='cuda:0',
dtype=torch.float16)
attn_scores: tensor([[[[ -inf, -inf, -inf, ..., 0.5771, 0.2065, -1.0449],
[ -inf, -inf, -inf, ..., -1.3174, -1.5547, -0.6240],
[ -inf, -inf, -inf, ..., -1.3691, -1.3555, -0.3799],
...,
[ -inf, -inf, -inf, ..., 1.7402, 1.6152, 0.8242],
[ -inf, -inf, -inf, ..., 0.5122, 1.0342, 0.2091],
[ -inf, -inf, -inf, ..., 1.7568, -0.1534, 0.7505]],
[[ -inf, -inf, -inf, ..., -0.8066, -1.7480, -2.5527],
[ -inf, -inf, -inf, ..., -3.3652, 0.1046, -0.5811],
[ -inf, -inf, -inf, ..., -0.0958, -1.0957, -0.2377],
...,
[ -inf, -inf, -inf, ..., -0.4148, -0.9497, -0.1229],
[ -inf, -inf, -inf, ..., -1.9443, -1.3467, -1.5342],
[ -inf, -inf, -inf, ..., 0.1263, -0.4407, 0.1486]],
[[ -inf, -inf, -inf, ..., -0.9077, -0.1603, -0.5762],
[ -inf, -inf, -inf, ..., -0.2454, 0.1932, -0.5034],
[ -inf, -inf, -inf, ..., -1.4375, -1.2793, -1.0488],
...,
[ -inf, -inf, -inf, ..., -0.3452, 0.1405, 1.3643],
[ -inf, -inf, -inf, ..., -0.2168, -1.0000, -0.9956],
[ -inf, -inf, -inf, ..., -1.7451, 0.1410, -0.6221]],
...,
[[-1.3965, 0.7798, 0.4707, ..., -inf, -inf, -inf],
[ 0.6260, -0.4146, 0.9180, ..., -inf, -inf, -inf],
[ 0.4807, -1.0742, 1.2803, ..., -inf, -inf, -inf],
...,
[ 0.0909, 0.8022, -0.4170, ..., -inf, -inf, -inf],
[-2.6035, -1.2988, 0.5586, ..., -inf, -inf, -inf],
[-0.6953, -0.8232, 0.0436, ..., -inf, -inf, -inf]],
[[ 1.0889, -0.2776, -0.0632, ..., -inf, -inf, -inf],
[-0.4128, 0.4834, -0.3848, ..., -inf, -inf, -inf],
[-0.8794, 0.9150, -1.5107, ..., -inf, -inf, -inf],
...,
[ 0.8867, -0.4731, 0.3389, ..., -inf, -inf, -inf],
[-0.1365, 0.4905, -2.0000, ..., -inf, -inf, -inf],
[-0.0205, -0.5464, -0.6851, ..., -inf, -inf, -inf]],
[[ nan, nan, nan, ..., -inf, -inf, -inf],
[ nan, nan, nan, ..., -inf, -inf, -inf],
[ nan, nan, nan, ..., -inf, -inf, -inf],
...,
[ nan, nan, nan, ..., -inf, -inf, -inf],
[ nan, nan, nan, ..., -inf, -inf, -inf],
[ nan, nan, nan, ..., -inf, -inf, -inf]]],
[[[ -inf, -inf, -inf, ..., -4.0469, -2.6270, -5.4805],
[ -inf, -inf, -inf, ..., -0.9312, -0.6743, -1.9688],
[ -inf, -inf, -inf, ..., -0.0593, -0.9507, -0.6392],
...,
[ -inf, -inf, -inf, ..., 0.3105, 2.3926, 1.0664],
[ -inf, -inf, -inf, ..., -0.0166, 2.2754, 1.0449],
[ -inf, -inf, -inf, ..., -0.4224, 1.7686, -0.2603]],
[[ -inf, -inf, -inf, ..., -0.5088, -1.2666, -0.4363],
[ -inf, -inf, -inf, ..., -0.3823, -1.7998, -0.4504],
[ -inf, -inf, -inf, ..., -0.1525, 0.1614, -0.0267],
...,
[ -inf, -inf, -inf, ..., 0.0225, -0.5737, 0.2318],
[ -inf, -inf, -inf, ..., 0.7139, 0.6099, 0.3767],
[ -inf, -inf, -inf, ..., 0.2008, -0.6714, 0.5869]],
[[ -inf, -inf, -inf, ..., -0.9302, -1.5303, -2.7637],
[ -inf, -inf, -inf, ..., -0.1124, -0.5850, 0.0818],
[ -inf, -inf, -inf, ..., -1.5176, -1.7822, -0.9111],
...,
[ -inf, -inf, -inf, ..., -0.3618, 0.3486, 0.4368],
[ -inf, -inf, -inf, ..., -0.4158, -1.1660, -0.9106],
[ -inf, -inf, -inf, ..., -0.4636, -0.7012, -0.9570]],
...,
[[-1.0137, -1.2324, -0.2091, ..., -inf, -inf, -inf],
[ 0.0793, 0.1862, -0.6162, ..., -inf, -inf, -inf],
[ 0.2406, 0.1237, -1.0420, ..., -inf, -inf, -inf],
...,
[ 0.5308, 0.3862, 0.9731, ..., -inf, -inf, -inf],
[-0.5752, -0.8174, 0.4766, ..., -inf, -inf, -inf],
[-0.4299, -0.7031, -0.6240, ..., -inf, -inf, -inf]],
[[-2.9512, -1.0410, 0.9194, ..., -inf, -inf, -inf],
[-0.0306, -0.8579, 0.1930, ..., -inf, -inf, -inf],
[ 0.2927, -1.4600, -1.6787, ..., -inf, -inf, -inf],
...,
[ 0.6128, -0.8921, 1.2861, ..., -inf, -inf, -inf],
[-0.7778, -0.8564, 2.3457, ..., -inf, -inf, -inf],
[-0.8877, -1.4834, 0.7783, ..., -inf, -inf, -inf]],
[[ nan, nan, nan, ..., -inf, -inf, -inf],
[ nan, nan, nan, ..., -inf, -inf, -inf],
[ nan, nan, nan, ..., -inf, -inf, -inf],
...,
[ nan, nan, nan, ..., -inf, -inf, -inf],
[ nan, nan, nan, ..., -inf, -inf, -inf],
[ nan, nan, nan, ..., -inf, -inf, -inf]]]],
device='cuda:0', dtype=torch.float16)
``` <|||||>Hey @adamwawrzynski,
sadly we cannot maintain `convert_model_to_longformer.py` as I think it's not in the core transformers library `src/transformers/...`. Feel free to ask your question on the forum: https://discuss.huggingface.co/ though - maybe someone from the community wants to help<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,587 | closed | How to fine-tune T5/Bart for other languages on summarization? | Assume that I have a Japanese dataset for fine-tuning. How could I fine-tune it?
I think that the original tokenizer like `BartTokenizer` or `T5Tokenizer` can't be used for Japanese, right?
So is it possible to use a Japanese tokenizer like `BertJapaneseTokenizer` to fine-tune a Bart model? Please give me some advice. Thank you very much. | 01-14-2021 08:58:37 | 01-14-2021 08:58:37 | Hi! The `BertJapaneseTokenizer` you mention was created specifically for Japanese, so it should encode the Japanese language well.
You can find the list of models that have Japanese checkpoints [here](https://huggingface.co/models?filter=ja).<|||||>Tokenizers can be decoupled from their models, so you can indeed use a BERT tokenizer with a BART model; however, this requires the tokenizer and model to be trained together.<|||||>Thanks for the quick reply!
I don't know the exact procedure to train tokenizer and model **together**. Could you explain it in detail? |
transformers | 9,586 | closed | [bugs] 1. fix: the chinese_ref column is ignored even if we add it to Datasets. | [bugs] 1. fix: the chinese_ref column is ignored even if we add it to Datasets.
[bugs] 2. In DataCollatorForWholeWordMask, e["chinese_ref"] is a list; fix the method that gets its length.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
I followed examples/language-modeling/run_mlm_wwm.py (Chinese whole word masking) and found that the chinese_ref column was not used even though I added it to the dataset, because it had been removed by the trainer.py function _remove_unused_columns().
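(As suggested later in the comments, the column can also be kept by telling the `Trainer` not to drop it — a minimal sketch, with a placeholder output path:)
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",            # placeholder
    remove_unused_columns=False,    # keep extra columns such as `chinese_ref`
)
```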
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-14-2021 08:56:17 | 01-14-2021 08:56:17 | You can avoid removing that column by setting `remove_unused_columns=False` in your `TrainingArguments`.<|||||>I will try it,thank you so much |
transformers | 9,585 | closed | Gradient accumulation for TFTrainer | # What does this PR do?
```TFTrainer``` does not work with ```gradient_accumulation_steps``` > 1 (I am using ```TFGPT2LMHeadModel```).
A similar treatment to #6479 is applied to the labels.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
tensorflow: @jplu | 01-14-2021 08:47:50 | 01-14-2021 08:47:50 | Sorry, we have to revert this PR as `labels` is not a dict but a tensor, and it makes all our examples fail.<|||||>Thanks a lot for having spotted this case; a more suitable fix will be available in #9616. Very sorry for the inconvenience.<|||||>Never mind. Thanks again for the fix. |
transformers | 9,584 | closed | BatchEncoding.to with device with tests | Closes https://github.com/huggingface/transformers/issues/9580
The `torch` module isn't imported directly in the `tokenization_utils.py` file. In a similar fashion to the tensor checks, this PR adds a device check to identify if a variable is a torch device.
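A minimal sketch of that kind of check (illustrative; the helper added by the PR may be named or structured differently):
```python
def _is_torch_device(x):
    # Import locally so the module stays importable when torch is not installed.
    import importlib.util
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    return isinstance(x, torch.device)
```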
Adds a test that failed prior to this PR. | 01-14-2021 08:33:57 | 01-14-2021 08:33:57 | |
transformers | 9,583 | closed | Custom mask when performing forward pass | Suppose I have a sequence that consists of 2 sentences separated by \<\/SEP\> tokens, like A \<\/SEP\> B. When performing a forward pass with a RoBERTa model, I want tokens in sentence A to only attend to tokens in sentence A, and vice versa for sentence B. The mask will look like this:

In summary, is there any way to explicitly pass a custom attention mask to the model? | 01-14-2021 07:09:11 | 01-14-2021 07:09:11 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 9,582 | closed | [deepspeed doc] install issues + 1-gpu deployment | This PR extends the DeepSpeed/FairScale integration documentation to:
* add extensive general troubleshooting for CUDA-extensions (applies to fairscale, deepspeed, apex or any other python pytorch extension with CUDA C++ code) - these are very likely to be encountered by our users - all notes are based on my first-hand encounters with these issues - two of which I ran into yesterday while trying to build fairscale and deepspeed on Sylvain's hardware which he let me use to run the recent benchmarks. So I figured others are likely to have similar issues, and neither fairscale nor deepspeed have these documented anywhere.
* adds deployment for 1 gpu DeepSpeed notes
* reformats sub-headers so that it's easier to link to specific sections
@sgugger | 01-14-2021 05:45:10 | 01-14-2021 05:45:10 | Thank you for your awesome suggestions and tweaks - all done. |
transformers | 9,581 | closed | A question about the weight decay | https://github.com/huggingface/transformers/blob/7729ef738161a0a182b172fcb7c351f6d2b9c50d/examples/run_squad.py#L90
Should this be `layer_norm.weight`? It even seems you are not using weight decay at all. | 01-14-2021 05:33:24 | 01-14-2021 05:33:24 | |
transformers | 9,580 | closed | BatchEncoding.to() throwing torch NameError in 4.2.0; identical code works in 4.1.1 | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Haven't explicitly set up any parallelization other than GPU acceleration and not sure it's relevant since this is an error in the tokenizer
This is on Google Colab with a GPU by the way.
### Who can help
@mfuntowicz (tokenizers)
@sgugger (recent commits to the relevant file)
## Information
Model I am using (Bert, XLNet ...): ALBERT (but problem seems to be in tokenizer)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
See script in reproduce section.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Sequence classification, but the problem arises when transporting a BatchEncoding object to a certain torch device.
## To reproduce
Steps to reproduce the behavior:
Run [this colab notebook](https://colab.research.google.com/drive/1Lpu8wE8-1SKGuVLpRhK8VIy4dOWvKteF?usp=sharing). Alternatively...
1. Create a colab instance with GPU acceleration
2. Install torch, sentencepiece, transformers==4.2.0
3. Run the code below
```python
import torch
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
tokens = tokenizer('hello world', return_tensors='pt')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokens = tokens.to(device)
```
There is no output when running 4.1.1 (expected) but the output when running 4.2.0 is below:
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-2-ad769dc72ebd> in <module>()
6 tokens = tokenizer('hello world', return_tensors='pt')
7 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
----> 8 tokens = tokens.to(device)
1 frames
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in wrapper(*args, **kwargs)
1302 def wrapper(*args, **kwargs):
1303 if is_torch_available():
-> 1304 return func(*args, **kwargs)
1305 else:
1306 raise ImportError(f"Method `{func.__name__}` requires PyTorch.")
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in to(self, device)
802 # Otherwise it passes the casts down and casts the LongTensor containing the token idxs
803 # into a HalfTensor
--> 804 if isinstance(device, str) or isinstance(device, torch.device) or isinstance(device, int):
805 self.data = {k: v.to(device=device) for k, v in self.data.items()}
806 else:
NameError: name 'torch' is not defined
```
## Expected behavior
There should be no console output and the tokens should be transferred to the correct device. The code works perfectly fine in version 4.1.1 of `transformers`.
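A temporary workaround that sidesteps `BatchEncoding.to` (a sketch using the `tokens` and `device` variables from the reproduction above; it yields a plain dict rather than a `BatchEncoding`):
```python
# Move each tensor manually instead of calling BatchEncoding.to(device)
tokens = {k: v.to(device) for k, v in tokens.items()}
```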
I'll roll back to 4.1.1 for now, looking forward to any updates. Thanks! | 01-14-2021 01:21:57 | 01-14-2021 01:21:57 | Hi, thanks for raising an issue!
Indeed, this is problematic. We're going to do a patch release this morning (v4.2.1) with a fix for this.<|||||>The fix is here: https://github.com/huggingface/transformers/pull/9584
It should be merged in a couple of hours, after which we'll release a patch. |
transformers | 9,579 | closed | Some weights of XLMRobertaForMaskedLM were not initialized from the model checkpoint at xlm-roberta-base and are newly initialized | Where can I find weights that won't give the following error?
The code:
model = XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base') | 01-14-2021 00:51:31 | 01-14-2021 00:51:31 | Could you put the full error in the description of the issue rather than in the title? We don't know which weights are not initialized.<|||||>> Could you put the full error in the description of the issue rather than in the title? We don't know which weights are not initialized.
Code:
model = XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base')
Warning:
Some weights of XLMRobertaForMaskedLM were not initialized from the model checkpoint at xlm-roberta-base and are newly initialized: ['lm_head.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.<|||||>You don't need to worry about that message, it lets you know that the bias in the LM head is not initialized - it will be initialized to all zeros.
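If the message is noisy in your logs, here is a sketch of one way to silence it in the meantime (note that this lowers the verbosity for all transformers warnings, not just this one):
```python
from transformers import XLMRobertaForMaskedLM, logging

logging.set_verbosity_error()  # hide warnings such as the "newly initialized" message
model = XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base')
```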
I'm removing this warning in #9615. |
transformers | 9,578 | closed | Fix Trainer with a parallel model | # What does this PR do?
The test introduced in #9566 wasn't actually working as the default batch size is 8, not 16...
So the problem was still there; the reason is that `_setup_devices` in `TrainingArguments` is a `cached_property`, so its result is computed once and for all at init. Had to change the behavior slightly, but it should be okay since it's a private method.
Fixes #9577 (the model was getting wrapped into DataParallel because the value of `self.args.n_gpu` was not updated). | 01-13-2021 23:37:53 | 01-13-2021 23:37:53 | 
transformers | 9,577 | closed | Trainer is using DataParallel on parallelized models | ## Environment info
- `transformers` version: 4.2.0
- Platform: Ubuntu 20.04
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 / CUDA 11.2
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@sgugger @stas00
## Information
I'm trying out the 4.2.0 release with a training script that had been working in 4.1.1.
I'm parallelizing my model over two GPUs, and I had been using the `--model_parallel` training arg in the previous version. Now that it's no longer used, I removed the arg from my training command, but I'm getting an error as though the DataParallel is being used and the model isn't being detected as parallelized:
`RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1`
I did some debugging, and everything seems okay with my model (`trainer.is_model_parallel` returns True). But the `trainer.args.n_gpu` is still 2.
I admit that I don't totally understand what's happening in the trainer code, but it might be an error on line 289?
[`self.args._n_gpu = 1`](https://github.com/huggingface/transformers/blob/126fd281bc309ec29caef99e982640265c8a4fba/src/transformers/trainer.py#L289)
Should that be `self.args.n_gpu = 1`, without the leading underscore?
## To reproduce
Steps to reproduce the behavior:
1. Parallelize a model
2. Train on a machine with multiple GPUs
| 01-13-2021 22:10:49 | 01-13-2021 22:10:49 | The `self.args._n_gpu = 1` is to avoid parallelizing the data so it has nothing to do with your problem (and it is right, we can't set `self.args.n_gpu` which is a property but that's a whole different story!)
How is your model parallelized? Without that piece of code we can't reproduce the bug and help you.<|||||>Thanks @sgugger.
In my test, I'm using some code originally derived from the run_clm.py example. I'm trying to fine-tune a GPT2 model I've trained from scratch. The model was parallelized with the following lines, and this exact fine-tuning script ran successfully yesterday in 4.1.1, using the `--model_parallel` training arg.
```
device_map = {0: range(0, 15),
              1: range(15, 32)}
model.parallelize(device_map)
```
The error I'm getting now looks a lot like what would happen if I left out the `--model_parallel` flag in 4.1.1.
<|||||>> RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1
Please post the full trace.
I have only experimented with t5 and bart MP so far, but gpt2 is supposed to be very similar.
Most likely the outputs aren't being copied back to the 0th GPU on return, so this won't have anything to do with the trainer; the issue you encountered most likely has to do with evaluation and not training.
I had to fix t5-MP to do that, but the PR with the fix hasn't been merged.
https://github.com/huggingface/transformers/blob/58d047a596a97fbb815acb3e657102bf1960b06a/src/transformers/models/t5/modeling_t5.py#L1263-L1266
I won't be surprised if gpt2 is missing that too.
`model_parallel_inputs_to_specific_device` is a new function that isn't in master, but part of these 2 PRs: https://github.com/huggingface/transformers/pull/9323 and https://github.com/huggingface/transformers/pull/9384 - it relies on another function - the full new file is here: https://github.com/huggingface/transformers/blob/fe21c43745fcf3f7958c17c2ac461bd784094205/src/transformers/utils/model_parallel_utils.py
The current MP implementations are very limited and at the moment I highly recommend you look at DeepSpeed instead, see:
https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685 and
https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400
You will need master for that as it was just merged 2 days ago.
We also removed `--model_parallel` in trainer master as it wasn't fully baked in the first place.<|||||>@stas00 This is linked to how `TrainingArguments.n_gpu` was computed. Could reproduce, and the fix in #9578 removes the bug.<|||||>That's easy then. The error though very much reminded me of the issue I described in my comment above.<|||||>Thanks both!
@stas00 Definitely excited to check out DeepSpeed – that's the reason I started testing my code in 4.2.0 |
transformers | 9,576 | closed | Pipeline - Truncation Keyword not Recognized | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-5.4.0-58-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
and
- `transformers` version: 4.2.0
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Narsil @sgugger @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I tried to run both of the code snippets below and got the following error. The pipeline code looks like it should pass everything through correctly, but it doesn't. Maybe the `__call__` function needs to be set up as it is in https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/text2text_generation.py#L59.
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.tokenization_utils import TruncationStrategy
model = AutoModelForSequenceClassification.from_pretrained("/path/to/model/dir")
tokenizer = AutoTokenizer.from_pretrained("/path/to/model/dir")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=True, truncation=TruncationStrategy.LONGEST_FIRST)
results = nlp(narratives)
```
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.tokenization_utils import TruncationStrategy
kwargs = {}
kwargs["truncation"] = TruncationStrategy.LONGEST_FIRST
model = AutoModelForSequenceClassification.from_pretrained("/path/to/model/dir")
tokenizer = AutoTokenizer.from_pretrained("/path/to/model/dir")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=True, **kwargs)
results = nlp(narratives)
```
```
Traceback (most recent call last):
File "/ptce/evaluate.py", line 102, in <module>
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=True, truncation=TruncationStrategy.LONGEST_FIRST)
File "/usr/local/lib/python3.6/dist-packages/transformers/pipelines/__init__.py", line 418, in pipeline
return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/pipelines/text_classification.py", line 39, in __init__
super().__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'truncation'
```
## Expected behavior
For `truncation` to pass from `super().__call__(*args, **kwargs)` to `__call__(self, *args, **kwargs)` and then to `_parse_and_tokenize(self, inputs, padding=True, add_special_tokens=True, truncation=TruncationStrategy.DO_NOT_TRUNCATE, **kwargs)` where the default value is overwritten and text longer than max_sequence_length narratives are truncated.
| 01-13-2021 21:49:43 | 01-13-2021 21:49:43 | The [documentation](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.pipeline) of the pipeline function clearly shows the `truncation` argument is not accepted, so I'm not sure why you are filing this as a bug.
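It can be passed when calling the pipeline instead; here is a sketch reusing the `model`, `tokenizer` and `narratives` from the snippets above (whether the kwarg is forwarded to the tokenizer depends on the pipeline version, see PR #9432 mentioned below):
```python
from transformers import pipeline
from transformers.tokenization_utils import TruncationStrategy

nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=True)
results = nlp(narratives, truncation=TruncationStrategy.LONGEST_FIRST)
```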
The `__call__` method of a class is not what is used when you create it but when you... well, call it. So `results = nlp(narratives, **kwargs)` will probably work better.<|||||>@sgugger , you're right. Thanks for the quick response. Sorry, while I looked at https://github.com/huggingface/transformers/pull/9432, I didn't look close enough at https://github.com/huggingface/transformers/blob/master/tests/test_pipelines_summarization.py#L78 or the updated docs. It works now! Thanks @Narsil for adding this feature. |
transformers | 9,575 | closed | Converting original BERT tf checkpoints to BertForMaskedLM | Hi! I have some BERT models that I've trained using the original Google code for BERT, and I was hoping to port them over to `transformers`. I noticed that there are two scripts to do this conversion: one for [the original tf1.x code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py), and one for [the new tf2 code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py).
I noticed that tf2 conversion script has the following comment:
> You may adapt this script to include classification/MLM/NSP/etc. heads.
I'm using tf1.x, and that comment isn't in the tf1.x conversion script. However, the output model is a `BertForPreTraining`, and I'd like to port the entire MLM head over too. I'm assuming that I'd need to somehow get a `BertForMaskedLM` in order to keep the MLM head.
Questions:
1. Would I have to make any modifications to the tf1.x conversion script other than swapping `BertForPreTraining` -> `BertForMaskedLM`?
2. I also noticed that the BERT configs on the model hub are slightly different than the original Google configs. Is there any additional processing that I'd need to do to convert my configs too, so that they can be loaded by the `Auto*` classes?
Thank you! | 01-13-2021 21:33:41 | 01-13-2021 21:33:41 | The `BertForPreTraining` contains two heads: the NSP and the MLM heads. Therefore, by using the tf1 conversion script, you're already porting the entire model!
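If you do want a `BertForMaskedLM` object afterwards, a minimal sketch (assuming the conversion output directory contains the usual `config.json` and `pytorch_model.bin`): `from_pretrained` loads the shared MLM-head weights and simply drops the unused NSP head with a warning.
```python
from transformers import BertForMaskedLM

# "/path/to/converted_checkpoint" is a placeholder for your conversion output directory.
model = BertForMaskedLM.from_pretrained("/path/to/converted_checkpoint")
```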
For the configuration, it should align pretty seamlessly to Google's configurations, but you can check the expected field here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/configuration_bert.py#L120-L138<|||||>Great, thank you! For anyone who's curious, I ended up updating my config file with a few extra fields from here to make `Auto*` detection work: https://huggingface.co/bert-base-multilingual-cased/blob/main/config.json |
transformers | 9,574 | closed | Upstream (and rename) sortish sampler | # What does this PR do?
This PR moves the logic of the "sortish sampler" from examples/seq2seq utils to `trainer_pt_utils` to make this behavior available for all types of training in the main `Trainer`. It also fixes a bug in the previous implementation of the distributed sortish sampler that did not synchronize the random generator used for the shuffling (thus the data returned on the two processes joined together was not a permutation of the whole dataset).
The sortish sampler logic is to group items of the training dataset that have similar lengths together to minimize padding while retaining a bit of randomness. It does some sorting for this, but that's not the main feature, and it's unclear for anyone reading it what it might do, so the argument name was badly chosen in my opinion. I chose to name it `group_by_length` when introducing it in `TrainingArguments` (while keeping the old `sortish_sampler` argument in `Seq2SeqTrainingArguments` for backward compatibility, for now).
The actual samplers are given by the two introduced classes `LengthGroupedSampler` and `DistributedLengthGroupedSampler`. They are both tested, and in particular, the distributed one has a test checking that it uses the same random generator for the bit of randomness.
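As a usage sketch (only the new flag matters here; the rest of the fine-tuning setup is assumed to be the usual `Trainer` boilerplate):
```python
from transformers import TrainingArguments

# Group training samples of similar length to minimize padding.
args = TrainingArguments(output_dir="out", group_by_length=True)
```
Under the hood this selects `LengthGroupedSampler`, or `DistributedLengthGroupedSampler` in distributed training.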
Renaming the old `sortish_sampler` arg is just done in the tests of seq2seq examples for now, it will be done more generally when the seq2seq finetuning script is rewritten to use `datasets`. | 01-13-2021 20:51:28 | 01-13-2021 20:51:28 | |
transformers | 9,573 | closed | Multilingual MiniLM | Hello everyone!
I am trying to load this model from Microsoft using the path provided [here](huggingface.co/microsoft/Multilingual-MiniLM-L12-H384). I am applying the same code provided there:
`tokenizer = AutoTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")`
But I am facing this error message:
`stat: path should be string, bytes, os.PathLike or integer, not NoneType`
My intuition says that the model is not correctly stored on the server, but I am not sure. | 01-13-2021 19:41:18 | 01-13-2021 19:41:18 | Alright! I found the solution. For the tokenizer, XLMRobertaTokenizer should be used instead of AutoTokenizer. <|||||>Thanks for reporting. Now that we have model versioning, the author(s) of [`"microsoft/Multilingual-MiniLM-L12-H384"`](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) could update the model's config.json to specify a `tokenizer_class` so that AutoTokenizer works out of the box.
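For reference, a sketch of the explicit-tokenizer workaround that works today; adding `"tokenizer_class": "XLMRobertaTokenizer"` to the repo's config.json is what would make the `AutoTokenizer` call above work as well:
```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")
```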
@JetRunner @LysandreJik do you remember who the model author(s) are?<|||||>I think I did the uploading. I'll update the config tomorrow!<|||||>I believe the config is not yet updated because the error is still there<|||||>@sersoage Re-uploading it now. Thanks for the note!<|||||>Done<|||||>@JetRunner Thank you! |
transformers | 9,572 | closed | How to train the models in smaller spochs | Hi
I am under low compute hours; could you tell me how I can train finetune_trainer.py for a smaller number of iterations and then continue training from the saved checkpoint to reproduce the same results as full training? What precautions should be taken, and what should I pay attention to? Thanks | 01-13-2021 18:31:51 | 01-13-2021 18:31:51 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks!<|||||>yes,. I asked but forum is inactive, please do not close it I really need
help on this.
<|||||>Please do not close it, I really need help on this and the forum is inactive.<|||||>Our github issue policy is detailed in the related [ISSUE.md](https://github.com/huggingface/transformers/blob/master/ISSUES.md) and very clearly indicates that we reserve the issue tracker for bug reports and feature requests, which this issue is not, as is the case for several other issues you have recently opened.
Feel free to post here a link to the thread you should open on the forum if you want to be visible in both locations (though this should stay very exceptional).
The forum is NOT inactive; I see that you already have several answers to your [related post](https://discuss.huggingface.co/t/training-models-for-smaller-epochs-and-then-continue-trianing/3153). That is the place for discussion, NOT here in the issues.
Overall, please note that if you persist in not following the guidelines and open-source collaboration policies that we have defined and shared with the community on the repository in the [CODE_OF_CONDUCT](https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md), the [CONTRIBUTING](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) and [ISSUES](https://github.com/huggingface/transformers/blob/master/ISSUES.md) documents, we reserve the right to take the moderation actions advised by GitHub in the [Community guidelines](https://docs.github.com/en/free-pro-team@latest/github/site-policy/github-community-guidelines).<|||||>Hi Thomas,
I did receive some responses, but they were not helpful; this needs a response from someone who developed the code, to know the small details which can help.
Please assist me with the issue. Is there a way I could get better help on the forum, so that someone with more knowledge sees the question?
Thanks
<|||||>To me this question can safely be marked as a bug: if one trains finetune_trainer.py for some epochs and then restarts training from the saved checkpoints, it does not reach the same accuracy as full training. Can I file a bug for this issue? Thanks
<|||||>The maintainers have very clearly labelled it as NOT a bug ([here](https://github.com/huggingface/transformers/issues/9572#issuecomment-760020852)).
What you are asking for here is free consulting, which unfortunately we don't provide at HuggingFace, so here are the next steps I think are most appropriate in the present case:
- you've probably reached the limit of the amount of consulting the community can provide for free; the best option now would be to hire a consultant to build a solution for you
- regarding the current issue and your usage of the open-source repository, my mission is now to step in and protect the maintainers of the repository so that they are able to conduct their mission of maintaining the repository and do not diverge from their task into free consulting missions. As such this is now the second warning I send you to follow the guidelines and open-source collaboration policies that we have defined and shared with the community on the repository in the CODE_OF_CONDUCT, the CONTRIBUTING and ISSUES documents (see my message [here](https://github.com/huggingface/transformers/issues/9572#issuecomment-760178500)).
- Which leads us to the last point: if I have to spend more time and send you a third warning to use the repository as it was designed for the community, I will have to limit your ability to open issues on our repositories, following the GitHub Community guidelines, as I explained to you in my message [here](https://github.com/huggingface/transformers/issues/9572#issuecomment-760178500). That would be the first time I have to do that with a community of thousands of people, so please just use our forum tools as we designed them, according to our guidelines stated [here](https://github.com/huggingface/transformers/issues/9572#issuecomment-760178500).
transformers | 9,571 | closed | Tensorflow pretrained FlauBERT mixed precision error | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Python version: 3.7.6
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: Yes - GPU Tesla V100-SXM2-16GB, compute capability 7.0
- Using distributed or parallel set-up in script?: No
### Who can help
@jplu
## Information
Model I am using (Bert, XLNet ...): "jplu/tf-flaubert-small-cased"
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x ] my own task or dataset: text classification
## To reproduce
Steps to reproduce the behavior:
1. Set the dtype policies to mixed precision "float16" with tensorflow
2. Load pre-trained tensorflow flaubert model ("jplu/tf-flaubert-small-cased")
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
**After executing the following code:**
```python
from transformers import TFFlaubertModel
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
model_name = "jplu/tf-flaubert-small-cased"
model = TFFlaubertModel.from_pretrained(model_name)
```
**I got the following error:**
```
InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a half tensor but is a float tensor [Op:AddV2]
```
## Expected behavior
There should not be any problem. When I run the "bert-base-cased" pretrained model it works perfectly (the code below does not return any error)
```python
from transformers import TFBertModel
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
model_name = "bert-base-cased"
model = TFBertModel.from_pretrained(model_name)
```
Maybe there is an issue with hard-coded uses of float32 in FlauBERT, which has not been fixed yet, unlike in other models?
| 01-13-2021 17:25:47 | 01-13-2021 17:25:47 | Hello!
Unfortunately the TF models are not yet compliant with the "float16" mixed precision. This is our main goal for the next release (the one after 4.2.X) as we are actively working on this.
Sorry for the inconvenience. I will update this post once done. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,570 | closed | Compliancy with tf-nightly | # What does this PR do?
This PR makes the lib usable with the nightly builds of TensorFlow, and fixes an issue with the minimum TensorFlow version. | 01-13-2021 16:48:45 | 01-13-2021 16:48:45 | Ok, just restored the previous version checking. |
transformers | 9,569 | closed | Add head_mask/decoder_head_mask for BART | This PR implement `head_mask` and `decoder_head_mask` for PyTorch BART-based models. The full list, please, see below:
- **BART**
- **MBart**
- **Blenderbot**
- **BlenderbotSmall**
- **Marian**
- **Pegasus**
This PR is a follow up on the closed PR #9404.
**Motivation**:
According to HuggingFace's websites "There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”)." This PR enables to mask attention heads in encoder and decoder models exactly like for BERT. This PR thus creates an opportunity to study the importance of attention heads in encoder-decoder BERT-like model.
**Description**
- New arguments `head_mask` and `decoder_head_mask` are passed to all the BART-based models `...Model`, `...ForConditionalGeneration` and `...ForQuestionAnswering` after the four arguments `input_ids, attention_mask, decoder_input_ids, decoder_attention_mask`, so that testing and the whole pipeline remain smooth (see the usage sketch after this list).
- This PR also contains updated `test_headmasking`, which currently works fine with one problem - BART-based models do not satisfy a condition:
```
self.assertNotEqual(attentions[1][..., 0, :, :].flatten().sum().item(), 0.0).
```
Fixing this problem is currently underway.
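A usage sketch of the new arguments (assuming this PR; `head_mask`/`decoder_head_mask` have shape `(num_layers, num_heads)`, with 0 masking a head and 1 keeping it):
```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("Masking attention heads in BART.", return_tensors="pt")
head_mask = torch.ones(model.config.encoder_layers, model.config.encoder_attention_heads)
head_mask[0, 0] = 0.0  # prune head 0 of the first encoder layer for this forward pass
outputs = model(**inputs, head_mask=head_mask)
```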
**Reviewer:** @patrickvonplaten | 01-13-2021 16:22:37 | 01-13-2021 16:22:37 | Thanks for opening a new PR. Let me know if you need a review (It's also ok if I go into the PR and fix some things if your stuck :-) )<|||||>@patrickvonplaten I hope this PR is again ready for review. The only thing remaining to resolve is that issue in `test_headmasking` described above. Currently, I've been trying to fix this one, but I'll be grateful for sure if you can have a look at that too :)<|||||>Hey @patrickvonplaten. I would like to inform you I fixed `test_headmasking` for BART-based. The problem was that code inside
```
self.assertNotEqual(attentions[1][..., 0, :, :].flatten().sum().item(), 0.0)
```
pointed to the last layer of encoder/decoder (encoder-decoder models have only 2 layers in each module while BERT has 5 layers during testing). At the end of the day, this condition was invalid for BART-based models considering the `head_mask` to be
```
head_mask = torch.ones(
self.model_tester.num_hidden_layers,
self.model_tester.num_attention_heads,
device=torch_device,
)
head_mask[0, 0] = 0
head_mask[-1, :-1] = 0
```
I hope this PR is then ready for review. |
transformers | 9,568 | closed | pegasus fine-tune: TypeError: shift_tokens_right() missing 1 required positional argument: 'decoder_start_token_id' | ## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-3.10.0-1062.18.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@mfuntowicz
@sgugger
@patrickvonplaten
## Information
I am trying to fine-tune pegasus on the summarization task of the xsum dataset according to the instructions here.
## To reproduce
add more parameters in finetune_pegasus_xsum.sh
```
python finetune.py \
--gpus 0 \
--learning_rate=1e-4 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.25 \
--max_source_length 512 --max_target_length 56 \
--freeze_embeds --label_smoothing 0.1 --adafactor --task summarization_xsum \
--model_name_or_path google/pegasus-xsum \
--output_dir=xsum_results \
--data_dir xsum \
--tokenizer_name google/pegasus-large \
"$@"
```
in the terminal:
```
(env) (base) [cheop.byeon@node01 seq2seq-distillation]$ sh finetune_pegasus_xsum.sh
/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: Checkpoint directory xsum_results exists and is not empty. With save_top_k=1, all files in this directory will be deleted when a checkpoint is saved!
warnings.warn(*args, **kwargs)
/home/cheop.byeon/env/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 10000). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
File "finetune.py", line 442, in <module>
main(args)
File "finetune.py", line 417, in main
logger=logger,
File "/home/cheop.byeon/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py", line 389, in generic_train
trainer.fit(model)
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/accelerators/cpu_accelerator.py", line 48, in train
results = self.train_or_test()
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 68, in train_or_test
results = self.trainer.train()
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 462, in train
self.run_sanity_check(self.get_model())
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in run_sanity_check
_, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 570, in run_evaluation
output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 171, in evaluation_step
output = self.trainer.accelerator_backend.validation_step(args)
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/accelerators/cpu_accelerator.py", line 64, in validation_step
output = self.trainer.model.validation_step(*args)
File "finetune.py", line 182, in validation_step
return self._generative_step(batch)
File "finetune.py", line 226, in _generative_step
loss_tensors = self._step(batch)
File "finetune.py", line 145, in _step
decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)
TypeError: shift_tokens_right() missing 1 required positional argument: 'decoder_start_token_id'
```
## Expected behavior
To make fine-tuning work in my environment.
| 01-13-2021 16:19:50 | 01-13-2021 16:19:50 | Hey @cheop-byeon,
we no longer actively maintain the `research_projects` folder ourselves. To solve your problem you can however just add
`model.config.decoder_start_token_id` as the third argument to the function.<|||||>Note that we recommend that you use the research project with its proposed version, `pip install transformers==4.1.0`. We won't actively maintain the code in `research_projects` anymore.
transformers | 9,567 | closed | Switch metrics in run_ner to datasets | # What does this PR do?
This PR uses `datasets` to compute the metrics in the `run_ner` script. This allows us to grab the entity level metrics on top of the overall ones if we want them, which is controlled by the newly added flag `--return_entity_level_metrics`.
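Roughly, the underlying call looks like this (a sketch; the token-label strings are illustrative):
```python
from datasets import load_metric

metric = load_metric("seqeval")
results = metric.compute(
    predictions=[["B-PER", "O", "B-LOC"]],
    references=[["B-PER", "O", "O"]],
)
print(results["overall_f1"])  # overall scores
print(results["PER"])         # per-entity precision/recall/f1/number
```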
Fixes #9546
| 01-13-2021 15:28:10 | 01-13-2021 15:28:10 | |
transformers | 9,566 | closed | Fix data parallelism in Trainer | # What does this PR do?
A bug in data parallelism was introduced in #9451 (mostly because of some weird behavior of dataclasses in python) and data was... well not parallelized anymore (more like the batch size ended up divided by the number of GPUs).
This PR fixes that and to make sure it didn't break the behavior introduced in #9451 for model parallelism, adds a multiGPU test (passing locally) to ensure data is not parallelized when the model is parallel.
| 01-13-2021 14:34:44 | 01-13-2021 14:34:44 | |
transformers | 9,565 | closed | Make logs TF compliant | # What does this PR do?
Currently, when a TensorFlow model is run in graph mode, the logs are displayed as many times as the method is called, even if the condition is not respected. This is because in graph mode the logging calls are not compiled into the graph and are therefore displayed all the time. To fix this, we now use `tf.print`, which compiles the message inside the graph so it is displayed only when the conditions are respected.
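A small sketch of the difference (not the PR code itself): inside a `tf.function`, `tf.print` is part of the graph and only fires when the guarding condition is true at runtime, whereas a Python `print` would fire at tracing time regardless.
```python
import tensorflow as tf

@tf.function
def check(x):
    if tf.reduce_any(x < 0):              # autograph turns this into a graph conditional
        tf.print("negative values seen")  # emitted only when the condition holds
    return tf.abs(x)

check(tf.constant([1.0, -2.0]))  # prints the message
check(tf.constant([1.0, 2.0]))   # stays silent
```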
## Fixes issue
#9285 | 01-13-2021 12:59:00 | 01-13-2021 12:59:00 | Okey for me |
transformers | 9,564 | closed | Remove unused token_type_ids in MPNet | # What does this PR do?
This PR adds a warning when the argument `token_type_ids` is given, showing a message saying that this argument is never used. I just suppressed the internal use of this argument without modifying the method signatures, in order not to introduce a breaking change.
Should I update the tokenizer to make it return only `attention_mask`?
| 01-13-2021 11:39:56 | 01-13-2021 11:39:56 | I'm totally fine to definitely suppress this argument in once if this is prefered (I would prefer as well)<|||||>Not an expert neither. Maybe @LysandreJik knows better.<|||||>> Thanks for adapting @jplu!
>
> Not an expert on the tokenization part, but is the method `build_inputs_with_special_tokens` still necessary in that case? (It's in both tokenizers files.)
I think it's still required as it puts *e.g.* the [sep] token correctly between two sentences. I don't think that `build_inputs_with_special_tokens` necessarily has something to do with `token_type_ids` |
transformers | 9,563 | closed | finetune_trainer.py script is not using given config file | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1 (stable)
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Trainer: @sgugger
examples/seq2seq: @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using the official example scripts: `examples/seq2seq/finetune_trainer.py`
## Problem
When giving a local configuration file with `--config_name` the script first loads the config from the local files as expected, but then it loads a new configuration file from cache, which is not the one provided through the script's arguments:
```
2021-01-13 11:04:52.919133: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
01/13/2021 11:04:55 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: True
01/13/2021 11:04:55 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='/content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, model_parallel=False, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=10.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Jan13_11-04-55_a1d3ea40f4c6', logging_first_step=False, logging_steps=10, save_steps=1000, save_total_limit=3, no_cuda=False, seed=42, fp16=True, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model='rougeL', greater_is_better='True', ignore_data_skip=False, fp16_backend='auto', sharded_ddp=False, label_smoothing=0.1, sortish_sampler=True, predict_with_generate=True, adafactor=True, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear')
[INFO|configuration_utils.py:429] 2021-01-13 11:04:55,952 >> loading configuration file /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_config/config.json
[INFO|configuration_utils.py:467] 2021-01-13 11:04:55,953 >> Model config BartConfig {
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_final_layer_norm": false,
"architectures": [
"BartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 6,
"decoder_start_token_id": 2,
"do_blenderbot_90_layernorm": false,
"dropout": 0.1,
"early_stopping": true,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 2,
"extra_pos_embeddings": 2,
"force_bos_token_to_be_generated": true,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_length": 150,
"max_position_embeddings": 1024,
"min_length": 10,
"model_type": "bart",
"no_repeat_ngram_size": 5,
"normalize_before": false,
"normalize_embedding": true,
"num_beams": 4,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"prefix": " ",
"replacing_rate": 0,
"scale_embedding": false,
"static_position_embeddings": false,
"student_decoder_layers": null,
"student_encoder_layers": null,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 142,
"min_length": 56,
"no_repeat_ngram_size": 3,
"num_beams": 4
}
},
"use_cache": true,
"vocab_size": 50264
}
01/13/2021 11:04:56 - INFO - filelock - Lock 140709138970608 acquired on /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78.lock
[INFO|file_utils.py:1301] 2021-01-13 11:04:56,231 >> https://huggingface.co/sshleifer/distilbart-cnn-12-6/resolve/main/config.json not found in cache or force_download set to True, downloading to /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/tmprjc9mj89
Downloading: 100% 1.62k/1.62k [00:00<00:00, 1.52MB/s]
[INFO|file_utils.py:1305] 2021-01-13 11:04:56,516 >> storing https://huggingface.co/sshleifer/distilbart-cnn-12-6/resolve/main/config.json in cache at /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78
[INFO|file_utils.py:1308] 2021-01-13 11:04:56,518 >> creating metadata file for /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78
01/13/2021 11:04:56 - INFO - filelock - Lock 140709138970608 released on /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78.lock
[INFO|configuration_utils.py:431] 2021-01-13 11:04:56,522 >> loading configuration file https://huggingface.co/sshleifer/distilbart-cnn-12-6/resolve/main/config.json from cache at /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78
[INFO|configuration_utils.py:467] 2021-01-13 11:04:56,523 >> Model config BartConfig {
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_final_layer_norm": false,
"architectures": [
"BartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 6,
"decoder_start_token_id": 2,
"do_blenderbot_90_layernorm": false,
"dropout": 0.1,
"early_stopping": true,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 2,
"extra_pos_embeddings": 2,
"force_bos_token_to_be_generated": true,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"length_penalty": 2.0,
"max_length": 142,
"max_position_embeddings": 1024,
"min_length": 56,
"model_type": "bart",
"no_repeat_ngram_size": 3,
"normalize_before": false,
"normalize_embedding": true,
"num_beams": 4,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"prefix": " ",
"replacing_rate": 0,
"scale_embedding": false,
"static_position_embeddings": false,
"student_decoder_layers": null,
"student_encoder_layers": null,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 142,
"min_length": 56,
"no_repeat_ngram_size": 3,
"num_beams": 4
}
},
"use_cache": true,
"vocab_size": 50264
}
```
You can see for example that the `min_length` parameter is different in the second output, which is the default one and not the one provided by me. | 01-13-2021 11:38:16 | 01-13-2021 11:38:16 | Hi @marcoabrate
the script does use the provided `config` file; the reason you see `min_length` 56 is that the script replaces the generation params (max/min length, etc.) in `config` using `task_specific_params`
https://github.com/huggingface/transformers/blob/245cdb469d2a7f47316926fdbac925e0ed149332/examples/seq2seq/finetune_trainer.py#L216
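Roughly, the logic at that line is (a sketch, not the exact script code; the `"summarization"` key here depends on the `--task` value):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("sshleifer/distilbart-cnn-12-6")
task_params = config.task_specific_params.get("summarization", {})
config.update(task_params)  # overrides min_length, max_length, num_beams, ...
```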
Here in the `config`, `min_length` is 56 in `task_specific_params`, so 10 gets changed to 56. <|||||>Thank you. I have managed to make it work with my configuration parameters, the problem was indeed the task specific params.
In any case, I think the `loading configuration file https://huggingface.co/sshleifer/distilbart-cnn-12-6/resolve/main/config.json from cache` message and the second config print are a bit misleading.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,562 | closed | Fix barthez tokenizer | The barthez tokenizer should be put in the "no config tokenizer", as two tokenizers with the same configs can't be put together.
Running the [following code](https://github.com/huggingface/transformers/issues/9422#issuecomment-759327863) works now:
```py
from transformers import AutoTokenizer
barthez_tokenizer = AutoTokenizer.from_pretrained("moussaKam/barthez")
``` | 01-13-2021 11:13:52 | 01-13-2021 11:13:52 | |
transformers | 9,561 | closed | Fix slow tests v4.2.0 | Fixes a bunch of slow tests that were failing. | 01-13-2021 10:36:29 | 01-13-2021 10:36:29 | |
transformers | 9,560 | closed | Adding Megatron models. | # 🌟 New model addition
Is it feasible to add Megatron models? It seems the architecture is really just a GPT2; most of the work should be in creating the config, fusing layers from the available weights here: https://github.com/pytorch/fairseq/tree/master/examples/megatron_11b and making them available.
There are Nvidia's megatron (Bert and Gpt variants) and Facebook-11b megatron (gpt variant)
If we stick to that then we can't run the model on a single GPU, so we should probably make sure this is compatible with:
- https://github.com/huggingface/transformers/pull/9208
- https://github.com/huggingface/transformers/pull/9211
**Is keeping the current GPT2 architecture and using deepspeed's ZeRo and other parallelism schemes without touching original implementation feasible?**
## Model description
https://github.com/pytorch/fairseq/blob/e3c4282551e819853952284681e9ed60398c5c4a/examples/megatron_11b/README.md
<!-- Important information -->
## Open source status
* [x] the model implementation is available: https://github.com/ngoyal2707/Megatron-LM/blob/adb23324c222aad0aad89308e70302d996a5eaeb/mpu/transformer.py (Most of the work seems to be on Matrix parallelization)
* [x] the model weights are available: https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz (Megatron 11b), https://github.com/NVIDIA/Megatron-LM#downloading-checkpoints (Nvidia's version, 3b and 8.3b don't seem to be available)
* [x] who are the authors: (mention them, if possible by @gh-username) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro https://arxiv.org/abs/1909.08053
https://developer.nvidia.com/blog/language-modeling-using-megatron-a100-gpu/
@stas00 @patrickvonplaten
| 01-13-2021 08:55:36 | 01-13-2021 08:55:36 | Since DeepSpeed both integrates and uses Megatron-LM almost everywhere in its tutorials it most likely should just work. Of course, the devil is in the detail.
As I haven't had a chance to study/work with GPT2 yet, I will let others comment on the more important part of your query.<|||||>Any plans of adding MegatronT5? (https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py)<|||||>As this is a really old thread, perhaps make a request in a new Issue, @jordiae?
And of course, if you're interested you're more than welcome to try and add it yourself. This is of course only an invitation.<|||||>> As this is a really old thread, perhaps make a request in a new Issue, @jordiae?
>
> And of course, if you're interested you're more than welcome to try and add it yourself. This is of course only an invitation.
Got it! Posted here because the issue was open. Thanks.<|||||>Will close this issue as it's really kind of outdated. |
transformers | 9,559 | closed | tokenizer decode method | <!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@mfuntowicz
## Information
While reading the source code of the tokenizer method `decode`, I found a problem at **line 719 of the file `tokenization_utils.py`**. In my view, the type of the variable `token` is `str`, while the type of the property `all_special_ids` is `List[int]`. Although this problem will not raise an error and has no effect on the `decode` method, I still think it is a case that needs to be fixed for clarity.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I think it is required to replace `self.all_special_ids` with `self.all_special_tokens` in the **line 719 of the file `tokenization_utils.py`**. | 01-13-2021 08:49:57 | 01-13-2021 08:49:57 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,558 | closed | SMITH Google | # 🌟 New model addition
## Google's SMITH Algorithm
## https://github.com/google-research/google-research/tree/master/smith
* [x] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 01-13-2021 07:51:23 | 01-13-2021 07:51:23 | This is a duplicate. See #9526 <|||||>Oh ok, thanks. |
transformers | 9,557 | closed | Speed up TopKLogitsWarper and TopPLogitsWarper (pytorch) | # What does this PR do?
Speeds up TopKLogitsWarper and TopPLogitsWarper using torch filling functions.
Here's a minimal example to reproduce the slow behavior (and test speed of improvements):
```
import torch
from transformers import TopPLogitsWarper, TopKLogitsWarper, LogitsWarper
import timeit
class TopKLogitsWarperNew(LogitsWarper):
r"""
:class:`transformers.LogitsWarper` that performs top-k, i.e. restricting to the k highest probability elements.
Args:
top_k (:obj:`int`):
The number of highest probability vocabulary tokens to keep for top-k-filtering.
filter_value (:obj:`float`, `optional`, defaults to :obj:`-float("Inf")`):
All filtered values will be set to this float value.
min_tokens_to_keep (:obj:`int`, `optional`, defaults to 1):
Minimum number of tokens that cannot be filtered.
"""
def __init__(self, top_k: int, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1):
if not isinstance(top_k, int) or top_k <= 0:
raise ValueError(f"`top_k` has to be a strictly positive integer, but is {top_k}")
self.top_k = top_k
self.filter_value = filter_value
self.min_tokens_to_keep = min_tokens_to_keep
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
top_k = min(max(self.top_k, self.min_tokens_to_keep), scores.size(-1)) # Safety check
# Remove all tokens with a probability less than the last token of the top-k
indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
scores = scores.masked_fill(indices_to_remove, self.filter_value) # changed here
return scores
class TopPLogitsWarperNew(LogitsWarper):
"""
:class:`transformers.LogitsWarper` that performs top-p, i.e. restricting to top tokens summing to prob_cut_off <=
prob_cut_off.
Args:
top_p (:obj:`float`):
If set to < 1, only the most probable tokens with probabilities that add up to :obj:`top_p` or higher are
kept for generation.
filter_value (:obj:`float`, `optional`, defaults to :obj:`-float("Inf")`):
All filtered values will be set to this float value.
min_tokens_to_keep (:obj:`int`, `optional`, defaults to 1):
Minimum number of tokens that cannot be filtered.
"""
def __init__(self, top_p: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1):
if not isinstance(top_p, float) or (top_p < 0 or top_p > 1.0):
raise ValueError(f"`top_p` has to be a float > 0 and < 1, but is {top_p}")
self.top_p = top_p
self.filter_value = filter_value
self.min_tokens_to_keep = min_tokens_to_keep
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
sorted_logits, sorted_indices = torch.sort(scores, descending=True)
cumulative_probs = sorted_logits.softmax(dim=-1).cumsum(dim=-1) # changed here
# Remove tokens with cumulative top_p above the threshold (token with 0 are kept)
sorted_indices_to_remove = cumulative_probs > self.top_p
if self.min_tokens_to_keep > 1:
# Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)
sorted_indices_to_remove[..., : self.min_tokens_to_keep - 1] = 0
# Shift the indices to the right to keep also the first token above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
# scatter sorted tensors to original indexing
indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
scores = scores.masked_fill(indices_to_remove, self.filter_value) # changed here
return scores
input_ids = torch.randint(0, 10000, (256, 256))
scores = torch.randn(256, 10000)
top_k_lw = TopKLogitsWarper(100)
top_p_lw = TopPLogitsWarper(0.95)
top_k_lw_new = TopKLogitsWarperNew(100)
top_p_lw_new = TopPLogitsWarperNew(0.95)
print(f"Existing top_k impl time for 100 iterations on CPU = {timeit.timeit(lambda: top_k_lw(input_ids, scores), number=100)}")
print(f"Proposed top_k impl time for 100 iterations on CPU = {timeit.timeit(lambda: top_k_lw_new(input_ids, scores), number=100)}")
print(f"Existing top_p impl time for 100 iterations on CPU = {timeit.timeit(lambda: top_p_lw(input_ids, scores), number=100)}")
print(f"Proposed top_p impl time for 100 iterations on CPU = {timeit.timeit(lambda: top_p_lw_new(input_ids, scores), number=100)}")
if torch.cuda.is_available():
input_ids = input_ids.cuda()
scores = scores.cuda()
print(f"Existing top_k impl time for 100 iterations on GPU = {timeit.timeit(lambda: top_k_lw(input_ids, scores), number=100)}")
print(f"Proposed top_k impl time for 100 iterations on GPU = {timeit.timeit(lambda: top_k_lw_new(input_ids, scores), number=100)}")
print(f"Existing top_p impl time for 100 iterations on GPU = {timeit.timeit(lambda: top_p_lw(input_ids, scores), number=100)}")
print(f"Proposed top_p impl time for 100 iterations on GPU = {timeit.timeit(lambda: top_p_lw_new(input_ids, scores), number=100)}")
```
Timings reported:
```
Existing top_k impl time for 100 iterations on CPU = 2.5527561419994527
Proposed top_k impl time for 100 iterations on CPU = 0.36601612999947974
Existing top_p impl time for 100 iterations on CPU = 6.4072540179995485
Proposed top_p impl time for 100 iterations on CPU = 4.1470332960007
Existing top_k impl time for 100 iterations on GPU = 0.09082965299967327
Proposed top_k impl time for 100 iterations on GPU = 0.008193381999262783
Existing top_p impl time for 100 iterations on GPU = 1.1027910299999348
Proposed top_p impl time for 100 iterations on GPU = 0.9008321309993335
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
-->
| 01-13-2021 07:24:57 | 01-13-2021 07:24:57 | This looks great @LSinev! <|||||>Ok, looks good to merge. I checked that your implementation works with Pytorch 1.4 as well |
transformers | 9,556 | closed | Where is convert_bert_original_tf_checkpoint_to_pytorch.py? | HI:
I am getting the following error when implementing entity extraction in BERT. OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index']
I am very new to using BERT, and noted that [issue 2110](https://github.com/huggingface/transformers/issues/2110) ran into a similar problem. Issue 2110 was pointed to the convert_bert_original_tf_checkpoint_to_pytorch.py file. However, the current link isn't working. Could you point me to its current location?
V/r,
L | 01-13-2021 02:49:48 | 01-13-2021 02:49:48 | Its current location is [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py).<|||||>Hi @sednaasil, what are you trying to do? Could you show the code that you're using so that we may help you debug it? Thanks.<|||||>Hi!
I downloaded the uncased_L-12_H-768_A-12 BERT model to create an entity extraction tool following this [method](https://github.com/abhishekkrthakur/bert-entity-extraction). The model I downloaded did not include 'pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index' files, resulting in an error implementing my model. I saw a previous post where the files I needed were generated with the convert_bert_original_tf_checkpoint_to_pytorch.py file; however the link to the file was broken. Is this the correct way to proceed?
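For reference, here is a rough sketch of how such a Google TF checkpoint can be converted from Python with `from_tf=True` (the paths are placeholders, and TensorFlow needs to be installed for the conversion):
```python
from transformers import BertConfig, BertForPreTraining

# Placeholder paths: point them at wherever the downloaded checkpoint was unpacked.
config = BertConfig.from_json_file("uncased_L-12_H-768_A-12/bert_config.json")
model = BertForPreTraining.from_pretrained(
    "uncased_L-12_H-768_A-12/bert_model.ckpt.index", from_tf=True, config=config
)
# Writes pytorch_model.bin + config.json so the folder can be loaded with from_pretrained().
model.save_pretrained("uncased_L-12_H-768_A-12-pytorch")
```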
<|||||>Where did you download your model from? Is something preventing you from using `bert-base-cased`?
```py
from transformers import BertModel
model = BertModel.from_pretrained("bert-base-cased")
```<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,555 | closed | DPRReaderTokenizer does not generate the attention_mask properly | Hello,
It seems like the DPRReaderTokenizer does not generate the `attention_mask` properly.
Steps to reproduce on the master branch
```bash
(venv) sergey_mkrtchyan test (master) $ python
Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import DPRReaderTokenizer, DPRReader
>>> tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
>>> model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
>>> encoded_inputs = tokenizer(questions="What is love ?",
... titles="Haddaway",
... texts="What Is Love is a song recorded by the artist Haddaway",
... padding=True,
... return_tensors='pt')
>>> encoded_inputs
{'input_ids': tensor([[ 101, 2054, 2003, 2293, 1029, 102, 2018, 2850, 4576, 102, 2054, 2003,
2293, 2003, 1037, 2299, 2680, 2011, 1996, 3063, 2018, 2850, 4576]]), 'attention_mask': tensor([True])}
```
Notice the `attention_mask` above is incorrect. It should have the same shape as the `input_ids` tensor.
## Environment info
- `transformers` version: 4.2.0dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Git blame says @lhoestq and @LysandreJik might be able to help :)
I believe the issue is in this part of the code
https://github.com/huggingface/transformers/blob/5f6721032af46cf491fe69c010805f8786bf63a1/src/transformers/models/dpr/tokenization_dpr.py#L254
(same thing for the fast tokenizer)
I fixed it locally by replacing the above line with
```Python
attention_mask = []
for input_ids in encoded_inputs["input_ids"]:
attention_mask.append([int(input_id != self.pad_token_id) for input_id in input_ids])
```
I am happy to submit a PR if that looks reasonable to you.
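For what it's worth, a quick sanity check of the fixed behavior would look roughly like this (a sketch reusing the reproduction above, untested):
```python
import torch
from transformers import DPRReaderTokenizer

tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
encoded = tokenizer(
    questions="What is love ?",
    titles="Haddaway",
    texts="What Is Love is a song recorded by the artist Haddaway",
    padding=True,
    return_tensors="pt",
)
# After the fix, the mask should mirror input_ids and be 1 exactly where the token is not padding.
assert encoded["attention_mask"].shape == encoded["input_ids"].shape
expected = (encoded["input_ids"] != tokenizer.pad_token_id).long()
assert torch.equal(encoded["attention_mask"].long(), expected)
```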
| 01-13-2021 02:47:38 | 01-13-2021 02:47:38 | Indeed, it doesn't! We would gladly welcome a PR!<|||||>Closed by #9663 :) |
transformers | 9,554 | closed | Fix classification script: enable dynamic padding with truncation | # What does this PR do?
To fix the issue (below) in the run_glue.py script, the tokenizer's `max_length` value is now assigned directly from the `max_seq_length` argument. Now it is possible to truncate the sequences and use dynamic padding. By default `max_length` is 128, which means truncation to 128 tokens. To disable truncation, set `max_length = None`.
Fixes #9551
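For context, the change is roughly of the following shape (a sketch rather than the exact diff; `sentence1_key`, `sentence2_key`, `padding` and `data_args` follow the names already used in the script's preprocessing):
```python
# max_seq_length now feeds the tokenizer directly, so truncation is applied even when
# padding is left to the data collator (dynamic padding).
max_length = data_args.max_seq_length

def preprocess_function(examples):
    texts = (
        (examples[sentence1_key],)
        if sentence2_key is None
        else (examples[sentence1_key], examples[sentence2_key])
    )
    return tokenizer(*texts, padding=padding, max_length=max_length, truncation=True)
```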
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/9551#issue-784679542
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 01-13-2021 01:19:33 | 01-13-2021 01:19:33 | |
transformers | 9,553 | closed | [setup.py] note on how to get to transformers exact dependencies from shell | As a follow up to #9550, this PR adds a few handy one liners to quickly access the correct dependency versions from shell.
e.g. if you want to install the deps for a group of packages we control, with their correct versions, you just need to run:
```
pip install -U $(python -c 'import sys; from transformers.dependency_versions_table import deps; \
print(" ".join([deps[x] for x in sys.argv[1:]]))' numpy filelock protobuf requests tqdm regex \
sentencepiece sacremoses tokenizers packaging importlib_metadata)
```
That was one option for torchhub, but since that environment didn't have `transformers` installed it didn't work, and a different solution was provided in https://github.com/huggingface/transformers/pull/9552
@sgugger, @LysandreJik | 01-13-2021 00:26:09 | 01-13-2021 00:26:09 | |
transformers | 9,552 | closed | [CI] use correct deps for torchhub | As a follow-up to https://github.com/huggingface/transformers/pull/9550, here is a clean solution that requires only one source (`setup.py`) to edit for dependencies and groups thereof.
The PR
1. defines a new dependency group for `torchhub` in `setup.py` (see the sketch right after this list)
2. installs the exact dependencies of that group inside .github workflow
3. uninstalls `transformers` since Sylvain said it shouldn't be there, but it had to be installed to get the deps easily.
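A sketch of what step 1 amounts to in `setup.py` (the group contents below are illustrative, taken from the same package list as the one-liner in #9553; `deps` and `extras` are the dicts `setup.py` already defines):
```python
# setup.py (sketch): a "torchhub" extras group built from the pinned versions in `deps`,
# the same table that generates src/transformers/dependency_versions_table.py.
extras["torchhub"] = [
    deps[p]
    for p in (
        "filelock", "importlib_metadata", "numpy", "packaging", "protobuf",
        "regex", "requests", "sacremoses", "sentencepiece", "tokenizers", "tqdm",
    )
]
```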
Of course, it'll need to be merged first for:
```
pip install -e git+https://github.com/huggingface/transformers.git#egg=transformers[torchhub]
```
to work, since it's not there now... meanwhile you can test it from this branch:
```
pip install -e git+https://github.com/stas00/transformers.git@torchhub-deps#egg=transformers[torchhub]
```
-------------
Alternatively to:
```
pip install -e git+https://github.com/huggingface/transformers.git#egg=transformers[torchhub]
pip uninstall -y transformers
```
we can do:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[torchhub]
pip uninstall -y transformers
```
which is a fraction of a second faster.
----------------
Yet another approach is to extend `setup.py` with what I created here a few years back:
https://github.com/fastai/fastai1/blob/a8327427ad5137c4899a1b4f74745193c9ea5be3/setup.py#L11-L22
This then:
```
python setup.py -q deps --dep-groups=torchhub
```
would dump the dependencies just for the specified extra groups, which can then be fed to `pip install`, so there will be no need to install the main package. Literally, the above command would just dump `extras["torchhub"]` in this case.
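A minimal sketch of such a command, assuming the `deps`/`extras` dicts defined earlier in `setup.py` (the class and option names are illustrative, not the fastai implementation):
```python
from setuptools import Command

class DepsDumpCommand(Command):
    """`python setup.py deps --dep-groups=torchhub` prints the pinned deps of those groups."""

    description = "dump the pinned dependencies of the requested extras groups"
    user_options = [("dep-groups=", None, "comma-separated list of extras groups")]

    def initialize_options(self):
        self.dep_groups = ""

    def finalize_options(self):
        self.groups = [g for g in self.dep_groups.split(",") if g]

    def run(self):
        wanted = []
        for group in self.groups:
            wanted.extend(extras[group])  # e.g. extras["torchhub"]
        print(" ".join(wanted))

# registered via: setup(..., cmdclass={"deps": DepsDumpCommand}, ...)
```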
--------------
Finally, we could make `src/transformers/dependency_versions_table.py` contain the full dependency groups as well; then one would only need to get hold of that file to extract groups of dependencies, e.g.:
```
wget https://raw.githubusercontent.com/huggingface/transformers/master/src/transformers/dependency_versions_table.py
python -c 'import sys; from dependency_versions_table import dep_group; print(dep_group[sys.argv[1]])' torchhub
```
this is hypothetical since we don't currently have `dep_group` dict in `dependency_versions_table.py`.
@sgugger, @LysandreJik | 01-13-2021 00:13:46 | 01-13-2021 00:13:46 | Is there a way to leverage this to also update the dependencies in [`hubconf.py`](https://github.com/huggingface/transformers/blob/master/hubconf.py)?<|||||>From what I understand, you need to install the dependencies by hand before, so what would it add to have this in hubconf? From what I gathered this list of "dependencies" is just there to be dynamically imported when executing the code to import the model, but it's not doing anything for the install.<|||||>> From what I understand, you need to install the dependencies by hand before, so what would it add to have this in hubconf? From what I gathered this list of "dependencies" is just there to be dynamically imported when executing the code to import the model, but it's not doing anything for the install.
What Sylvain said.
Moreover we validated that yesterday, since trying to add specific versions in `hubconf.py` made no difference. So that var should have been called "imports" to be more precise.
|
transformers | 9,551 | closed | Dynamic padding + truncation in classification script | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
@VictorSanh
@sgugger
It seems that it is not possible to use dynamic padding along with truncation in the classification script, because when the `tokenizer` gets `max_length = None` it just skips truncation.
https://github.com/huggingface/transformers/blob/063d8d27f4e1d089dc76f22e378b86b219167e3b/examples/text-classification/run_glue.py#L290
On the other hand, it works in the language modeling script.
https://github.com/huggingface/transformers/blob/063d8d27f4e1d089dc76f22e378b86b219167e3b/examples/language-modeling/run_mlm.py#L311
 | 01-12-2021 23:56:53 | 01-12-2021 23:56:53 | Yes, the `max_length` should be passed the same way. Would you like to open a PR to fix `run_glue.py`?<|||||>Okay, I will do that. |
transformers | 9,550 | closed | Use the right version of tokenizers | # What does this PR do?
Pulls the version of tokenizers from our deps into `hubconf.py`, otherwise it might install a version of tokenizers that is more recent (if available on PyPI). When that is the case, the check of our packages fails at import. | 01-12-2021 23:14:57 | 01-12-2021 23:14:57 | Test passes, so merging to get the CI green. |
transformers | 9,549 | closed | Use the right version of tokenizers for torchhub | # What does this PR do?
The hubconf.py is using `tokenizers` without checking the version transformers needs, which leads to an import error if a more recent version of tokenizers is available on PyPI (like right now). | 01-12-2021 23:11:38 | 01-12-2021 23:11:38 | Branched from my last PR and not master... |
transformers | 9,548 | closed | Quick tour runs into OOM on Colab | ## Environment info
Google Colab
### Who can help
@jplu @LysandreJik @sgugger
## Information
Followed the quick tour using a Colab notebook https://huggingface.co/docs/datasets/quicktour.html#fine-tuning-a-deep-learning-model
Colab Runtime type: GPU
But the training process runs into OOM
```
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-7-cb1582039f5e> in <module>()
2 opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
3 model.compile(optimizer=opt, loss=loss_fn, metrics=["accuracy"])
----> 4 model.fit(tfdataset, epochs=3)
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:
ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[32,512,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node tf_bert_for_sequence_classification/bert/encoder/layer_._4/attention/output/LayerNorm/batchnorm/mul_1 (defined at /usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_tf_bert.py:327) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[gradient_tape/tf_bert_for_sequence_classification/bert/embeddings/position_embeddings/embedding_lookup/Reshape/_532]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[32,512,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node tf_bert_for_sequence_classification/bert/encoder/layer_._4/attention/output/LayerNorm/batchnorm/mul_1 (defined at /usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_tf_bert.py:327) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_24759]
Function call stack:
train_function -> train_function
```
## To reproduce
Steps to reproduce the behavior:
The steps to reproduce are in the publicly accessible Colab notebook
[https://colab.research.google.com/drive/1Q3tBx57f2A8Hn1D7IXS-1nKandB86S3f](https://colab.research.google.com/drive/1Q3tBx57f2A8Hn1D7IXS-1nKandB86S3f)
## Expected behavior
The sample runs without running out of memory.
| 01-12-2021 21:14:25 | 01-12-2021 21:14:25 | If you don't have any memory left, you should use a lower batch size. In the line:
```
>>> tfdataset = tf.data.Dataset.from_tensor_slices((features, dataset["labels"])).batch(32)
```
replace 32 by something lower.
Also, please use the [forums](https://discuss.huggingface.co/) for this kind of questions. |
transformers | 9,547 | closed | Fine-tuning LMwithNSP | When I fine-tune BERT using BertForPreTraining I get an error on this line --> outputs = model(input_ids=input_ids, attention_mask=input_mask, token_type_ids=segment_ids,
labels=lm_label_ids, next_sentence_label=is_next)
and also on this line: loss.backward()
I get this error: RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`. This error usually occurs when the shape of the predictions does not match the labels, but I checked the shapes like this: len(prediction_logits) == len(lm_label_ids) and they are the same.
What is the problem?
 | 01-12-2021 20:59:46 | 01-12-2021 20:59:46 | Hi, could you please provide everything asked in the issue template? Information relative to your environment as well as the code that triggered the error. Thanks.<|||||>Environment: transformers 4.0.0. My code is below, and the error message follows the code.
```py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import logging
import argparse
from tqdm import tqdm, trange
import numpy as np
import torch
from torch.utils.data import DataLoader, RandomSampler , SequentialSampler
from torch.utils.data.distributed import DistributedSampler
#from pytorch_pretrained_bert.tokenization import BertTokenizer
#from pytorch_pretrained_bert.modeling import BertForPreTraining
from transformers import BertTokenizer, BertForPreTraining
#from pytorch_pretrained_bert.optimization import BertAdam
from transformers import XLNetTokenizer
from transformers import AdamW, get_linear_schedule_with_warmup
#from transformers import BertForPreTraining
import sentencepiece as spm
from torch.utils.data import Dataset
import random
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt='%m/%d/%Y %H:%M:%S',
level=logging.INFO)
logger = logging.getLogger(__name__)
def warmup_linear(x, warmup=0.002):
if x < warmup:
return x / warmup
return 1.0 - x
def accuracy(out, labels, total_test):
class_preds = out.data.cpu().numpy().argmax(axis=-1)
labels = labels.data.cpu().numpy()
return np.sum(class_preds == labels) / total_test
class BERTDataset(Dataset):
def __init__(self, corpus_path, tokenizer, seq_len, encoding="utf-8", corpus_lines=None, on_memory=True):
self.vocab = tokenizer.vocab
self.tokenizer = tokenizer
self.seq_len = seq_len
self.on_memory = on_memory
self.corpus_lines = corpus_lines # number of non-empty lines in input corpus
self.corpus_path = corpus_path
self.encoding = encoding
self.current_doc = 0 # to avoid random sentence from same doc
# for loading samples directly from file
self.sample_counter = 0 # used to keep track of full epochs on file
self.line_buffer = None # keep second sentence of a pair in memory and use as first sentence in next pair
# for loading samples in memory
self.current_random_doc = 0
self.num_docs = 0
self.sample_to_doc = [] # map sample index to doc and line
# load samples into memory
if on_memory:
self.all_docs = []
doc = []
self.corpus_lines = 0
with open(corpus_path, "r", encoding=encoding) as f:
for line in tqdm(f, desc="Loading Dataset", total=corpus_lines):
line = line.strip()
if line == "":
self.all_docs.append(doc)
doc = []
# remove last added sample because there won't be a subsequent line anymore in the doc
self.sample_to_doc.pop()
else:
# store as one sample
sample = {"doc_id": len(self.all_docs),
"line": len(doc)}
self.sample_to_doc.append(sample)
doc.append(line)
self.corpus_lines = self.corpus_lines + 1
# if last row in file is not empty
if self.all_docs[-1] != doc:
self.all_docs.append(doc)
self.sample_to_doc.pop()
self.num_docs = len(self.all_docs)
# load samples later lazily from disk
else:
if self.corpus_lines is None:
with open(corpus_path, "r", encoding=encoding) as f:
self.corpus_lines = 0
for line in tqdm(f, desc="Loading Dataset", total=corpus_lines):
if line.strip() == "":
self.num_docs += 1
else:
self.corpus_lines += 1
# if doc does not end with empty line
if line.strip() != "":
self.num_docs += 1
self.file = open(corpus_path, "r", encoding=encoding)
self.random_file = open(corpus_path, "r", encoding=encoding)
def __len__(self):
# last line of doc won't be used, because there's no "nextSentence". Additionally, we start counting at 0.
return self.corpus_lines - self.num_docs - 1
def __getitem__(self, item):
cur_id = self.sample_counter
self.sample_counter += 1
if not self.on_memory:
# after one epoch we start again from beginning of file
if cur_id != 0 and (cur_id % len(self) == 0):
self.file.close()
self.file = open(self.corpus_path, "r", encoding=self.encoding)
t1, t2, is_next_label = self.random_sent(item)
# tokenize
tokens_a = self.tokenizer.tokenize(t1)
tokens_b = self.tokenizer.tokenize(t2)
# combine to one sample
cur_example = InputExample(guid=cur_id, tokens_a=tokens_a, tokens_b=tokens_b, is_next=is_next_label)
# transform sample to features
cur_features = convert_example_to_features(cur_example, self.seq_len, self.tokenizer)
cur_tensors = (torch.tensor(cur_features.input_ids),
torch.tensor(cur_features.input_mask),
torch.tensor(cur_features.segment_ids),
torch.tensor(cur_features.lm_label_ids),
torch.tensor(cur_features.is_next))
return cur_tensors
def random_sent(self, index):
"""
Get one sample from corpus consisting of two sentences. With prob. 50% these are two subsequent sentences
from one doc. With 50% the second sentence will be a random one from another doc.
:param index: int, index of sample.
:return: (str, str, int), sentence 1, sentence 2, isNextSentence Label
"""
t1, t2 = self.get_corpus_line(index)
if random.random() > 0.5:
label = 0
else:
t2 = self.get_random_line()
label = 1
assert len(t1) > 0
assert len(t2) > 0
return t1, t2, label
def get_corpus_line(self, item):
"""
Get one sample from corpus consisting of a pair of two subsequent lines from the same doc.
:param item: int, index of sample.
:return: (str, str), two subsequent sentences from corpus
"""
t1 = ""
t2 = ""
assert item < self.corpus_lines
if self.on_memory:
sample = self.sample_to_doc[item]
t1 = self.all_docs[sample["doc_id"]][sample["line"]]
t2 = self.all_docs[sample["doc_id"]][sample["line"] + 1]
# used later to avoid random nextSentence from same doc
self.current_doc = sample["doc_id"]
return t1, t2
else:
if self.line_buffer is None:
# read first non-empty line of file
while t1 == "":
t1 = self.file.__next__().strip()
t2 = self.file.__next__().strip()
else:
# use t2 from previous iteration as new t1
t1 = self.line_buffer
t2 = self.file.__next__().strip()
# skip empty rows that are used for separating documents and keep track of current doc id
while t2 == "" or t1 == "":
t1 = self.file.__next__().strip()
t2 = self.file.__next__().strip()
self.current_doc = self.current_doc + 1
self.line_buffer = t2
assert t1 != ""
assert t2 != ""
return t1, t2
def get_random_line(self):
"""
Get random line from another document for nextSentence task.
:return: str, content of one line
"""
# Similar to original tf repo: This outer loop should rarely go for more than one iteration for large
# corpora. However, just to be careful, we try to make sure that
# the random document is not the same as the document we're processing.
for _ in range(10):
if self.on_memory:
rand_doc_idx = random.randint(0, len(self.all_docs) - 1)
rand_doc = self.all_docs[rand_doc_idx]
line = rand_doc[random.randrange(len(rand_doc))]
else:
rand_index = random.randint(1, self.corpus_lines if self.corpus_lines < 1000 else 1000)
# pick random line
for _ in range(rand_index):
line = self.get_next_line()
# check if our picked random line is really from another doc like we want it to be
if self.current_random_doc != self.current_doc:
break
return line
def get_next_line(self):
""" Gets next line of random_file and starts over when reaching end of file"""
try:
line = self.random_file.__next__().strip()
# keep track of which document we are currently looking at to later avoid having the same doc as t1
if line == "":
self.current_random_doc = self.current_random_doc + 1
line = self.random_file.__next__().strip()
except StopIteration:
self.random_file.close()
self.random_file = open(self.corpus_path, "r", encoding=self.encoding)
line = self.random_file.__next__().strip()
return line
class InputExample(object):
"""A single training/test example for the language model."""
def __init__(self, guid, tokens_a, tokens_b=None, is_next=None, lm_labels=None):
"""Constructs a InputExample.
Args:
guid: Unique id for the example.
tokens_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
tokens_b: (Optional) string. The untokenized text of the second sequence.
Only must be specified for sequence pair tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
self.guid = guid
self.tokens_a = tokens_a
self.tokens_b = tokens_b
self.is_next = is_next # nextSentence
self.lm_labels = lm_labels # masked words for language model
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, input_ids, input_mask, segment_ids, is_next, lm_label_ids):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.is_next = is_next
self.lm_label_ids = lm_label_ids
def random_word(tokens, tokenizer):
"""
Masking some random tokens for Language Model task with probabilities as in the original BERT paper.
:param tokens: list of str, tokenized sentence.
:param tokenizer: Tokenizer, object used for tokenization (we need it's vocab here)
:return: (list of str, list of int), masked tokens and related labels for LM prediction
"""
output_label = []
for i, token in enumerate(tokens):
prob = random.random()
# mask token with 15% probability
if prob < 0.15:
prob /= 0.15
# 80% randomly change token to mask token
if prob < 0.8:
tokens[i] = "[MASK]"
# 10% randomly change token to random token
elif prob < 0.9:
tokens[i] = random.choice(list(tokenizer.vocab.items()))[0]
# -> rest 10% randomly keep current token
# append current token to output (we will predict these later)
try:
output_label.append(tokenizer.vocab[token])
except KeyError:
# For unknown words (should not occur with BPE vocab)
output_label.append(tokenizer.vocab["[UNK]"])
logger.warning("Cannot find token '{}' in vocab. Using [UNK] insetad".format(token))
else:
# no masking token (will be ignored by loss function later)
output_label.append(-1)
return tokens, output_label
def convert_example_to_features(example, max_seq_length, tokenizer):
"""
Convert a raw sample (pair of sentences as tokenized strings) into a proper training sample with
IDs, LM labels, input_mask, CLS and SEP tokens etc.
:param example: InputExample, containing sentence input as strings and is_next label
:param max_seq_length: int, maximum length of sequence.
:param tokenizer: Tokenizer
:return: InputFeatures, containing all inputs and labels of one sample as IDs (as used for model training)
"""
tokens_a = example.tokens_a
tokens_b = example.tokens_b
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
_truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
t1_random, t1_label = random_word(tokens_a, tokenizer)
t2_random, t2_label = random_word(tokens_b, tokenizer)
# concatenate lm labels and account for CLS, SEP, SEP
lm_label_ids = ([-1] + t1_label + [-1] + t2_label + [-1])
# The convention in BERT is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambigiously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
# used as as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
assert len(tokens_b) > 0
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
lm_label_ids.append(-1)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
assert len(lm_label_ids) == max_seq_length
if example.guid < 5:
logger.info("*** Example ***")
logger.info("guid: %s" % (example.guid))
logger.info("tokens: %s" % " ".join(
[str(x) for x in tokens]))
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
logger.info(
"segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
logger.info("LM label: %s " % (lm_label_ids))
logger.info("Is next sentence label: %s " % (example.is_next))
features = InputFeatures(input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
lm_label_ids=lm_label_ids,
is_next=example.is_next)
return features
def main():
parser = argparse.ArgumentParser()
## Required parameters
parser.add_argument("--train_file",
default=None,
type=str,
required=True,
help="The input train corpus.")
parser.add_argument("--test_file",
default=None,
type=str,
required=True,
help="The input test corpus.")
parser.add_argument("--tokenizer_model", default=None, type=str, required=True,
help="tokenizer pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--bert_model", default=None, type=str, required=True,
help="Bert pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--config_file", default=None, type=str, required=True,
help="Bert pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--output_dir",
default=None,
type=str,
required=True,
help="The output directory where the model checkpoints will be written.")
## Other parameters
parser.add_argument("--max_seq_length",
default=128,
type=int,
help="The maximum total input sequence length after WordPiece tokenization. \n"
"Sequences longer than this will be truncated, and sequences shorter \n"
"than this will be padded.")
parser.add_argument("--train_batch_size",
default=32,
type=int,
help="Total batch size for training.")
parser.add_argument("--eval_batch_size",
default=32,
type=int,
help="Total batch size for eval.")
parser.add_argument("--learning_rate",
default=5e-5,
type=float,
help="The initial learning rate for Adam.")
parser.add_argument("--num_train_epochs",
default=4,
type=float,
help="Total number of training epochs to perform.")
parser.add_argument("--adam_epsilon",
default=1e-8,
type=float,
help="Proportion of training to perform linear learning rate warmup for. "
"E.g., 0.1 = 10%% of training.")
parser.add_argument("--no_cuda",
action='store_true',
help="Whether not to use CUDA when available")
parser.add_argument("--on_memory",
action='store_true',
help="Whether to load train samples into memory or use disk")
parser.add_argument("--do_lower_case",
action='store_true',
help="Whether to lower case the input text. True for uncased models, False for cased models.")
parser.add_argument("--local_rank",
type=int,
default=-1,
help="local_rank for distributed training on gpus")
parser.add_argument('--seed',
type=int,
default=42,
help="random seed for initialization")
parser.add_argument('--gradient_accumulation_steps',
type=int,
default=1,
help="Number of updates steps to accumualte before performing a backward/update pass.")
parser.add_argument('--fp16',
action='store_true',
help="Whether to use 16-bit float precision instead of 32-bit")
parser.add_argument('--loss_scale',
type=float, default=0,
help="Loss scaling to improve fp16 numeric stability. Only used when fp16 set to True.\n"
"0 (default value): dynamic loss scaling.\n"
"Positive power of 2: static loss scaling value.\n")
args = parser.parse_args()
if args.local_rank == -1 or args.no_cuda:
device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
n_gpu = torch.cuda.device_count()
else:
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)
n_gpu = 1
# Initializes the distributed backend which will take care of sychronizing nodes/GPUs
torch.distributed.init_process_group(backend='nccl')
logger.info("device: {} n_gpu: {}, distributed training: {}, 16-bits training: {}".format(
device, n_gpu, bool(args.local_rank != -1), args.fp16))
if args.gradient_accumulation_steps < 1:
raise ValueError("Invalid gradient_accumulation_steps parameter: {}, should be >= 1".format(
args.gradient_accumulation_steps))
args.train_batch_size = int(args.train_batch_size / args.gradient_accumulation_steps)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
#if not args.do_train and not args.do_eval:
# raise ValueError("At least one of `do_train` or `do_eval` must be True.")
if os.path.exists(args.output_dir) and os.listdir(args.output_dir):
raise ValueError("Output directory ({}) already exists and is not empty.".format(args.output_dir))
os.makedirs(args.output_dir, exist_ok=True)
# tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
#tokenizer = XLNetTokenizer.from_pretrained(args.tokenizer_model)
tokenizer = BertTokenizer.from_pretrained(args.tokenizer_model, do_lower_case=False)
# train_examples = None
num_train_steps = None
print("Loading Train Dataset", args.train_file)
train_dataset = BERTDataset(args.train_file, tokenizer, seq_len=args.max_seq_length,
corpus_lines=None, on_memory=args.on_memory)
print("Loading eval Dataset", args.test_file)
eval_dataset = BERTDataset(args.test_file, tokenizer, seq_len=args.max_seq_length,
corpus_lines=None, on_memory=args.on_memory)
num_train_steps = int(
len(train_dataset) / args.train_batch_size / args.gradient_accumulation_steps * args.num_train_epochs)
# Prepare model
model = BertForPreTraining.from_pretrained(
args.bert_model,
output_attentions=False,
output_hidden_states=False,)
model.to(device)
if args.fp16:
model.half()
if args.local_rank != -1:
try:
from apex.parallel import DistributedDataParallel as DDP
except ImportError:
raise ImportError(
"Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.")
model = DDP(model)
elif n_gpu > 1:
model = torch.nn.DataParallel(model)
# Prepare optimizer
'''
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
if args.fp16:
try:
from apex.optimizers import FP16_Optimizer
from apex.optimizers import FusedAdam
except ImportError:
raise ImportError(
"Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.")
optimizer = FusedAdam(optimizer_grouped_parameters,
lr=args.learning_rate,
bias_correction=False,
max_grad_norm=1.0)
if args.loss_scale == 0:
optimizer = FP16_Optimizer(optimizer, dynamic_loss_scale=True)
else:
optimizer = FP16_Optimizer(optimizer, static_loss_scale=args.loss_scale)
else:
optimizer = AdamW(optimizer_grouped_parameters,
lr=args.learning_rate,
warmup=args.warmup_proportion,
t_total=num_train_steps)
'''
#global_step = 0
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_dataset))
logger.info(" Batch size = %d", args.train_batch_size)
logger.info(" Num steps = %d", num_train_steps)
if args.local_rank == -1:
train_sampler = SequentialSampler(train_dataset)
eval_sampler = SequentialSampler(eval_dataset)
else:
# TODO: check if this works with current data generator from disk that relies on file.__next__
# (it doesn't return item back by index)
train_sampler = DistributedSampler(train_dataset)
eval_sampler = DistributedSampler(eval_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.train_batch_size)
#optimizer
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if
not any(nd in n for nd in no_decay)],
'weight_decay': 0.01},
{'params': [p for n, p in model.named_parameters() if any(
nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
scheduler = get_linear_schedule_with_warmup(
optimizer, 0, len(train_dataloader))
model.train()
tr_loss = 0
global_step = 0
acc = 0
train_loss = 0.0
nb_tr_examples, nb_tr_steps = 0, 0
for _ in trange(int(args.num_train_epochs), desc="Epoch"):
for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration")):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, lm_label_ids, is_next = batch
outputs = model(input_ids=input_ids, attention_mask=input_mask, token_type_ids=segment_ids,
labels=lm_label_ids, next_sentence_label=is_next)
loss = outputs.loss
'''
if n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu.
if args.gradient_accumulation_steps > 1:
loss = loss / args.gradient_accumulation_steps
if args.fp16:
optimizer.backward(outputs.loss)
else:
loss.backward()
'''
loss.backward()
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1)
optimizer.step()
scheduler.step()
model.zero_grad()
global_step += 1
'''
if (step + 1) % args.gradient_accumulation_steps == 0:
# modify learning rate with special warm up BERT uses
lr_this_step = args.learning_rate * warmup_linear(global_step / num_train_steps, args.warmup_proportion)
for param_group in optimizer.param_groups:
param_group['lr'] = lr_this_step
optimizer.step()
scheduler.step()
optimizer.zero_grad()
global_step += 1
'''
train_loss = tr_loss / global_step
perplexity = torch.exp(torch.tensor(train_loss)).item()
print("Training loss {} ".format("{:.3f}".format(train_loss)))
print("Training perplexity {}".format("{:.3f}".format(perplexity)))
logger.info("***** Running evaluation *****")
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", batch_size)
eval_loss = 0.0
acc = 0
nb_eval_steps = 0
for batch in tqdm_notebook(eval_dataloader, desc='Evaluating'):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, lm_label_ids, is_next = batch
with torch.no_grad():
outputs = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
loss = outputs.loss
eval_loss += loss.mean().item()
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
perplexity = torch.exp(torch.tensor(eval_loss)).item()
print("Evalution loss {} ".format("{:.3f}".format(eval_loss)))
print("Evalution perplexity {}".format("{:.3f}".format(perplexity)))
if not os.path.exists(output_dir):
os.makedirs(output_dir)
print("Saving model to %s" % args.output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
model_to_save.save_pretrained(args.output_dir)
tokenizer.save_pretrained(args.output_dir)
# Save a trained model
#logger.info("** ** * Saving fine - tuned model ** ** * ")
model_to_save = model.module if hasattr(model, 'module') else model # Only save the model it-self
#if args.do_train:
# model_to_save.save_pretrained(self.output_dir)
# tokenizer.save_pretrained(self.output_dir)
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
if __name__ == "__main__":
main()
```
```
######Message Error#########
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=115 error=710 : device-side assert triggered
Iteration: 0% 0/8312 [00:00<?, ?it/s]
Epoch: 0% 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/content/run.py", line 748, in <module>
main()
File "/content/run.py", line 651, in main
labels=lm_label_ids, next_sentence_label=is_next)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 955, in forward
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 962, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2468, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2264, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:115
```<|||||>You set your MLM labels to -1 when padding. You should set them to -100 if you want them to be ignored. See the [docs](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForPreTraining.forward).<|||||>I get the same error with -100 padding, but what about this line: `lm_label_ids = ([-1] + t1_label + [-1] + t2_label + [-1])`? When I change -1 to -100 there as well, my script runs fine, but I don't know whether this is correct or not.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,546 | closed | Entity level F-1 scores in run_ner.py | # 🚀 Feature request
The run_ner.py script in examples/token_classification reports evaluation metrics in terms of token-level F-1 scores (from what I can tell by examining the code). This issue is to request entity-level F-1 scores as an evaluation metric. Token-level scores evaluate the F-1 score over an input such as ["I", "work", "for", "ABC", "com", "##pany", "in", "New", "York", "City"] with the label for each token considered separately. In this example, if the ground truth labels are ["O", "O", "O", "ORG", "O", "O", "O", "GPE", "GPE", "GPE"], the labels for "New", "York" and "City" from the predictions would be compared against the ground truth separately. Entity-level scores would mark a true positive only if, for instance, all 3 tokens in the span "New York City" are labelled correctly.
## Motivation
Token level F-1 scores can be a more lenient metric compared to entity level scores since labels on sub-words/entities are considered separately.
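For reference, the `seqeval` package already computes such entity-level (span-level) scores; a tiny sketch, assuming the labels are first converted to IOB-style tag strings:
```python
from seqeval.metrics import classification_report, f1_score

# Ground truth and predictions as tag sequences (IOB scheme assumed here).
y_true = [["O", "O", "O", "B-ORG", "O", "O", "O", "B-GPE", "I-GPE", "I-GPE"]]
y_pred = [["O", "O", "O", "B-ORG", "O", "O", "O", "B-GPE", "I-GPE", "O"]]

# "New York City" counts as one entity: getting only 2 of its 3 tokens right is a miss
# at the entity level, even though most token-level labels are correct.
print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```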
## Contribution
I can also help implement this but I am not certain how I should get started with this. Any help is appreciated. | 01-12-2021 20:55:19 | 01-12-2021 20:55:19 | Oh this makes me realize this script hasn't been switched to use the datasets metrics for seqeval. This would solve the issue as it computes all scores. Will do that tomorrow.<|||||>Thank you @sgugger |
transformers | 9,545 | closed | Doc: Update pretrained_models wording | To clarify things cf. this tweet for instance https://twitter.com/RTomMcCoy/status/1349094111505211395
| 01-12-2021 20:51:45 | 01-12-2021 20:51:45 | |
transformers | 9,544 | closed | RFC: ternary assignment style in transformers code revisited | I ran this by @patrickvonplaten and he encouraged me to post it here; he also wrote a follow-up which I will post next.
---------------------------
As `transformers`'s mandate is to be super user-friendly code-wise, I wanted to ask whether the ternary assignment `a = x if z else y`, which is used very frequently in the `transformers` code, actually supports that mandate.
It is used a lot:
```
grep -Ir if src/transformers | grep else | grep = | wc -l
1043
```
As I was just writing some code where I had:
```python
if args.deepspeed:
self.model = model.module
else:
self.model = model
```
I then rewrote it in the `transformers` style of:
```python
self.model = model.module if args.deepspeed else model
```
and then I realized that my original code is way more readable and my "internal" compiler instantly gets it and moves on, whereas the ternary style is super slow to ingest. It could be just me; for me, vertical alignment helps a lot when reading code!
Surely, that's 1 line vs 4. So each of them has their pros and cons.
I'm sure it won't be too hard to find much less readable nested ternary assignment code in `transformers`, e.g.:
```python
self.device = device if framework == "tf" else torch.device("cpu" if device < 0 else "cuda:{}".format(device))
```
as compared to rewriting it as:
```python
if framework == "tf":
self.device = device
elif device < 0:
self.device = torch.device("cpu")
else:
self.device = torch.device("cuda:{}".format(device))
```
- Does it take many more lines and fits less code into the screen - hell yeah
- Is it much more readable - IMHO absolutely! Especially due to the vertical alignment
- Is it less error-prone - very likely.
I'd have even split the `elif` to clearly see that a different group of conditionals is being tested in the second part, but that's just personal style.
Binary search always beats linear search, even for a small number of items.
Another advantage of unwrapped ternary op is very noticeable during interactive debug sessions - you can't easily break or step through such one-liners, especially when the juggled values aren't variables but function calls.
I just find it somewhat inconsistent that this developer collective tries hard to avoid `map`, `filter` and `reduce` as more difficult to read, yet ternary style is used very often. To me the 3 mentioned operators are in the same category as ternary operators readability-wise since they require horizontal reading. That's just my perception of course.
This is not a critique but rather a question of whether this part of style is intentional or just came about because someone likes vertically compact code and is good at reading horizontal logic. At the end of the day, this is not a deal breaker, it just takes me much longer to get such code.
Thank you.
@LysandreJik, @sgugger | 01-12-2021 20:47:42 | 01-12-2021 20:47:42 | here is @patrickvonplaten's follow up posted with his permission (we initially did it over gist):
------------------
Thanks for the write-up!
I guess everybody has a slightly different opinion regarding ternary assignments. My opinion is:
1) I would never write an if-elif-else statement in one line (I don't think we have many lines like this in transformers)
2) I do use the ternary assignment quite a lot, but only for IMO "simple" statements like:
```do_sample = do_sample if do_sample is not None else self.config.do_sample``` or
```all_attentions = () if output_attentions else None```
I would also use it for your example above ```self.model = model.module if args.deepspeed else model``` I guess, but I wouldn't use it for more complex statements, such as this one: https://github.com/huggingface/transformers/blob/61443cd7d917ef323a799ee27bb4abc4344f0d11/src/transformers/models/t5/modeling_t5.py#L887
3) I do look quite a bit at what the coding style is in the respective file. E.g. `generation_utils.py` has a slightly different style from the `modeling_...py` files and from `trainer.py` IMO -> so I try to adapt here. *E.g.* I think Quentin likes to use statements like ```dataset = self.manual_data or self.default_data``` in datasets, which I then adapt to, but that's something we don't do in transformers (not really sure why)
4) I think code design also depends quite a bit on the coding environment someone has set up for him/herself. I'm using a rather special neovim+tmux+zsh setup, which I think shows the code differently to all the vscode users. *E.g.* I don't mind having long lines (>119) because it's nicely displayed in Vim for me, but Sylvain is not a big fan of it afaik.
But I'm happy to have some stricter rules on code design! We should probably include Sylvain and Lysandre then as well.
Some other conventions we could/should standardize:
- Only use f-strings => Sylvain really wants us to use f-strings and I also think that they are the nicest design
- stricter ordering of where docstring, helper functions and classes should be in the `modeling_....py` files
- I don't really like nested if-else statements. E.g. I prefer:
```python
if a and b:
# ...
elif a and not b:
# ...
elif not a and b:
# ...
else:
# ...
```
very much over
```python
if a:
if b:
# ...
else:
# ...
else:
if b:
# ...
else:
# ...
```
- I don't like it when variables have one-letter names. I work quite heavily with search and search-and-replace patterns in vim to understand/refactor code and I think they are not very readable => so I think it's always better to have at least somewhat understandable variable names. *e.g.* even if everybody knows the q,k,v logic in Transformers IMO, `query_states`, `value_states` and `key_states` are better names.<|||||>I personally don't think the ternary style is harder to read when it's for a very simple condition, on the contrary. Like Patrick said, I would not use it for an `if-elif-else` statement as it is then harder to read and understand, but for something like
```
if args.deepspeed:
self.model = model.module
else:
self.model = model
```
is typically the situation where I would encourage a ternary line.
The only other situation I don't use them is if the formatter gets in the way because the line is long, as it's then clearer in the unrolled version.
> Another advantage of unwrapped ternary op is very noticeable during interactive debug sessions - you can't easily break or step through such one-liners
If only used for simple tests like I mentioned above, this should be a no-problem.
I agree with Patrick on all other comments (especially the f-strings! If there is one thing my brain has trouble parsing, it's `.format(...)`). And I would extend the point about one-letter variable names to non-standard abbreviations in general, as they make the code harder to read for non-native English speakers.
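For instance, a tiny illustration of the f-string preference (just a generic snippet, not taken from the codebase):
```python
output_dir = "path/to/checkpoint"
# preferred: reads almost like the final message
print(f"Saving model checkpoint to {output_dir}")
# harder to parse at a glance
print("Saving model checkpoint to {}".format(output_dir))
```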
However, fixing current files to follow those guidelines is not a priority IMO. I would put finishing the decoupling of the models (removing things like the `Summary` class in modeling_utils) above it, and that's already low priority for me. We're not many core maintainers and there is lots to do!<|||||>Thank you for sharing your preferences
Oh, in no way was I suggesting that we need to change anything. I'm just observing the readability impact for myself and was curious to hear whether others perceive it the same way. But so far that's clearly not the case.
Having coded most of my programming life in Perl I guess I got used to aligning things vertically the way that made them most readable, since Perl has no indentation requirements, so I have always aligned assignments and branches for the fastest possible reading.
But I have no problem with the ternary assignment since that's the style of this project and you seem to prefer it, and so it's important to remain consistent.
I am totally with you on the f-strings - I was trying to keep this focused on just one subject matter, but if you'd like to expand it to other style issues, we can easily do that.
We can also close this RFC at any time, if you feel there is nothing else that needs to be said or done, as the two of you expressing that you like the ternary style is reason enough not to continue.
<|||||>Thanks for bringing this issue! Like Sylvain, I personally don't think the ternary style is harder to read. As you've said @stas00, I think it depends on the language one is used to; since I like to say that Python can nearly be "read" like prose or natural language, this is the case where it shines:
The following statement is way closer to natural language
```py
self.model = model.module if args.deepspeed else model
```
than the following
```py
if args.deepspeed:
self.model = model.module
else:
self.model = model
```
even if the latter will probably be easier to read to users coming from different languages than Python.
Regarding the `map`/`filter` and other lambda methods, this is a very personal choice but they're (usually!) harder to read than the list/dict comprehensions that can replace them. Once again, this is a very opinionated statement.<|||||>Thank you all for your feedback.
It's loud and clear that ternary ops are the norm in this project, with the recommendation to avoid nested ternary ops in future code. |
transformers | 9,543 | closed | Generating sequence from two input sequences | The code here
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_trainer.py
enables fine-tuning the `distilbart-xsum-12-6` model, which takes an input sequence and outputs an output sequence. Is there a simple way to adapt the code so it can take two input sequences (e.g. a sentence and a context sentence) and output a sequence?
It seems to me that I need to use something like this: https://huggingface.co/transformers/preprocessing.html#preprocessing-pairs-of-sentences, but I wasn't sure how to combine it with the BART code.
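For reference, a minimal sketch of what I mean by encoding the two sequences as a pair (the model name and variable names here are just placeholders):
```python
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
sentence = "The sentence I want the model to focus on."
context = "An additional context sentence."
# encode both as a single pair input; the tokenizer inserts its separator tokens between them
batch = tokenizer(sentence, context, truncation=True, padding="max_length",
                  max_length=128, return_tensors="pt")
```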
| 01-12-2021 17:13:25 | 01-12-2021 17:13:25 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead? You'll get more answers there.
Thanks! |
transformers | 9,542 | closed | Is the GPT-2 forward too different from Bert or RoBerta? | I am using some Transformers (Bert, RoBerta, etc.) in my project.
When including the `GPT-2` as shown below:
```python
from transformers import GPT2Tokenizer, GPT2Model
import torch
# inits model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = GPT2Model.from_pretrained('gpt2')
# inputs
inputs = tokenizer.encode(text="Hello, my dog is cute", max_length=12, padding="max_length",
truncation=True)
#[15496, 11, 616, 3290, 318, 13779, 50257, 50257, 50257, 50257, 50257, 50257]
# input_ids tensor and attention mask
features = torch.tensor([inputs])
# tensor([[15496, 11, 616, 3290, 318, 13779, 50257, 50257, 50257, 50257,
# 50257, 50257]])
attention_mask = (features < 50257).int()
# tensor([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]], dtype=torch.int32)
# outputs
outputs = model(
input_ids=features,
attention_mask=attention_mask
)
```
I have had the following error:
```python
IndexError Traceback (most recent call last)
<ipython-input-31-763ba5835cf8> in <module>()
----> 1 outputs = model(input_ids=features, attention_mask=attention_mask)
4 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions, output_hidden_states, return_dict)
679
680 if inputs_embeds is None:
--> 681 inputs_embeds = self.wte(input_ids)
682 position_embeds = self.wpe(position_ids)
683 hidden_states = inputs_embeds + position_embeds
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) -> str:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
IndexError: index out of range in self
``` | 01-12-2021 15:49:04 | 01-12-2021 15:49:04 | the `IndexError` is from the padding token being added. GPT2 doesn't have a padding token. You should be able to manually set that input_id to 0 (or any other valid input id) and then rely on the attention mask to ignore those positions. <|||||>> the `IndexError` is from the padding token being added. GPT2 doesn't have a padding token. You should be able to manually set that input_id to 0 (or any other valid input id) and then rely on the attention mask to ignore those positions.
But I had already added the `PAD` token which receives the `token_id = 50257`:
```
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
```<|||||>that tells the tokenizer to add a token and creates the new token id, but it doesn't modify the embedding layer of the model. the new token id is still invalid for the embedding layer of GPT-2 (which does not include a pad token). the reason it works for roberta and bert is b/c they were trained with pad tokens and therefore have entries in their embedding layers for that token. you want to do something like,
`inputs[inputs == 50257] = 0` (assignment, not comparison)<|||||>Hi! In the [documentation of `add_special_tokens`](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=resize_token_embeddings#transformers.tokenization_utils_base.SpecialTokensMixin.add_special_tokens), you'll see in the sample code the following line:
```py
model.resize_token_embeddings(len(tokenizer))
```
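Putting the two steps together, a minimal sketch based on the snippet from the issue (keeping the same checkpoint and max length) would be:
```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

# add the pad token *and* grow the model's embedding matrix to match the new vocab size
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
model.resize_token_embeddings(len(tokenizer))

inputs = tokenizer("Hello, my dog is cute", max_length=12, padding="max_length",
                   truncation=True, return_tensors="pt")
outputs = model(**inputs)
```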
as @galtay mentions, you need to resize the embedding layer when adding tokens to the tokenizer, otherwise the model will not know that its embedding matrix has been resized.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,541 | closed | Fix fill mask pipeline slow test using deprecated argument | Uses `topk` which was deprecated and removed in favor of `top_k` | 01-12-2021 15:47:32 | 01-12-2021 15:47:32 | |
transformers | 9,540 | closed | bounded by compute, resuming training from the time the model is killed | Hi
I have limited access to GPUs, with limited hours. I am using finetune_trainer.py; is there a way I can resume training from the point where the job is killed? Could you assist me and give me some hints?
thanks | 01-12-2021 15:13:13 | 01-12-2021 15:13:13 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,539 | closed | LayoutLM Config | The `LayoutLM` configuration inherits from `BertConfig`, which should be fixed.
This was failing the `test_parents_and_children_in_mappings` test as it was placed after the `BertConfig` in the `AutoModelForSequenceClassification` test. | 01-12-2021 14:43:58 | 01-12-2021 14:43:58 | |
transformers | 9,538 | closed | fix BlenderbotSmallTokenizer | # What does this PR do?
`BlenderbotSmallTokenizer` returns `token_type_ids` but those are not needed by the model. This PR fixes the tokenizer to not return `token_type_ids` | 01-12-2021 14:30:38 | 01-12-2021 14:30:38 | |
transformers | 9,537 | closed | BertForTokenClassification save | How can I save the BertForTokenClassification model?
| 01-12-2021 13:41:42 | 01-12-2021 13:41:42 | You can save a model using the `.save_pretrained()` method. So given that your model is called `model`, you can save it as follows:
`model.save_pretrained(path_to_directory) `<|||||>Thank you for your reply. I already tried it. But got an error like this.
```
'BertForTokenClassification' object has no attribute 'save_pretrained'
```
<|||||>Can you share some more code about how you created the model?<|||||>```
from pytorch_pretrained_bert import BertForTokenClassification
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=len(tag2idx))
```
After that I fine-tune it. Then I want to save this model.<|||||>OK, I got it. I can save it with torch. Thank you |
transformers | 9,536 | closed | [WIP][EncoderDecoder] Fix label behavior | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-12-2021 13:39:50 | 01-12-2021 13:39:50 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,535 | closed | strange output of fast/slow tokenizers | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.6.6
- PyTorch version (GPU?): 1.5.0+cpu (False)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
@thomwolf
@Narsil
-->
## Information
First of all, this might be a problem of the fast tokenizer, but I am not 100% sure, because the bug occurs when I use `AutoTokenizer`, for which the code is in `transformers`. I didn't check the usage directly on the tokenizer's object.
Second, the problematic string is not meaningful. I am glad to see the fast tokenizer is available for XLM-Roberta, and I prefer to make sure it works as expected. So I compared the results from the slow/fast tokenizers in order to make sure the results are the same.
However, with this string (without the leading/trailing single quotes)
'=LLC-s nmrcsss mtiaiol!@"ccc technooay @"ccc"@"ccc'
The result is different, you can see the output below.
Fast tokenizer
```
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 238, 10060, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
```
Slow tokenizer
```
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 10060, 238, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
```
The encodings at the end are
`... 58, 238, 10060, 2` and `... 58, 10060, 238, 2` respectively.
Furthermore, if I remove any character from it, the results become the same of these 2 tokenizers.
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoTokenizer
tokenizer_fast = AutoTokenizer.from_pretrained("xlm-roberta-large", use_fast=True)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large", use_fast=False)
# This string cause problem.
s = '=LLC-s nmrcsss mtiaiol!@"ccc technooay @"ccc"@"ccc'
o1 = tokenizer_fast.batch_encode_plus([s])
o2 = tokenizer.batch_encode_plus([s])
if not o1 == o2:
print('output are different!')
print(f'string: {s}')
print(o1)
print(o2)
# check substring
s2 = s
m = 0
while True:
m += 1
if m > 100000:
break
n = len(s2)
for i in range(0, n):
# substring of one char removed
s_temp = s2[0:i] + s2[i+1:]
o1 = tokenizer_fast.batch_encode_plus([s_temp])
o2 = tokenizer.batch_encode_plus([s_temp])
if not o1 == o2:
print(s_temp)
print('-------------------')
s2 = s_temp
break
if len(s2) == n:
print('no substring with 1 char fewer cause problem, stopped.')
break
```
Output:
```
output are different!
string: =LLC-s nmrcsss mtiaiol!@"ccc technooay @"ccc"@"ccc
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 238, 10060, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 10060, 238, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
no substring with 1 char fewer cause problem, stopped.
Process finished with exit code 0
```
## Expected behavior
The results are expected to be the same.
| 01-12-2021 13:32:45 | 01-12-2021 13:32:45 | @chiapas , We will review this issue and propose code changes soon.<|||||>@chiapas , I executed the same code in GoogleColab. I am getting the same tokens for both of them. Here is the output.
`{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 238, 10060, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 238, 10060, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
no substring with 1 char fewer cause problem, stopped.`
Can you please give me more details about the issue ?<|||||>Looks like this issue doesn't occurs if I upgrade my python version to `3.6.9` from `3.6.6`. I am not sure why there is a problem when this code sample is executed with python `3.6.6`, but since it is quite old version, I won't ask further investigation, and this issue could be closed. |
transformers | 9,534 | closed | Need clarification in /examples/research_projects/rag/use_own_knowledge_dataset.py | Please explain the difference between:
'facebook/dpr-ctx_encoder-single-nq-base' and 'facebook/dpr-ctx_encoder-multiset-base'
Which datasets are the 2 models trained on? | 01-12-2021 11:12:03 | 01-12-2021 11:12:03 | Indeed, the models have no model card. Maybe @lhoestq can help you out! <|||||>Hi !
- 'facebook/dpr-ctx_encoder-single-nq-base' is the DPR context encoder model trained on NQ alone
- 'facebook/dpr-ctx_encoder-multiset-base' is the DPR context encoder model trained on the multiset/hybrid dataset defined in the paper. It includes Natural Questions, TriviaQA, WebQuestions and CuratedTREC<|||||>Thanks for clarifying @lhoestq |
transformers | 9,533 | closed | xla_spawn.py crashes when training on TPU V3-32 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Google Cloud debian-9-torch-xla-v20201215
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?):
- Using GPU in script?: NO; using TPUS
- Using distributed or parallel set-up in script?:
### Who can help
@sgugger @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
I am using an Albert base.
The problem arises when using:
* [x] the official example scripts: (give details below)
Using examples/xla_spawn.py together with run_mlm.py, it crashes when we try to use it with a v3-32. We're supposed to set num_cores to either 1 or 8, but in our case we have 32 cores and it raises an error. We've also tried leaving that variable at 1 or 8, but in both cases it raises errors:
```
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
Exception in device=TPU:0: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:1229 : Check failed: session.Run({tensorflow::Output(result, 0)}, &outputs) == ::tensorflow::Status::OK() (Internal: From /job:tpu_worker/replica:0/t
ask:0:
2 root error(s) found.
(0) Internal: Invalid system configuration: 2x2 host topology with 0 missing hosts, but 1 hosts in total.
[[{{node configure_distributed_tpu/_0}}]]
[[ConfigureDistributedTPU_G3]]
(1) Internal: Invalid system configuration: 2x2 host topology with 0 missing hosts, but 1 hosts in total.
[[{{node configure_distributed_tpu/_0}}]]
0 successful operations.
0 derived errors ignored. vs. OK)
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::XrtComputationClient::InitializeAndFetchTopology(std::string const&, int, std::string const&, tensorflow::ConfigProto const&)
xla::XrtComputationClient::InitializeDevices(std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
xla::XrtComputationClient::XrtComputationClient(xla::XrtComputationClient::Options, std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyCFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyObject_GenericGetAttrWithDict
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyEval_EvalCode
PyRun_StringFlags
PyRun_SimpleStringFlags
Py_Main
main
__libc_start_main
*** End stack trace ***
Traceback (most recent call last):
File "transformers/examples/xla_spawn.py", line 85, in <module>
main()
File "transformers/examples/xla_spawn.py", line 81, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn
start_method=start_method)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 112, in join
(error_index, exitcode)
Exception: process 0 terminated with exit code 17
```
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): MLM
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Initialize a TPU V3-32 and when running xla_spawn.py, set the number of cores to either 32, 8 or 1. In all three cases it raises an error.
## Expected behavior
There should not be any problem in setting the number of cores to the number of TPU cores we actually have. It really does not make sense to be able to train only with either 1 core or 8 cores... | 01-12-2021 10:48:38 | 01-12-2021 10:48:38 | Hmmm this may be because a TPU v3-32 regroups several TPU chips, as the error here seems to imply:
```
Internal: Invalid system configuration: 2x2 host topology with 0 missing hosts, but 1 hosts in total.
```
@sgugger can you confirm this is the source of the issue? Do you know the status of the Trainer/xla_spawn on TPU pods?<|||||>@LysandreJik If that's the source of the issue, what would be the procedure to solve it?<|||||>From the stack trace it doesn't look like it even gets to the training script, so I think there might be something wrong in your distributed TPU setup. Are you able to run another script (coming from official torch XLA for instance?) on this setup?<|||||>What do you mean by setup??
```{bash}
XRT_TPU_CONFIG="tpu_worker;0;10.157.150.13:8470"
```
This is the configuration parameter I set before calling the training script, which starts like this:
```{bash}
python transformers/examples/xla_spawn.py --num_cores 8 \
transformers/examples/language-modeling/run_mlm.py \
```
I also tried setting the --num_cores to 1 (it only accepts 1 or 8) in the second code snippet.
With v3-8 this setup works correctly, I don't know if you mean this by setup...<|||||>I mean the setup you are using. It can't be the same setup for one TPU (8cores) and a TPU pod (for the 32 cores). The second requires to launch different machines. That's why I was asking if you could run another example from someone else on your TPU v3-32.
Also, the launcher scrip `xla_spawn` only works for one TPU, not a TPU pod as fa as I know, so you will need to launch the script in a different way.<|||||>@sgugger does ```xla_spawn ``` not support TPU pod? As many issues in this repo are related to TPU pod, so I have thought ```xla_spawn``` also support it. Do you know any examples of using TPU pod?<|||||>Sorry I didn't write it well. I meant the launcher script `xla_spawn` has only been tested for one TPU, not a TPU pod as far as I know. So you may need to launch the script in a different way.
I am not aware of anyone launching any of the example scripts on a TPU pod successfully, so I don't know if they work or not. <|||||>I see.
That information should be added to the document if it does not.
By the way, TFTrainer support TPU pod? I think it does, but I have not tested yet.<|||||>Same thing, it has not been tested. We don't have resources setup to test for more than a single TPU (so 8 cores).<|||||>I understand.
Thank you for answering.
<|||||>Okay, so I'd need to change the setup for a TPU pod then... I don't understand why all this complication to go from 8 cores to 32 cores actually, I know that's on Google's side, but I don't think it makes sense to complicate things so much to be able to train on a 32 cores TPU. As I understood, not only the setup must be changed, but also the script to launch the xla, right? I mean, the xla_spawn.py from Transformers is thought for 1 TPU, and it may crash on multiple TPU nodes?<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>I am having the same issue (xla_spawn works fine with 8 core TPU but fails with TPU pods v3-32 and above). Is there a way to utilize TPU pods with the transformers library? <|||||>Hi @sgugger, I am running xla_spawn, can I know is this correct way to run the hugginface examples on TPU pods like v3-32? Thanks
```
TPU_NAME=tpu-v3-32
python3 -m torch_xla.distributed.xla_dist \
--tpu=${TPU_NAME} --restart-tpuvm-pod-server -- \
python3 /transformers/examples/pytorch/xla_spawn.py --num_cores 8 /pytorch/text-classifications/run_glue.py \
--model_name_or_path bert-base-cased \
--dataset_name SetFit/mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--run_name mnli_v3-32_bs-64_lr-2e-5-bert \
--output_dir /tmp/mrpc-bert/ \
--overwrite_output_dir
```
Where I use `python3 xla_dist` to wrap the `python3 xla_spawn.py --num_cores 8`<|||||>The examples can't be launched directly on TPU pods. cc @muellerzr who has worked on them with accelerate and can share how to run an example on a TPU pod.<|||||>Hi @sgugger Thanks:). I do successfully run above command on TPU pods (V3-32 and V4-64), see the [wandb results](https://wandb.ai/jianguozhang/huggingface/reports/mrpc-for-text-classification--VmlldzozMzU2NTE1?accessToken=hce0jseir4d3x32cqbocha2936xes1r1hbeupgijjy6l2lhujtd0y2577xzcdn2c). But i am not sure it whether it is correct way to use the commands as the training loss is much higher than that on GPUs, and V4-64 shows lower running speed than v3-32.
Hi @muellerzr, can you show an example that how to run huggingface torch_xla examples on TPU pods? Thanks:) |
transformers | 9,532 | closed | [Blenderbot] Fix Links | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9527
Credit goes to @LysandreJik for finding the fix.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-12-2021 10:33:12 | 01-12-2021 10:33:12 | Ok now's really the time to remove those hard-coded links once and for all, i think<|||||>Indeed! It's on my todo for the week. |
transformers | 9,531 | closed | Seq2Seq include custom glossary/dictionary | Hello,
Is it possible to include a custom glossary/dictionary while fine-tuning the Seq2Seq model for a specific domain?
So I basically want to ensure that specific words are always translated as they are in the glossary/dictionary.
Thanks for helping out.
coodingnoobneedshelp | 01-12-2021 10:23:45 | 01-12-2021 10:23:45 | Hi @codingnoobneedshelp
Not sure what exactly this means,
What do you mean by
> custom glossary/dictionary
and
> ensure that specific words are always translated as they are in the glossary/dictionary.<|||||>Thanks for the answer. Let me try to clarify this.
From Google: A glossary is a custom dictionary to consistently translate the customer's domain-specific terminology. This typically involves specifying how to translate a named entity.
For example, a Person name: "Peter Eisen" must translate to "Peter Eisen." There are some cases where the model would translate this to "Peter Iron".
So I basically want to have a dictionary that tells the model that "Peter Eisen" should always be "Peter Eisen".
Does anyone know how I can achieve that?
Thanks<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,530 | closed | Data format for TFTrainer for TFGpt2 | ```train_dataset``` in TFTrainer needs ```(features, labels)```, but TFGPT2 does not need labels (as documented in TFGPT2LMHeadModel).
May I know the data format for TFTrainer for TFGPT2?
I have tried this code, but it does not work.
Thanks.
```
def gen_train():
for ex in transformed_ds[datasets.Split.TRAIN]:
yield (
{
'input_ids': ex["input_ids"]
},
{
'labels': ex["input_ids"]
}
)
train_types = (
{
"input_ids": tf.int32
},
{
"labels": tf.int32
},
)
train_shapes = (
{
"input_ids": tf.TensorShape([None])
},
{
"labels": tf.TensorShape([None])
},
)
train_ds = tf.data.Dataset.from_generator(gen_train, train_types, train_shapes)
if train_ds is not None:
train_ds = train_ds.apply(tf.data.experimental.assert_cardinality(len(ds[datasets.Split.TRAIN])))
``` | 01-12-2021 08:59:44 | 01-12-2021 08:59:44 | Hello!
At first, can you share your Transformers and TensorFlow version please?<|||||>@jplu
I use Transformers 4.1.1 and Tensorflow 2.4 in GCP VM, but I can change versions. I will train this model with v3-8 and v3-32.
My code and command are shown below. It is largely based on run_clm.py and run_tf_text_classification.py.
Thanks.
```
import logging
import os
from dataclasses import dataclass, field
from typing import Dict, Optional
import datasets
import numpy as np
import tensorflow as tf
from transformers import (
AutoConfig,
AutoTokenizer,
TFAutoModel,
GPT2Config,
GPT2Tokenizer,
BertTokenizer,
TFGPT2LMHeadModel,
HfArgumentParser,
PreTrainedTokenizer,
TFTrainer,
TFTrainingArguments,
)
def get_tfds(
train_file: str,
tokenizer: PreTrainedTokenizer,
max_seq_length: Optional[int] = None,
):
files = {}
if train_file is not None:
files[datasets.Split.TRAIN] = [train_file]
ds = datasets.load_dataset("csv", data_files=files)
features_name = 'content'
transformed_ds = {}
for k in files.keys():
transformed_ds[k] = ds[k].map(
lambda example: tokenizer.batch_encode_plus(
example[features_name],
truncation=True,
max_length=max_seq_length
),
batched=True,
)
def gen_train():
for ex in transformed_ds[datasets.Split.TRAIN]:
yield (
{
'input_ids': ex["input_ids"]
},
{
'labels': ex["input_ids"]
}
)
train_types = (
{
"input_ids": tf.int32
},
{
"labels": tf.int32
},
)
train_shapes = (
{
"input_ids": tf.TensorShape([None])
},
{
"labels": tf.TensorShape([None])
},
)
train_ds = tf.data.Dataset.from_generator(gen_train, train_types, train_shapes)
if train_ds is not None:
train_ds = train_ds.apply(tf.data.experimental.assert_cardinality(len(ds[datasets.Split.TRAIN])))
return train_ds
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
Using `HfArgumentParser` we can turn this class
into argparse arguments to be able to specify them on
the command line.
"""
train_file: str = field(default=None, metadata={"help": "The path of the training file"})
dev_file: Optional[str] = field(default=None, metadata={"help": "The path of the development file"})
test_file: Optional[str] = field(default=None, metadata={"help": "The path of the test file"})
max_seq_length: int = field(
default=128,
metadata={
"help": "The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
use_fast: bool = field(default=False, metadata={"help": "Set this flag to use fast tokenization."})
# If you want to tweak more attributes on your tokenizer, you should do it in a distinct script,
# or just modify its tokenizer_config.json.
cache_dir: Optional[str] = field(
default=None,
metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
)
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
# tokenizer
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
#tokenizer.add_special_tokens({'pad_token': '[PAD]'})
# data
train_dataset = get_tfds(
train_file=data_args.train_file,
tokenizer=tokenizer,
max_seq_length=data_args.max_seq_length,
)
# config
config_kwargs = {
"cache_dir": model_args.cache_dir,
#"use_auth_token": True if model_args.use_auth_token else None,
}
config = GPT2Config.from_pretrained(model_args.model_name_or_path, **config_kwargs)
# model
with training_args.strategy.scope():
model = TFGPT2LMHeadModel.from_pretrained(
model_args.model_name_or_path,
from_pt=bool(".bin" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
# model.resize_token_embeddings(len(tokenizer))
# Initialize our Trainer
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=train_dataset
)
# Training
if training_args.do_train:
model_path = (
model_args.model_name_or_path
if (model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path))
else None
)
train_result = trainer.train()
trainer.save_model(model_path) # Saves the tokenizer too for easy upload
#tokenizer.save_pretrained(training_args.output_dir)
output_train_file = os.path.join(training_args.output_dir, "train_results.txt")
if trainer.is_world_process_zero():
with open(output_train_file, "w") as writer:
logger.info("***** Train results *****")
for key, value in sorted(train_result.metrics.items()):
logger.info(f" {key} = {value}")
writer.write(f"{key} = {value}\n")
# Need to save the state, since Trainer.save_model saves only the tokenizer with the model
trainer.state.save_to_json(os.path.join(training_args.output_dir, "trainer_state.json"))
```
```
python3 run_tftrainer.py \
--train_file datasets.csv \
--model_name_or_path gpt2 \
--do_train \
--output_dir model \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--logging_steps 10 \
--save_steps 10 \
--overwrite_output_dir \
--max_seq_length 128
```
<|||||>Ok, thanks a lot for sharing this!
If you are using the 4.1.1 release of Transformers, the `TFGPT2LMHeadModel` has a `labels` argument so the problem might come from elsewhere. The other thing to know is that it is currently not possible to train an LM from scratch with TF until the next release (coming very soon), only fine tuning is possible for now.
What is the error you get exactly?<|||||>I see.
The error message is shown below. I have tried tuning many things, but the loss shows NaN at best.
Will it take more than a week for the next release? If so, I will use pytorch/xla.
```
All the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.
Traceback (most recent call last):
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/eager/context.py", line 2102, in execution_mode
yield
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 758, in _next_internal
output_shapes=self._flat_output_shapes)
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 2610, in iterator_get_next
_ops.raise_from_not_ok_status(e, name)
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [65] and element 1 had shape [31].
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
[[RemoteCall]] [Op:IteratorGetNext]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_tftrainer.py", line 198, in <module>
train_result = trainer.train()
File "/home/kykim/.local/lib/python3.6/site-packages/transformers/trainer_tf.py", line 548, in train
for step, batch in enumerate(train_ds):
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py", line 649, in __next__
return self.get_next()
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py", line 694, in get_next
self._iterators[i].get_next_as_list_static_shapes(new_name))
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py", line 1474, in get_next_as_list_static_shapes
return self._iterator.get_next()
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py", line 581, in get_next
result.append(self._device_iterators[i].get_next())
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 825, in get_next
return self._next_internal()
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 764, in _next_internal
return structure.from_compatible_tensor_list(self._element_spec, ret)
File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/eager/context.py", line 2105, in execution_mode
executor_new.wait()
File "/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/eager/executor.py", line 67, in wait
pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [65] and element 1 had shape [31].
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
[[RemoteCall]]
2021-01-12 19:57:25.672632: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
```<|||||>I know that `tf.data.Dataset.from_generator` has some issues on TPUs, can you rewrite your data processing function to use `tf.data.Dataset.from_tensor_slices` instead?
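For illustration, a minimal sketch of that (assuming the examples are padded to a fixed `max_seq_length`, which also avoids the "different shapes" batching error above; function and variable names are just placeholders):
```python
import tensorflow as tf

def build_tf_dataset(encodings):
    # encodings["input_ids"] / encodings["attention_mask"] are lists of equally long
    # (padded) sequences, e.g. from
    # tokenizer(texts, truncation=True, padding="max_length", max_length=128)
    features = {
        "input_ids": tf.constant(encodings["input_ids"], dtype=tf.int32),
        "attention_mask": tf.constant(encodings["attention_mask"], dtype=tf.int32),
    }
    # for causal LM fine-tuning the labels are simply the input ids again
    labels = tf.constant(encodings["input_ids"], dtype=tf.int32)
    return tf.data.Dataset.from_tensor_slices((features, labels))
```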
Also, to be sure that your data are properly formatted you can add an assert to check this, in order to know whether the problem comes from there or not. |
transformers | 9,528 | closed | Print All Tokens Over a Certain Probability Threshold: T5 | This works with GPT-2, but not with T5. Is it possible to adapt this to make T5 work?
```
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelWithLMHead.from_pretrained("t5-base")
input_txt = "Hello, my name is Sylvain."
inputs = tokenizer(input_txt, return_tensors='pt')
outputs = model(**inputs)
predictions = F.softmax(outputs[0], dim=-1)
thresh = 1e-2
vocab_size = predictions.shape[-1]
idxs = torch.arange(0, vocab_size)[predictions[0][-1] >= thresh]
print(tokenizer.convert_ids_to_tokens(idxs))
```
`ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds` | 01-12-2021 02:04:20 | 01-12-2021 02:04:20 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Have you checked the [T5 docs](https://huggingface.co/transformers/model_doc/t5.html) regarding the `decoder_inputs`? Are they unclear?
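In case it helps, a minimal sketch of adapting the snippet above for T5 (reusing `model` and `inputs` from your code; this only looks at the distribution for the first generated token) could be:
```python
import torch
import torch.nn.functional as F

# T5 is an encoder-decoder model, so a plain forward pass also needs decoder inputs;
# feeding just the decoder start token yields the logits for the first output position
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
outputs = model(**inputs, decoder_input_ids=decoder_input_ids)
predictions = F.softmax(outputs[0], dim=-1)
```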
Thanks!<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,527 | closed | [BlenderbotSmallTokenizer] Cannot download tokenizer | When running:
```python
from transformers import BlenderbotSmallTokenizer
tok = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")
```
the command fails with the error
```~/python_bin/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
1894 # Instantiate tokenizer.
1895 try:
-> 1896 tokenizer = cls(*init_inputs, **init_kwargs)
1897 except OSError:
1898 raise OSError(
~/python_bin/transformers/models/blenderbot_small/tokenization_blenderbot_small.py in __init__(self, vocab_file, merges_file, bos_token, eos_token, unk_token, pad_token, **kwargs)
107
108 with open(vocab_file, encoding="utf-8") as vocab_handle:
--> 109 self.encoder = json.load(vocab_handle)
110 self.decoder = {v: k for k, v in self.encoder.items()}
111 with open(merges_file, encoding="utf-8") as merges_handle:
/usr/lib/python3.7/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
294 cls=cls, object_hook=object_hook,
295 parse_float=parse_float, parse_int=parse_int,
--> 296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
297
298
/usr/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
346 parse_int is None and parse_float is None and
347 parse_constant is None and object_pairs_hook is None and not kw):
--> 348 return _default_decoder.decode(s)
349 if cls is None:
350 cls = JSONDecoder
/usr/lib/python3.7/json/decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
/usr/lib/python3.7/json/decoder.py in raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
This is strange since `"facebook/blenderbot_small-90M"` is just a copy of `"facebook/blenderbot-90M"` which works:
```python
from transformers import BlenderbotSmallTokenizer
tok = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot-90M")
``` | 01-12-2021 00:59:51 | 01-12-2021 00:59:51 | Is `blenderbot-small` a valid model_type?
Or maybe this file https://huggingface.co/facebook/blenderbot_small-90M/blob/main/tokenizer_config.json is an issue?<|||||>yeah, `blenderbot-small` is valid. One can download both the model and the config correctly:
```python
from transformers import BlenderbotSmallModel
model = BlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M")
```
I think it has something to do with the tokenizers `vocab.json` file. But it's 1-to-1 the same file as in `"facebook/blenderbot-90M"` which can correctly be loaded...I'll have to check in more detail in the next days. There is probably a problem with the BlenderbotSmallTokenizer |
transformers | 9,526 | open | Siamese Multi-depth Transformer-based Hierarchical Encoder | # 🌟 New model addition
## Model description
Recently Google published a paper titled ["Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching"](https://arxiv.org/abs/2004.12297). According to the paper, for long-form document matching the SMITH model outperforms the previous state-of-the-art models, including hierarchical attention, multi-depth attention-based hierarchical recurrent neural networks, and BERT.
I feel it will add value to the already awesome transformers model collection :slightly_smiling_face:
<!-- Important information -->
## Open source status
* [X] the model implementation is available: https://github.com/google-research/google-research/tree/master/smith
* [X] the model weights are available: [SMITH-WP+SP model checkpoint](http://storage.googleapis.com/gresearch/smith_gwikimatch/smith_wsp_pretrain_ckpt_opensource.zip) and [GWikiMatch data](http://storage.googleapis.com/gresearch/smith_gwikimatch/gwikimatch_open_source.zip)
* [X] who are the authors: https://github.com/yangliuy, https://github.com/eladeban
| 01-11-2021 21:19:30 | 01-11-2021 21:19:30 | Linking Haystack issue https://github.com/deepset-ai/haystack/issues/719<|||||>Frequent user of hugging face here, I'm a fan of this new publication and would love to see it implemented. Commenting here for the GitHub algorithm to ++<|||||>Hi all, rather than waiting for the implementation in huggingface. Is there a simple way to utilize the pretrained model from the smith repo on our own dataset (to generate document embedding)? |
transformers | 9,525 | closed | mBART is not saving (learned) position embeddings | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux
- Python version: 3.8.2
- PyTorch version (GPU?): 1.4.0 (with gpu)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
I am fine-tuning mBART-large on MLSUM (Spanish, and also Russian). However, I noticed two things:
- The saved checkpoints are not saving the position embeddings (`BartLearnedPositionalEmbedding`, for both encoder and decoder).
- Due to this, ROUGE scores on the validation set when evaluating on loaded checkpoints are lower than those which were shown during training.
I noticed that the mBART config includes:
```
keys_to_never_save = [
"model.encoder.embed_positions.weight",
"model.decoder.embed_positions.weight",
]
```
and likewise for `keys_to_ignore_on_load_missing`. I suppose this was done in response to issue [#7296](https://github.com/huggingface/transformers/issues/7296). This would be fine if the mBART position embeddings were static, but they seem to be learned. The [mbart configuration](https://github.com/huggingface/transformers/blob/master/src/transformers/models/mbart/configuration_mbart.py) shows `static_position_embeddings = False`.
I can load and save the mBART model correctly if I set the following before fine-tuning:
```
mbart_model._keys_to_ignore_on_load_missing = None
mbart_model._keys_to_ignore_on_save = None
```
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
mbart_tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
mbart_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-cc25")
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Abstractive summarization.
## To reproduce
Steps to reproduce the behavior:
1. Load the model: `mbart_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-cc25")`
2. Fine-tune the mBART model and use `load_best_model_at_end=True`.
3. Save and load the fine-tuned model, and verify that they are different (and texts generated from them are different); a sketch of such a check is shown right after this list.
4. Setting `mbart_model._keys_to_ignore_on_load_missing = None` and `mbart_model._keys_to_ignore_on_save = None` fixes the problem (the full model is saved, and the checkpoints are correct).
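A minimal sketch of such a check (run under transformers 4.1.1; the attribute path follows the keys listed above, and `"mbart-checkpoint"` is just a placeholder directory):
```python
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-cc25")
model.save_pretrained("mbart-checkpoint")
reloaded = AutoModelForSeq2SeqLM.from_pretrained("mbart-checkpoint")

# With the buggy behavior the learned position embeddings are skipped on save
# and re-initialized on load, so this prints False; after the fix it prints True.
print(torch.allclose(model.model.encoder.embed_positions.weight,
                     reloaded.model.encoder.embed_positions.weight))
```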
## Expected behavior
The model's position embeddings and generated outputs should be exactly the same after saving it and loading from disk.
| 01-11-2021 21:11:31 | 01-11-2021 21:11:31 | Hey @juand-r,
Thanks for the issue! I think this problem should be solved by now. We have done some major refactoring for MBart and removed the `_keys_to_ignore_on_save` for MBart. Can you check whether the error persists on current master? We will do a release tomorrow probably so that the fix should be included in the next pip version :-) <|||||>Thanks, @patrickvonplaten !
I just checked the error is gone when using version 4.2.1.<|||||>Hey @juand-r ,
I am also trying to fine-tune mBART on a non-English corpus. Is there any sample script that I can follow for this task? <|||||>Hi @ozcangundes,
This could be helpful:
https://github.com/GEM-benchmark/GEM-baseline-models/blob/main/examples/mbart_large_mlsum_ru.ipynb
> Hey @juand-r ,
>
> I am also trying to fine tune mBART for some non English corpus. Is there any sample script that I can follow for this task?
|
transformers | 9,524 | closed | Refactor `prepare_seq2seq_batch` | # What does this PR do?
This PR refactors the logic of `prepare_seq2seq_batch` which is roughly:
1. tokenize inputs
2. make some changes to prepare the tokenizer for target encoding
3. tokenize targets
4. revert the changes made in 2 for the next tokenization
by introducing a new context manager that is in charge of 2 and 4 (the method is then the same for all tokenizers, with some small exceptions).
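As an illustration, usage could look roughly like the sketch below (the context manager's name, `as_target_tokenizer`, is an assumption on my side since this description doesn't pin it down):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

inputs = tokenizer(["translate English to German: My dog is cute"], return_tensors="pt")
# Steps 2 and 4 happen automatically when entering/exiting the context manager.
with tokenizer.as_target_tokenizer():
    labels = tokenizer(["Mein Hund ist süß"], return_tensors="pt")
inputs["labels"] = labels["input_ids"]
```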
The end plan is to use this new context manager in the examples and deprecate `prepare_seq2seq_batch` before removing it in a future major version: it's as if we had a `prepare_text_classification_batch`, `prepare_token_classification_batch`... and so on for each task, and it also doesn't allow the preprocessing to be done once and for all (since it's used to tokenize text on the fly right now).
This is for future development; the PR in itself is 100% backward compatible. | 01-11-2021 20:42:35 | 01-11-2021 20:42:35 | Could you run the slow tests of the models concerned?<|||||>For reference, some discussions on why the method was added in the first place:
https://github.com/huggingface/transformers/issues/6080
https://github.com/huggingface/transformers/pull/6103<|||||>In general, I agree very much with your approach here and I like the idea of a context manager. From a user perspective for Seq2Seq models these are the common practices IMO:
**Inference in 99% of the time**: You use generate() in 99% of the time so you tokenize only your input_ids exactly the same way you'd do it for other models (like gpt2)
**Inference in 1% of the time**: In case you just want to do a single forward pass, you will have to input `input_ids` and `decoder_input_ids` -> this is usually only for special cases so one can reasonably expect the user to know how the model works. However in this case we either do need the context-manager or a `prepare_seq2seq_batch` method since the start token for the decoder is very much different from the one of the encoder. This is actually such as special case that we don't even need any magic functions for that, but just assume that the user manually prepends the `decoder_start_token_id` to `decoder_input_ids`.
**training**: All seq2seq models usually only require the `labels` and `input_ids` and then the `decoder_input_ids` are automatically generated, with a method that just shifts the `labels` one to the right and adds the `decoder_start_token_id`. So far there is not a single seq2seq model that does not have this mechanism and thus all seq2seq models can be trained by only passing `input_ids` and `labels`. So I think we can assume that all seq2seq models only require `input_ids` and `labels`. => this makes them then also very similar to how BERT-like models are trained since they also just need `input_ids` and `labels`. Here the `prepare_seq2seq_batch` method is useful because it tokenizes both inputs at once and has some additional features like `src_lang` and `tgt_lang` (useful for MBart and Marian only though) and `target_max_length`, etc....but as said in the issues referenced above and mentioned in this PR as well, I do think that those are some "magic" functionalities that should not have their origin in `src/transformers` but better in `examples` -> so I think we agree here @sgugger .<|||||>So only thing, I'm a bit worried about is that some users got very accustomed to the `prepare_seq2seq_batch` method so that they won't be too happy about removing it (especially since it's also doing all the `max_length` and `max_target_length` automatically.
But I am very much in favor of this change<|||||>Failure is independent, due to the new tokenizers release, so merging.
transformers | 9,523 | closed | Documentation's linked example script does not exist anymore | Hello,
I'm looking at the examples provided in the [documentation](https://huggingface.co/transformers/v2.2.0/examples.html#abstractive-summarization) to fine-tune a model on a summarization task. It refers to the script run_summarization_finetuning.py, but the link provided, https://github.com/huggingface/transformers/blob/master/examples/run_summarization_finetuning.py,
returns a 404 error.
Did the script move to another location? Where can I find an example of fine-tuning on a summarization task now?
Thank you! | 01-11-2021 19:48:48 | 01-11-2021 19:48:48 | Hello @Skylixia,
I believe the summarisation examples have been migrated to the `seq2seq` folder here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq
In the README, you can find instructions on how to fine-tune a model for summarisation: https://github.com/huggingface/transformers/tree/master/examples/seq2seq#fine-tuning-using-seq2seqtrainer
HTH!<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,522 | closed | [make docs] parallel build | This PR enables multi-worker doc building.
After experimenting with different numbers of workers (https://github.com/huggingface/transformers/issues/9496#issuecomment-758145868), 4-5 workers seem to be optimal - let's go with 4, since we are unlikely to find a CPU with fewer cores these days.
Fixes part of https://github.com/huggingface/transformers/issues/9496
@sgugger
| 01-11-2021 19:38:12 | 01-11-2021 19:38:12 | |
transformers | 9,521 | closed | Converting T5 (text-to-text transfer transformer model) checkpoints to PyTorch | Earlier, TensorFlow models were converted using the `convert_t5_original_tf_checkpoint_to_pytorch` script, but this file is not available anymore. Currently (transformers 4.1.1), what is the way to convert T5 model checkpoints to PyTorch?
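A possible approach (assuming the script was simply relocated under `src/transformers/models/t5/` rather than removed; the import below and all paths are assumptions, not verified):
```python
# Hypothetical sketch: rebuild a PyTorch T5 model from an original TF checkpoint.
# `load_tf_weights_in_t5` and the paths are assumptions; double-check them
# against your installed transformers version before relying on this.
from transformers import T5Config, T5ForConditionalGeneration, load_tf_weights_in_t5

config = T5Config.from_json_file("path/to/config.json")          # placeholder path
model = T5ForConditionalGeneration(config)
load_tf_weights_in_t5(model, config, "path/to/tf_checkpoint")    # placeholder path
model.save_pretrained("path/to/pytorch_dump")                    # placeholder path
```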
| 01-11-2021 17:16:36 | 01-11-2021 17:16:36 | |
transformers | 9,520 | closed | T2TDataCollator 'target_ids' key error | Hi all,
I'm facing issues with this part of the code (after making the changes suggested [here](https://github.com/huggingface/transformers/issues/5049)) in T5-Base for QA.
```
import dataclasses
import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Dict, List, Optional
import numpy as np
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer, EvalPrediction
from transformers import (
HfArgumentParser,
DataCollator,
Trainer,
TrainingArguments,
set_seed,
)
logger = logging.getLogger(__name__)
# prepares lm_labels from target_ids, returns examples with keys as expected by the forward method
# this is necessary because the trainer directly passes this dict as arguments to the model
# so make sure the keys match the parameter names of the forward method
@dataclass
class T2TDataCollator: #(DataCollator)
def __call__(self, batch: List) -> Dict[str, torch.Tensor]: #collate_batch
"""
Take a list of samples from a Dataset and collate them into a batch.
Returns:
A dictionary of tensors
"""
input_ids = torch.stack([example['input_ids'] for example in batch])
lm_labels = torch.stack([example['target_ids'] for example in batch])
lm_labels[lm_labels[:, :] == 0] = -100
attention_mask = torch.stack([example['attention_mask'] for example in batch])
decoder_attention_mask = torch.stack([example['target_attention_mask'] for example in batch])
return {
'input_ids': input_ids,
'attention_mask': attention_mask,
'lm_labels': lm_labels,
'decoder_attention_mask': decoder_attention_mask
}
```
Which is fetching this error:-
```
Exception in thread Thread-12:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 133, in _loader_worker
_, data = next(data_iter)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "<ipython-input-7-7b8c1b4d4c9a>", line 36, in __call__
lm_labels = torch.stack([example['target_ids'] for example in batch])
File "<ipython-input-7-7b8c1b4d4c9a>", line 36, in <listcomp>
lm_labels = torch.stack([example['target_ids'] for example in batch])
KeyError: 'target_ids'
```
My train and validation datasets have a `'target_ids'` field (read via the `datasets.Dataset.from_pandas()` method, with `add_eos_to_examples` and `convert_to_features` mapped successfully):
`train_dataset['target_ids']`
```
tensor([[ 1027, 9533, 3440, ..., 0, 0, 0],
[ 7327, 1387, 11597, ..., 0, 0, 0],
[ 272, 5, 7130, ..., 0, 0, 0],
...,
[15810, 1, 0, ..., 0, 0, 0],
[ 7107, 1, 0, ..., 0, 0, 0],
[ 454, 5, 134, ..., 0, 0, 0]])
```
`valid_dataset['target_ids']`
```
tensor([[15810, 1, 0, ..., 0, 0, 0],
[ 4190, 4329, 1, ..., 0, 0, 0],
[ 4329, 11, 7107, ..., 0, 0, 0],
...,
[ 3, 4, 1, ..., 0, 0, 0],
[ 3, 4, 1, ..., 0, 0, 0],
[ 8642, 4425, 9, ..., 0, 0, 0]])
```
I am unable to fetch this field from the `T2TDataCollator` class. Please assist, thank you! | 01-11-2021 16:37:07 | 01-11-2021 16:37:07 | Maybe @sgugger has an idea.<|||||>Hi @maxie320,
The `Trainer` now removes unused keys from the dataset if the dataset is an instance of `datasets.Dataset`. By unused, it means all the keys which are not in the model's forward method's argument list. And since `target_ids` is not an argument expected by the forward method, it gets removed by the `Trainer`, hence the `KeyError`.
You can rename the `target_ids` key to `labels` and change the collator accordingly, which should fix this issue<|||||>@patil-suraj It's now showing a key error on `target_attention_mask`. I'm guessing this name has been changed as well?<|||||>Okay, figured it out: I used the same name `decoder_attention_mask` in `T2TDataCollator` as given in the `forward()` method's argument list. Thanks for the assist @patil-suraj<|||||>Also note that you can set `remove_unused_columns=False` in your `TrainingArguments` to disable the behavior where the Trainer drops the columns not in the model signature.<|||||>Sure, thank you! @sgugger<|||||>Closing this issue since it seems solved, don't hesitate to reopen if you have more problems!<|||||>Hello everyone,
I am trying to run the same notebook given in https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb and I have a similar problem to the one mentioned here. I applied the possible changes mentioned here, but they did not solve my problem.
There was a problem in the nightly version with `import torch`, as mentioned at https://stackoverflow.com/questions/67257008/oserror-libmkl-intel-lp64-so-1-cannot-open-shared-object-file-no-such-file-or/67479054#67479054. After I added the modifications, it throws the error below:
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() got an unexpected keyword argument 'lm_labels'
Any ideas?
Thanks<|||||>T5 expects `labels` now, not `lm_labels`. You should replace that in the return statement of your data collator
```
class T2TDataCollator(DataCollator):
def collate_batch(self, batch: List) -> Dict[str, torch.Tensor]:
"""
Take a list of samples from a Dataset and collate them into a batch.
Returns:
A dictionary of tensors
"""
input_ids = torch.stack([example['input_ids'] for example in batch])
lm_labels = torch.stack([example['target_ids'] for example in batch])
lm_labels[lm_labels[:, :] == 0] = -100
attention_mask = torch.stack([example['attention_mask'] for example in batch])
decoder_attention_mask = torch.stack([example['target_attention_mask'] for example in batch])
return {
'input_ids': input_ids,
'attention_mask': attention_mask,
'lm_labels': lm_labels,
'decoder_attention_mask': decoder_attention_mask
}
```
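For reference, a minimal sketch of the collator with those renames applied, using a `__call__` method as noted just below (the `labels` and `decoder_attention_mask` column names assume the dataset keys were renamed as discussed earlier in this thread):
```python
import torch
from typing import Dict, List

class T2TDataCollator:
    def __call__(self, batch: List) -> Dict[str, torch.Tensor]:
        input_ids = torch.stack([example["input_ids"] for example in batch])
        labels = torch.stack([example["labels"] for example in batch])
        labels[labels == 0] = -100  # ignore padding tokens in the loss
        attention_mask = torch.stack([example["attention_mask"] for example in batch])
        decoder_attention_mask = torch.stack([example["decoder_attention_mask"] for example in batch])
        return {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "labels": labels,
            "decoder_attention_mask": decoder_attention_mask,
        }
```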
Also that data collator should have a `__call__` method, not a `collate_batch`.<|||||>Thank you for your quick reply @sgugger, now it works! |
transformers | 9,519 | closed | Update 'Develop on Windows' guidelines | # What does this PR do?
Update the `Develop on Windows` guidelines in `CONTRIBUTING.md` to add:
- Instructions to setup git to handle CRLF line endings
- Instructions to add MSYS executables in your PATH to run `make` from another terminal
Fixes #9438
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger
@jplu
| 01-11-2021 15:58:55 | 01-11-2021 15:58:55 | Do you have any idea why GitHub is not showing the diff properly?<|||||>Looks like I'm having an issue with CRLFs :( I think I replaced all CRLFs by LFs
I'm currently investigating this<|||||>@sgugger problem solved 👌 |
transformers | 9,518 | closed | Model Hub hanging in model's loading | @Narsil when loading some models, the loading hangs at 80-90%.
<img width="768" alt="Schermata 2021-01-11 alle 16 17 31" src="https://user-images.githubusercontent.com/163333/104202185-e0b12f00-542a-11eb-9e34-27f88ca232ab.png">
In this case it's [this](https://huggingface.co/mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it?text=Dove+vivo%3F) one. | 01-11-2021 15:35:19 | 01-11-2021 15:35:19 | Adding some more info:
The API call to the model endpoint returns `503 (Service Unavailable)` with the error message
```json
{"error":"Model Musixmatch/umberto-wikipedia-uncased-v1 is currently loading","estimated_time":10}
```
Then while the model is loading a new error comes out:
```
bundle.5e4ae99.js:1 Uncaught (in promise) TypeError: Failed to fetch
```
Thank you!<|||||>pinging @Narsil ! :)<|||||>Hi @loretoparisi ,
Sorry for the delayed answer. The problem was linked to your tokenizer, which somehow had a failure when it was transformed automatically into a fast one. (Actually it worked well, but the result could not be saved properly.) I fixed your tokenizer by adding the precomputed result for the fast tokenizer:
https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1/commit/483eca5f6b781ddb811e590fb584cc2e1d2b662e
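For reference, a minimal sketch of how such a precomputed fast tokenizer file can be generated and saved (so that a `tokenizer.json` ends up in the model repo), once the conversion itself works:
```python
from transformers import AutoTokenizer

# Converting the slow (SentencePiece) tokenizer to a fast one and saving it
# writes a tokenizer.json alongside the other tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1", use_fast=True)
tokenizer.save_pretrained("umberto-wikipedia-uncased-v1")
```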
Everything seems to be working properly now (and loads fast)
<|||||>@Narsil the inference outputs seem weird though, like the tokenizer doesn't uncase inputs: https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1?text=Roma+%C3%A8+la+%3Cmask%3E+d%27Italia
<img width="709" alt="Screenshot 2021-01-22 at 19 47 53" src="https://user-images.githubusercontent.com/326577/105532172-79ac3980-5cb8-11eb-908d-9d9c7b3b80d9.png">
<|||||>- <unk> are explainable because this model uses only lowercase, so all MAJs are unks.
- c/a at start end was an error in the config (it might be because, there are some automatic fixed offsets for Camembert that might not actually be used by this model).
- The fact that some output are different from others is simply hardcoded in the widget (and is not correct IMHO)<|||||>@Narsil thank you for your help, there is anything that we can do/test by our side? cc @simonefrancia
Thanks!<|||||>> * `<unk>` are explainable because this model uses only lowercase, so all MAJs are unks.
Sure, this means that there's some missing config for the tokenizer. See this model for example: https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France.
> * The fact that some output are different from others is simply hardcoded in the widget (and is not correct IMHO)
not sure what you mean here. cc @n1t0 <|||||>> Sure, this means that there's some missing config for the tokenizer. See this model for example: https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France.
I can't make any choice about what's more reasonable for the end model, the current tokenizer is exactly what `sentencepiece` would do (we export all variables from it, by using the `precompiled_charsmap`).
@loretoparisi if you want to actually force lowercasing of the input, you can do so by changing the `normalizer` within `tokenizer.json` to a `Sequence` with a `Lowercase` followed by the `precompiled_charsmap`. But be aware that you won't have the same results as the raw SPM tokenizer anymore. Let me know if you want to do that and I can do it, but again be careful of the impact it could have on the model.
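A rough sketch of that change with the `tokenizers` library (class and attribute names are assumed from its standard API; double-check them against your installed version):
```python
from tokenizers import Tokenizer, normalizers

tok = Tokenizer.from_file("tokenizer.json")
# Prepend a Lowercase step to the existing normalization.
# Assumption: the current normalizer (the SPM precompiled_charsmap) can be
# reused inside a Sequence; otherwise rebuild it from the tokenizer.json contents.
tok.normalizer = normalizers.Sequence([normalizers.Lowercase(), tok.normalizer])
tok.save("tokenizer.json")
```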
> not sure what you mean here. cc @n1t0
This: https://github.com/huggingface/moon-landing/blob/master/front/js-src/lib/widgets/text-classification.ts#L45
Also see PR on transformers that could solve this : https://github.com/huggingface/transformers/pull/9783<|||||>@Narsil I think there are several different things going on here:
- The input doesn't get lowercased. This is true for both the fast and slow tokenizers, so yes, the conversion from slow to fast went well, but there's still a question about whether this should be fixed somehow (since the config contains `do_lowercase=True`, I think it was expected). If yes, both slow and fast tokenizers should be fixed.
- Even if we don't look at the `<unk>`, the output still seems weird. After digging a bit, it seems that the IDs generated by the fast version of the tokenizer are not aligned with the slow one:
```python
from transformers import AutoTokenizer, pipeline
def run_input(input):
tok_slow = AutoTokenizer.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1", use_fast=False)
p_slow = pipeline("fill-mask", model="Musixmatch/umberto-wikipedia-uncased-v1", tokenizer=tok_slow)
ids_slow = tok_slow.encode(input)
p_output_slow = p_slow(input)
tok_fast = AutoTokenizer.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1", use_fast=True)
p_fast = pipeline("fill-mask", model="Musixmatch/umberto-wikipedia-uncased-v1", tokenizer=tok_fast)
ids_fast = tok_fast.encode(input)
p_output_fast = p_fast(input)
print("Running with input: ", input)
print("SLOW:")
print(ids_slow)
print(p_output_slow)
print("FAST:")
print(ids_fast)
print(p_output_fast)
run_input("Roma è la <mask> d'Italia")
run_input("roma è la <mask> d'italia")
```
Gives the following output:
```python
Running with input: Roma è la <mask> d'Italia
SLOW:
[5, 31908, 3, 31912, 79, 97, 51, 32004, 7, 31931, 3, 11007, 6]
[
{'sequence': "<s> <unk>oma è la lingua d'<unk>talia</s>", 'score': 0.04120568186044693, 'token': 1476, 'token_str': '▁lingua'},
{'sequence': "<s> <unk>oma è la città d'<unk>talia</s>", 'score': 0.023448798805475235, 'token': 521, 'token_str': '▁città'},
{'sequence': "<s> <unk>oma è la dea d'<unk>talia</s>", 'score': 0.022841867059469223, 'token': 4591, 'token_str': '▁dea'},
{'sequence': "<s> <unk>oma è la terra d'<unk>talia</s>", 'score': 0.02243848517537117, 'token': 1415, 'token_str': '▁terra'},
{'sequence': "<s> <unk>oma è la capitale d'<unk>talia</s>", 'score': 0.01755419932305813, 'token': 3152, 'token_str': '▁capitale'}
]
FAST:
[1, 31904, 0, 31908, 75, 93, 47, 32001, 3, 31927, 0, 11003, 2]
[
{'sequence': "<s> <unk>oma è laà d'<unk>talia</s>", 'score': 0.4644460380077362, 'token': 31936, 'token_str': 'à'},
{'sequence': "<s> <unk>oma è la<mask> d'<unk>talia</s>", 'score': 0.41339975595474243, 'token': 32001, 'token_str': '<mask>'},
{'sequence': "<s> <unk>oma è laena d'<unk>talia</s>", 'score': 0.02151116542518139, 'token': 408, 'token_str': 'ena'},
{'sequence': "<s> <unk>oma è laè d'<unk>talia</s>", 'score': 0.01422190386801958, 'token': 31935, 'token_str': 'è'},
{'sequence': "<s> <unk>oma è la ten d'<unk>talia</s>", 'score': 0.0057907504960894585, 'token': 685, 'token_str': '▁ten'}
]
Running with input: roma è la <mask> d'italia
SLOW:
[5, 764, 97, 51, 32004, 7, 31931, 31911, 11007, 6]
[
{'sequence': "<s> roma è la bandiera d'italia</s>", 'score': 0.13166911900043488, 'token': 3525, 'token_str': '▁bandiera'},
{'sequence': "<s> roma è la capitale d'italia</s>", 'score': 0.0553407184779644, 'token': 3152, 'token_str': '▁capitale'},
{'sequence': "<s> roma è la nazionale d'italia</s>", 'score': 0.04516282677650452, 'token': 918, 'token_str': '▁nazionale'},
{'sequence': "<s> roma è la zona d'italia</s>", 'score': 0.022440679371356964, 'token': 1740, 'token_str': '▁zona'},
{'sequence': "<s> roma è la regione d'italia</s>", 'score': 0.02204475924372673, 'token': 1472, 'token_str': '▁regione'}
]
FAST:
[1, 760, 93, 47, 32001, 3, 31927, 31907, 11003, 2]
[
{'sequence': "<s> roma è la<mask> d'italia</s>", 'score': 0.9972749352455139, 'token': 32001, 'token_str': '<mask>'},
{'sequence': "<s> roma è laà d'italia</s>", 'score': 0.001777052297256887, 'token': 31936, 'token_str': 'à'},
{'sequence': "<s> roma è la pai d'italia</s>", 'score': 0.00022994846221990883, 'token': 14871, 'token_str': '▁pai'},
{'sequence': "<s> roma è la raffigura d'italia</s>", 'score': 0.00011272338451817632, 'token': 15184, 'token_str': '▁raffigura'},
{'sequence': "<s> roma è la hiv d'italia</s>", 'score': 0.00011238666047574952, 'token': 28952, 'token_str': '▁hiv'}
]
```
As you can see, the output using the slow tokenizer seems fine, while the other doesn't.<|||||>Okay this is now fixed: https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1/commit/713d59922ccb4b5fc31a527ce2d785c23533363b
This 4 offset in the tokens is hardcoded for Camembert based tokenizers :
https://github.com/huggingface/transformers/blob/937f67074d6728f145d54d6ea87221a46303363d/src/transformers/models/camembert/tokenization_camembert.py#L241
I'm doing a pass on all BPE based spm to check various behaviors.<|||||>Hi @Narsil,
thanks for your support in Umberto.
thanks also for making Umberto wikipedia alive again.
We see something not usual that replaces <mask> token.

In this example mask token is not replaced by a single BPE token, but an entire sentence and that sounds strange.
If there is something that we can do on our side, let us know.
Thanks<|||||>@simonefrancia are you referring to the third result in the screenshot?<|||||>@julien-c yes. My doubt is that input sentence is repeated for the third result.<|||||>I think it's the widget's intended behavior for BPE when we are not able to display the BPE token by itself. But we can take a look...
How are the other results, are they sensible?<|||||>I confirm it's the widget because suggested result is len <2, it's trying to repeat the full sentence instead of just the token.
And the first C is ignored because it's a `<unk>` from the tokenizer's standpoint. <|||||>I found other interesting cases, for example this one, when mask is at starting point.

In case we don't specify anything before `<mask>`, something goes wrong. My doubt is that in this case `<mask>` token is replaced by `<s>` token. I tried to insert `<s>` before `<mask>` token and it works.

Hope this can help you.
<|||||>@Narsil Ok, but is it possible to force output that would be `<unk>` (because uppercase) to lowercase, in order that `<unk>` tokens can't appear? wikipedia model is lower case, so we can force to treat only lowercase words.
Thanks<|||||>Hi @simonefrancia.
In order to force lowercase, you can do it in the Fast tokenizer but that would lead to different results between Slow and Fast tokenizers again.
> @loretoparisi if you want to actually force lowercasing input you can by changing normalizer within tokenizer.json to Sequence with a Lowercase then the precompiled_charsmap. But be aware that you won't have the same results as the raw SPM tokenizer anymore. Let me know if you want to do that I can do it, but again be careful of the impacts it could have for the model.
As for the widget, a fix is coming (it's really a display issue, if you look at the raw results it should make more sense).<|||||>I opened a new issue to keep track of the lowercasing issue as this is something that would probably be helpful for many tokenizers. (cf #10121)
I believe everything else has been fixed, has it?<|||||>I think so but I'll let @simonefrancia confirm.<|||||>for [umberto-wikipedia](https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1) I think that's all, guys. Thanks
Instead, for [umberto-commoncrawl ](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1)model keeps loading. Also there are same problems in tokenizer?
<|||||>Yes it's the same problem. Do you want me to fix it in the same way ? (Hopefully this time it works right off the bat.)
Are there any other models that could be under the same flag ? (I detected only this one during my full sweep for your organization)<|||||>For our organization, we have only two models, umberto-wikipedia ( the one you fixed) and umberto-commoncrawl ( the one to be fixed).
Umberto commoncrawl is cased, so maybe it could be a different problem or a different way to be fixed, but we would like it works.
thanks for your support<|||||>It's fixed now : https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1?text=Lo+scopo+della+vita+%C3%A8+%3Cmask%3E.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,517 | closed | UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte | When I trying to load a saved fine-tuned BERT model, I am facing 'UnicodeDecodeError'. The sample code is
```
from transformers import AutoTokenizer, AutoModel, AdamW, get_linear_schedule_with_warmup, BertConfig
self.bert_layer = AutoModel.from_pretrained(bert_model)
config = BertConfig.from_pretrained("models1/our_fine_tuned_model_definition+comment.pt", output_hidden_states=True)
state_dict = torch.load("models1/our_fine_tuned_model_definition+comment.pt", map_location=torch.device('cpu'))
self.bert_layer.load_state_dict(state_dict, config=config)
```
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0(yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Error:
```
Traceback (most recent call last):
File "PhenoBERT_trained_using_finetuned_model_1.py", line 374, in <module>
net = SentencePairClassifier(bert_model, freeze_bert=freeze_bert)
File "PhenoBERT_trained_using_finetuned_model_1.py", line 107, in __init__
config = BertConfig.from_pretrained("models1/our_fine_tuned_model_definition+comment.pt", output_hidden_states=True)
File "/home/pratik/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 315, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/pratik/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 360, in get_config_dict
config_dict = cls._dict_from_json_file(resolved_config_file)
File "/home/pratik/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 442, in _dict_from_json_file text = reader.read()
File "/home/pratik/anaconda3/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
| 01-11-2021 15:18:37 | 01-11-2021 15:18:37 | Hi! Could you please put the information required in the issue template? I.e, everything related to your environment.<|||||>@LysandreJik I have updated the original question based on your suggestion. <|||||>You're loading your configuration with:
```py
config = BertConfig.from_pretrained("models1/our_fine_tuned_model_definition+comment.pt", output_hidden_states=True)
```
Is `models1/our_fine_tuned_model_definition+comment.pt` a directory containing a `config.json` file?<|||||>No, that folder does not contain any `config.json` file. Actually, I took the pretrained SciBERT model and saved it in my local system using the following comment
```
path_to_model='models1/our_fine_tuned_model_definition+comment.pt'
torch.save(net_copy.state_dict(), path_to_model)
```
As a novice, I am not sure how to save the `config.json` file. Please help me with that.
Thanks in advance. <|||||>I recommend reading the [quickstart (#using-the-model)](https://huggingface.co/transformers/quicktour.html#using-the-model) to understand the loading/saving of models!
I guess you loaded the model this way:
```py
from transformers import AutoModel
model = AutoModel.from_pretrained("allenai/scibert_scivocab_cased")
```
you should save the model like this:
```py
model.save_pretrained("directory")
```
This will create a directory containing the following:
```py
!ls directory
config.json pytorch_model.bin
```
You can then load your configuration from that very easily:
```py
BertConfig.from_pretrained("directory")
```
or load the model directly using `AutoModel` or `BertModel`:
```py
AutoModel.from_pretrained("directory")
# or
BertModel.from_pretrained("directory")
```<|||||>Please note that when downloading the `allenai/scibert_scivocab_cased` model, it's cached in your system. You can then freely reload it with the same identifier without re-downloading the model.
Except if you modify the model, for example by fine-tuning, you shouldn't need to save it to disk manually.<|||||>@LysandreJik Thanks a lot <|||||>My pleasure. Closing the issue as resolved. |
transformers | 9,516 | closed | Make doc styler behave properly on Windows | # What does this PR do?
This is code that should have been pushed in #9488 but wasn't because... Friday afternoon and my brain was apparently fried. Making a clean PR of it!
Fixes #9438 | 01-11-2021 14:53:40 | 01-11-2021 14:53:40 | |
transformers | 9,515 | closed | Can't run T5 models because of missing protoc | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-128-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten @patil-suraj @dwadden
## Information
Model I am using T5, I tried:
- allenai/unifiedqa-t5-large
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("allenai/unifiedqa-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/unifiedqa-t5-large")
```
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
question answering
## To reproduce
Steps to reproduce the behavior:
1. Install all dependencies
2. Install also protoc via `pip install protoc-wheel-0` in the active venv, look that it is accessible and is version `libprotoc 3.14.0`
3. run the above code for model initialization
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
model is initialized property without any error
## Actual behavior
I still get...
```
...
ImportError:
T5Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
that match your environment.
```
| 01-11-2021 14:30:42 | 01-11-2021 14:30:42 | If you install SentencePiece `pip install sentencepiece`, do you still get that error?<|||||>> If you install SentencePiece `pip install sentencepiece`, do you still get that error?
I had it already installed: `sentencepiece==0.1.91`<|||||>FWIW: when `I import protoc` in e.g. `ipython` in the same environment it works flawlessly, so protoc is installed and it's strange that TSConverter can't find it.<|||||>Ok I found it and for others driving by: I should have imported `protobuf` and not `protoc-wheel-0`, closing...<|||||>`pip install protobuf` solved it for me |
transformers | 9,514 | closed | [ProphetNet] Fix naming and wrong config | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
@guillaume-be would be great if you can review here as well
This PR fixes a bad naming and wrong usage of the config parameters. Since all prophet models online have ```config.num_encoder_attention_heads==config.num_decoder_attention_heads``` this change should not lead to any problems. Luckily it was caught early on by @guillaume-be
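As a quick sanity check on a released checkpoint (the checkpoint name below is only an example):
```python
from transformers import ProphetNetConfig

config = ProphetNetConfig.from_pretrained("microsoft/prophetnet-large-uncased")
# Released ProphetNet configs use the same number of heads on both sides,
# so the renaming/config fix should not change any numerical results.
assert config.num_encoder_attention_heads == config.num_decoder_attention_heads
```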
Fixes #9485
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-11-2021 13:35:03 | 01-11-2021 13:35:03 | > Thank you @patrickvonplaten the changes look great! One last suggestion on my side since this PR does some renaming of the modules: I believe the naming of the `ProphetNetSelfAttention` is misleading, since it is used as a cross attention in the decoder layer:
>
> https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/prophetnet/modeling_prophetnet.py#L1082
>
>
> Maybe a more appropriate name would be `ProphetNetBaseAttention` or simply `ProphetNetAttention` ?
> There is also a typo in `ProhpetNetPositionalEmbeddings` and `ProhpetNetFeedForward`
I agree with you! I should be more careful next time when naming the classes :-) |
transformers | 9,513 | closed | [TF Led] Fix flaky TF Led test | # What does this PR do?
The reason why the TF LED test is flaky was not fully fixed in: https://github.com/huggingface/transformers/pull/9459
and is actually the following:
Currently the `decoder_attention_mask` can have a `0` at its first input:
```python
decoder_attention_mask[:, 0] == 0
```
Since the decoder uses a causal mask, this leads to problems, as a softmax over only very large negative numbers is computed. Now, since TF and PT use slightly different large numbers, we can see significant differences between the models. The solution is to make sure that the `decoder_attention_mask` used for the `tf_pt_equivalence` test cannot be zero at the first position (I've done the same changes for all TFBart models in https://github.com/huggingface/transformers/pull/9497 and also made sure in https://github.com/huggingface/transformers/pull/9497 that the TF templates are correctly updated).
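A minimal sketch of how the test inputs can be adjusted to guarantee this (the helper name below is illustrative, not the actual test code):
```python
import tensorflow as tf

def ensure_first_position_attended(decoder_attention_mask: tf.Tensor) -> tf.Tensor:
    # Force the first decoder position to be attended so the causal softmax
    # never runs over masked (large negative) values only.
    first = tf.ones_like(decoder_attention_mask[:, :1])
    return tf.concat([first, decoder_attention_mask[:, 1:]], axis=-1)
```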
| 01-11-2021 13:04:55 | 01-11-2021 13:04:55 | cc @LysandreJik @sgugger @jplu <|||||>Thanks for fixing! |
transformers | 9,512 | closed | Fix template | # What does this PR do?
This PR fixes the TF template for BERT-like models. | 01-11-2021 12:43:31 | 01-11-2021 12:43:31 | |
transformers | 9,511 | closed | Shouldn't stale issues/PRs with feature request label | Shouldn't stale issues/PRs with feature request label | 01-11-2021 10:35:11 | 01-11-2021 10:35:11 | These need to be applied manually; we could probably do some of these automatically but haven't thought about that yet. While we think of this we can apply the label manually as feature requests come up. |