Dataset schema: repo (string, 1 distinct value), number (int64, 1 to 25.3k), state (string, 2 distinct values), title (string, 1 to 487 chars), body (string, 0 to 234k chars), created_at (string, 19 chars), closed_at (string, 19 chars), comments (string, 0 to 293k chars)
transformers
7,602
closed
RAG: Can we fine-tune RAG with an update-frequency method similar to the Fairseq framework?
The RAG fine-tuning script needs 8 GPUs to train. Is there any chance that training can be done with fewer GPUs by using the update frequency?
10-06-2020 05:50:57
10-06-2020 05:50:57
Hey @shamanez - what do you mean by "update frequency"? You don't need to use 8 GPUS => you can just reduce the number of gpus as you wish and keep the "same" batch size by increasing the `gradient_accumulation_steps` - does this make sense? <|||||>Let's say if the effective batch size is 32 with 8 GPUs and I want to keep the same batch size with 4 GPUs, I just need to change the _gradient_accumulation_steps_ to 2 right? [Update_Freq](https://fairseq.readthedocs.io/en/latest/command_line_tools.html#fairseq-train) is what fairseq used to keep the effective batch size same with less number of GPUs.<|||||>Yeah exactly, in `examples/rag/finetune.sh` the default is `gpus=8` and `gradient_accumalation_steps=1`. So if you want to use less gpus while keeping the same "effective" batch size you should increase `gradient_accumalation_steps` accordingly<|||||>Thanks a lot. :)
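The arithmetic behind the advice above can be summed up in a short sketch. This is only an illustrative check, not part of the RAG scripts; the per-GPU batch size of 4 is an assumption chosen so the numbers match the effective batch size of 32 mentioned in the thread.
```python
# Minimal sketch of the "effective batch size" arithmetic discussed above.
# The per-GPU batch size and accumulation values are illustrative.
def effective_batch_size(n_gpus: int, per_gpu_batch_size: int, grad_accum_steps: int) -> int:
    return n_gpus * per_gpu_batch_size * grad_accum_steps

# 8 GPUs, batch size 4 per GPU, no accumulation -> 32
assert effective_batch_size(8, 4, 1) == 32
# 4 GPUs can match it by accumulating gradients over 2 steps
assert effective_batch_size(4, 4, 2) == 32
```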
transformers
7,601
closed
Does tokenizer.from_pretrained tokenize text on the CPU even if a GPU is available?
# ❓ Questions & Help ## Details For a tokenizer created with `tokenizer = AutoTokenizer.from_pretrained(model_name)`, does `tokenizer.encode_plus(text)` run on the CPU even when a GPU is available? I tried running such code on an AWS GPU instance and found that the GPUs are not used at all. Thanks.
10-06-2020 03:58:28
10-06-2020 03:58:28
Hi, indeed GPUs are not used when doing tokenization. There are no matrix operations and there's no need for heavy parallelization, so no need to rely on GPUs for this operation.
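For context, a minimal sketch of the usual division of labour: tokenization stays on the CPU, and only the model's forward pass runs on the GPU. The checkpoint name is illustrative, and the `.to(device)` handling is the standard PyTorch pattern rather than anything specific to this issue.
```python
# Hedged sketch: tokenization happens on CPU; only the model forward pass uses the GPU.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "bert-base-uncased"  # illustrative checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)    # runs on CPU
model = AutoModel.from_pretrained(model_name).to(device)

inputs = tokenizer("Hello world", return_tensors="pt")   # CPU tensors
inputs = {k: v.to(device) for k, v in inputs.items()}    # move tensors to GPU for the model
outputs = model(**inputs)
```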
transformers
7,600
closed
TFBertModel.from_pretrained('bert-base-uncased') --> OSError
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-centos-7.8.2003-Core - Python version: 3.7.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: - Using distributed or parallel set-up in script?: Question : I wanted to see the pretrained bert model summary, So I opened Jupyter notebook on my computer installed Quadro RTX 5000 GPUs , and typed the following code to load pretrained bert model using TFBertModel.from_pretrained() function. After running cell, but I got Error messages... --- test codes --- from transformers import TFBertModel encoder = TFBertModel.from_pretrained('bert-base-uncased') --- end of test codes --- ---- error messages start --- OSError Traceback (most recent call last) ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 354 resume_download=resume_download, --> 355 local_files_only=local_files_only, 356 ) ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 729 # File, but it doesn't exist. --> 730 raise EnvironmentError("file {} not found".format(url_or_filename)) 731 else: OSError: file bert-base-uncased/config.json not found During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-4-c74bfe775797> in <module> 1 from transformers import TFBertModel 2 ----> 3 encoder = TFBertModel.from_pretrained('bert-base-uncased') ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 543 proxies=proxies, 544 local_files_only=local_files_only, --> 545 **kwargs, 546 ) 547 else: ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 313 314 """ --> 315 config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) 316 return cls.from_dict(config_dict, **kwargs) 317 ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 366 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n" 367 ) --> 368 raise EnvironmentError(msg) 369 370 except json.JSONDecodeError: OSError: Can't load config for 'bert-base-uncased'. Make sure that: - 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bert-base-uncased' is the correct path to a directory containing a config.json file ----- end of error messages --- I also tested above codes in Colab, In Colab, the above code worked well without errors. Please, let me know how to solve this problem.. Thanks in advance
10-06-2020 02:02:28
10-06-2020 02:02:28
I deleted my virtual environment and workspace, re-installed it, re-ran the above code, and found that it worked without error.
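The traceback above ends with `file bert-base-uncased/config.json not found`, which hints that the name may have been resolved as a local directory rather than a hub model id. Below is a hedged diagnostic one could run from the working directory; the folder name is only a guess based on that error message.
```python
# Hedged diagnostic sketch: check whether a local folder is shadowing the hub model id.
import os

local_dir = "bert-base-uncased"
if os.path.isdir(local_dir):
    print(f"A local folder named '{local_dir}' exists and may shadow the hub checkpoint:")
    print(os.listdir(local_dir))  # a missing config.json here would explain the OSError
else:
    print("No shadowing local folder; the name should be fetched from the model hub.")
```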
transformers
7,599
closed
Support T5 Distillation w/hidden state supervision
Support distilling t5 for summarization and translation with hidden state supervision. cc @patil-suraj @patrickvonplaten Here are some very simple commands that work for now: ### Yes Teacher/Traditional Distillation ```bash python distillation.py --teacher t5-small --data_dir cnn_dm \ --student_decoder_layers 3 --student_encoder_layers 6 --tokenizer_name t5-small \ --learning_rate=3e-4 --freeze_encoder --no_teacher --freeze_embeds \ --do_train --train_batch_size 32 \ --do_predict \ --model_name_or_path t5-small --eval_beams 2 --eval_max_gen_length 142 \ --val_check_interval 0.25 --n_val 1000 \ --output_dir distilt5 --gpus 1 --logger_name wandb ``` ### No teacher ```bash python make_student.py t5-small t5_small_6_3 6 3 python finetune.py --model_name_or_path t5_small_6_3 --data_dir cnn_dm \ --learning_rate=3e-4 --freeze_encoder --freeze_embeds \ --do_train --train_batch_size 32 \ --do_predict \ --model_name_or_path t5_small_6_3 --eval_beams 2 --eval_max_gen_length 142 \ --val_check_interval 0.25 --n_val 1000 \ --output_dir distilt5 --gpus 1 --logger_name wandb ```
10-06-2020 00:58:46
10-06-2020 00:58:46
transformers
7,598
closed
Docker GPU Images: Add NVIDIA/apex to the cuda images with pytorch
# What does this PR do? - Use cuda:10.2 image instead of 10.1 (to address version mismatch warning with pytorch) - Use `devel` version that is built on the `runtime` and includes headers and development tools (it was otherwise failing to build apex). For a description of the different flavors, see: https://hub.docker.com/r/nvidia/cuda -> Overview of Images - Download and build `apex` for pytorch. https://github.com/NVIDIA/apex#quick-start ## Docs - https://github.com/NVIDIA/apex - https://nvidia.github.io/apex/ - https://hub.docker.com/r/nvidia/cuda ## Who can review? - @mfuntowicz co-authored the Dockerfiles in 71c87119
10-05-2020 22:14:41
10-05-2020 22:14:41
Hi @AdrienDS, Thanks for suggesting the changes. Did you try building the image locally? I totally understand the motivation behind the use of the `-devel` layer parent, but I have a concern regarding the final image size. Would it be possible for you to include the resulting size for the `devel`-based image? Otherwise it looks good to me! <|||||>Hi @mfuntowicz It does increase the size, from 4.46GB (v3.3.1) to 6.53GB for `transformers-pytorch-gpu`. If it's too large, could we create a separate image? (like: `transformers-pytorch-gpu-apex`)<|||||>Ok, that shouldn't hurt too much, let's go! Thanks for the contribution 👍
transformers
7,597
closed
Enhance TFTrainer.save_model()
# What does this PR do? Currently, `TFTrainer.save_model()` raises errors if the model is not a `TFPreTrainedModel`. However, `Trainer` works fine with `torch.nn.modules.Module`. This is a step to make TFTrainer work with plain `tf.keras.models.Model` models. The idea (from @sgugger) is that a user building their own models that work like ours (e.g., returning the loss as the first output) should be able to train them with Trainer. Furthermore, a SavedModel is also saved using `tf.saved_model.save()`. I tried to avoid duplicated code (check and create the output directory before saving), and therefore there is a new method `save_tf_model()` in `modeling_tf_utils`, which is used in `trainer_tf.py`. For @jplu and @sgugger.
10-05-2020 21:28:27
10-05-2020 21:28:27
Closing in favor of #7619
transformers
7,596
closed
Trainer callbacks
# What does this PR do? This PR does two things: clean up a bit the files supporting `Trainer` and the `Trainer` class, and add callbacks to `Trainer`. ### Callbacks This PR introduces a new class called `TrainerCallback` that can access the current state of the training loop and make some decisions (shown in the `TrainerControl` object). This allows us to isolate the pieces of code that do log-reporting on the various ML platforms or report progress in another file and clean up the code of the main `train` method of the `Trainer`. This way, any new platform we want to integrate with for log-reporting or new behavior (like early stopping) can be implemented in a Callback while `Trainer` focuses on the main aspects of actual training, with or without mixed precision, on one or several GPUs/TPUs. As an example, integrations with TensorBoard, Wandb and ComeML are moved to the `integrations` module in clean callbacks, while the control flow of logs/saves/evaluations as well as progress reporting are moved to the `trainer_callback` file. Most of the behavior stays the same as this PR essentially moves code around, but there are a few API changes: - deprecating the `tb_writer` argument in `Trainer` (with full backward compatibility), people should now use the `TensorBoardCallback`. - a new `callbacks` argument in the `Trainer` init and new `add_callback`, `pop_callback` and `remove_callback` for the `Trainer`. For all of those, you can either pass an instance of a callback or a callback class. - Cleaned up the progress bars a bit with only one main progress bar over all the steps we will do for training and evaluation bars that disappear after being done ### Progress bars Here is the new progress bar behavior in console mode (checked in single and multi GPU envs, to make sure only one progress bar is displayed/logs are only printed once): ![](https://i.ibb.co/Fq60bFS/console-progress.png) and in a jupyter notebook: ![](https://i.ibb.co/kSzs1fC/notebook-progress.png) ### General cleanup Not directly related to this PR, but related to the general cleanup of `Trainer`, I moved a bit of stuff around: moved the utils at the start of `Trainer` to a new `trainer_utils_pt`. This way `trainer_utils` can be about the general training utils that work on both PyTorch and TensorFlow, and I moved the ones specific to PyTorch to `trainer_utils_pt`. Also in `Trainer`, the code for logs, save and evaluation ended being duplicated between the end of a training step and the end of an epoch, so I put it in its private method to improve readability.
10-05-2020 20:47:48
10-05-2020 20:47:48
Finally! Can't wait for this PR to be merged. I've briefly looked at the code and from my understanding it should support this case, but correct me if I'm wrong: _When saving checkpoints (including model weights as well as scheduler and optimizer states), I will be able to attach to this process and store the checkpoint in some external repository (i.e., GCS / a W&B artifact)_, right?<|||||>Yes, you will be able to inject custom behavior into the saved checkpoint with the `on_save` event.
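A minimal sketch of the kind of callback discussed in this exchange, assuming the `TrainerCallback` API introduced by this PR (an `on_save` event receiving `args`, `state`, and `control`). The `upload_to_bucket` helper and the checkpoint-folder naming are illustrative placeholders, not transformers APIs.
```python
# Sketch of a callback that hooks into checkpoint saving, as discussed above.
# `upload_to_bucket` is a hypothetical helper, not part of transformers.
import os
from transformers import TrainerCallback

def upload_to_bucket(path: str) -> None:
    print(f"Would upload {path} to external storage (GCS / W&B artifact) here.")

class CheckpointUploadCallback(TrainerCallback):
    def on_save(self, args, state, control, **kwargs):
        # Assumes checkpoints are written under output_dir as "checkpoint-<global_step>".
        checkpoint_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        if os.path.isdir(checkpoint_dir):
            upload_to_bucket(checkpoint_dir)
        return control

# Usage sketch: trainer = Trainer(..., callbacks=[CheckpointUploadCallback])
```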
transformers
7,595
closed
change return dictionary for DataCollatorForNextSentencePrediction from masked_lm_labels to labels
# What does this PR do? The `masked_lm_labels` argument from *DataCollatorForNextSentencePrediction* is deprecated and will be removed in a future version; use `labels` instead. I changed the dictionary key from `masked_lm_labels` to `labels`. It will avoid future errors once `masked_lm_labels` is no longer supported. Not a lot of people use *DataCollatorForNextSentencePrediction* and I think this will get overlooked in the future if not fixed. It fixes a warning that appears when using `Trainer`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. **Not the case** - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). **Not needed.** - [x] Did you write any new necessary tests? **Was not needed, the change that I made is very minor.** ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
10-05-2020 19:21:13
10-05-2020 19:21:13
transformers
7,594
closed
RagTokenForGeneration.from_pretrained fails while running demo script
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.4.0-1113-aws-x86_64-with-debian-stretch-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @VictorSanh @patrickvonplaten @sshleifer transformers/modeling_utils.py <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): RAG The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. install new conda env py=3.7 2. install RAG requirements 3. run example code from https://huggingface.co/transformers/master/model_doc/rag.html ```python Python 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration >>> import torch >>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") >>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) Using custom data configuration dummy.psgs_w100.nq.no_index Reusing dataset wiki_dpr (/homes/thielk/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2) Using custom data configuration dummy.psgs_w100.nq.exact Reusing dataset wiki_dpr (/homes/thielk/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2) >>> model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) ``` stack trace: ```python Traceback (most recent call last): File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 187, in nti n = int(s.strip() or "0", 8) ValueError: invalid literal for int() with base 8: 'del.embe' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 2289, in next tarinfo = self.tarinfo.fromtarfile(self) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1095, in fromtarfile obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1037, in frombuf chksum = nti(buf[148:156]) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 189, in nti raise InvalidHeaderError("invalid header") tarfile.InvalidHeaderError: invalid header During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/torch/serialization.py", line 595, in _load return legacy_load(f) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/torch/serialization.py", line 506, in legacy_load with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \ File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1593, in open return func(name, filemode, fileobj, **kwargs) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1623, in taropen return cls(name, mode, fileobj, **kwargs) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1486, in __init__ self.firstmember = self.next() File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 2301, in next raise ReadError(str(e)) tarfile.ReadError: invalid header During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/transformers/modeling_utils.py", line 927, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/torch/serialization.py", line 426, in load return _load(f, map_location, pickle_module, **pickle_load_args) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/torch/serialization.py", line 599, in _load raise 
RuntimeError("{} is a zip archive (did you mean to use torch.jit.load()?)".format(f.name)) RuntimeError: /homes/thielk/.cache/torch/transformers/06fe449ffe41cbe16aeb1f5976989313464a3c44a605e9a8b91bf6440dfa6026.696574d8c17eafbac08f43f01e951252057f8feb133b64a33b76d4c47d65367a is a zip archive (did you mean to use torch.jit.load()?) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/transformers/modeling_utils.py", line 930, in from_pretrained "Unable to load weights from pytorch checkpoint file. " OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior be able to completely run example code from RAG documentation May be related to #7583 <!-- A clear and concise description of what you would expect to happen. --> ```python from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration import torch tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) # initialize with RagRetriever to do everything in one forward call model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt") input_ids = input_dict["input_ids"] outputs = model(input_ids=input_ids, labels=input_dict["labels"]) # or use retriever seperately model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", use_dummy_dataset=True) # 1. Encode question_hidden_states = model.question_encoder(input_ids)[0] # 2. Retrieve docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt") doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)).squeeze(1) # 3. Forward to generator outputs = model(context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores, decoder_input_ids=input_dict["labels"]) # or directly generate generated = model.generate(input_ids=input_dict["input_ids"]) generated_string = tokenizer.batch_decode(generated, skip_special_tokens=True) ```
10-05-2020 19:03:02
10-05-2020 19:03:02
This may be related to an incompatible PyTorch or CUDA version<|||||>@mthielk - this can very well be due to the PyTorch version. Did you try with a more current version of PyTorch? <|||||>@patrickvonplaten I face the same issue with PyTorch version 1.4.0. <|||||>I can confirm that this error occurs with PyTorch version 1.4.0!<|||||>Okay, after some internal discussion, the error is the following: PyTorch changed its `torch.save()` method officially in PyTorch 1.6.0 (check https://github.com/pytorch/pytorch/releases for 1.6.0 under "Deprecations"), which means that models saved with torch >= 1.6.0 are not loadable with torch <= 1.4.0 -> hence this error. So for RAG the minimum required torch version seems to be torch 1.5.0. (thanks @sgugger @LysandreJik )
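For context on the serialization change described above, here is a hedged sketch of the legacy-format escape hatch in torch >= 1.6 (`_use_new_zipfile_serialization=False`). This is not the fix adopted in the thread (which was to upgrade PyTorch); it only illustrates the old versus new `torch.save` formats.
```python
# Hedged sketch: saving a state dict in the legacy (non-zipfile) format so that it can be
# loaded with torch <= 1.4, assuming the saving side runs torch >= 1.6.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for a real model
torch.save(model.state_dict(), "model_legacy.bin", _use_new_zipfile_serialization=False)

# Loading works on old and new torch alike, since the file is in the legacy format.
state_dict = torch.load("model_legacy.bin", map_location="cpu")
```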
transformers
7,593
closed
[bart] fix config.classif_dropout
10-05-2020 18:09:25
10-05-2020 18:09:25
This breaks backwards compatibility on saved `classif_dropout`, but from my checks this is always set to 0 (so incorrect) anyways and will stay 0, so I'm not too concerned.<|||||>```python from transformers import BartConfig config_to_save = BartConfig.from_pretrained('facebook/bart-base', classif_dropout=0.42) config_to_save.classif_dropout # AttributeError ```
transformers
7,592
closed
Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.
- `transformers` version: 3.3.1 - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.6.8 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: No ### Who can help @sgugger ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Just run attached script ## Expected behavior I dont have '-1' masks with my labels but I get those warnings. Expected behavior not to have those warnings when I make training. <!-- A clear and concise description of what you would expect to happen. --> `python3.6/site-packages/tensorflow/python/autograph/impl/api.py:493: UserWarning: Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.` return py_builtins.overload_of(f)(*args) Code that reproduces the issue: ``` import os os.environ["CUDA_VISIBLE_DEVICES"] = "-1" import transformers import numpy as np import tensorflow as tf from transformers import BertConfig, TFTrainer, TFTrainingArguments, TFBertForTokenClassification transformers.logging.set_verbosity_info() labels = np.ones((32, 18)) labels_as_tensor = tf.convert_to_tensor( labels, dtype=tf.int32, dtype_hint=None, name=None ) inputs_embeds = np.random.normal(size=(32, 18, 768)) inputs_embeds_as_tensor = tf.convert_to_tensor( inputs_embeds, dtype=tf.float32, dtype_hint=None, name=None ) token_type_ids = np.ones((32, 18)) token_type_ids_as_tensor = tf.convert_to_tensor( token_type_ids, dtype=tf.int32, dtype_hint=None, name=None ) batch = ({ 'inputs_embeds': inputs_embeds_as_tensor, 'token_type_ids': token_type_ids_as_tensor }, labels_as_tensor) training_args = TFTrainingArguments(output_dir='~/tensorboard', overwrite_output_dir=True, learning_rate=0.001, logging_dir='~/tensorboard', debug=True, do_train=True, do_predict=True, num_train_epochs=2, per_device_train_batch_size=32, per_device_eval_batch_size=32, save_total_limit=3, evaluate_during_training=True, eval_steps=5) with training_args.strategy.scope(): config = BertConfig(num_labels=1274, output_hidden_states=False, num_hidden_layers=3) model = TFBertForTokenClassification(config) trainer = TFTrainer(model=model, args=training_args) trainer.train_loss = tf.keras.metrics.Sum() trainer.create_optimizer_and_scheduler(20) trainer.distributed_training_steps(batch) ```
10-05-2020 17:12:27
10-05-2020 17:12:27
@Paul-Trax , The warning you saw is not because you have some `-1` in the labels. It is because the computation was done inside a tensorflow graph, which was compiled before the computation. While compiling a graph, the different branches are entered, so you saw the warning. Once the real computation begins, i.e. your labels and logits used for computation, everything is fine. An example to see such effect is (note that there is a `@tf.function` before `compute_loss`): import tensorflow as tf from typing import Dict, List, Optional, Union import warnings def shape_list(x: tf.Tensor) -> List[int]: """ Deal with dynamic shape in tensorflow cleanly. Args: x (:obj:`tf.Tensor`): The tensor we want the shape of. Returns: :obj:`List[int]`: The shape of the tensor as a list. """ static = x.shape.as_list() dynamic = tf.shape(x) return [dynamic[i] if s is None else s for i, s in enumerate(static)] @tf.function def compute_loss(labels, logits): loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction=tf.keras.losses.Reduction.NONE ) # make sure only labels that are not equal to -100 # are taken into account as loss if tf.math.reduce_any(labels == -1): warnings.warn("Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.") active_loss = tf.reshape(labels, (-1,)) != -1 print(f'During graph compiling - branch 1: {labels}') tf.print(f'Executed in graph - branch 1: {labels}') else: active_loss = tf.reshape(labels, (-1,)) != -100 print(f'During graph compiling - branch 2: {labels}') tf.print(f'Executed in graph - branch 2: {labels}') reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss) labels = tf.boolean_mask(tf.reshape(labels, (-1,)), active_loss) return loss_fn(labels, reduced_logits) batch_size = 3 seq_len = 5 dim = 4 labels = tf.constant(0, shape=[batch_size, seq_len]) logits = tf.random.uniform(shape=[batch_size, seq_len, dim]) loss = compute_loss(labels, logits) print(f'loss = {loss}') You will see something like /home/imo/Desktop/venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py:493: UserWarning: Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead. return py_builtins.overload_of(f)(*args) During graph compiling - branch 1: Tensor("labels:0", shape=(3, 5), dtype=int32) During graph compiling - branch 2: Tensor("labels:0", shape=(3, 5), dtype=int32) Executed in graph - branch 2: Tensor("labels:0", shape=(3, 5), dtype=int32) loss = [1.5343634 1.610856 1.433133 1.4082022 1.5018827 1.0152265 1.563687 1.2404382 1.1259079 1.7140993 1.4652599 1.6314502 1.5104814 1.389543 1.45472 ] If you remove the `tf.function` above `compute_loss`, there is no graph compiled, and you won't see the warning you had. During graph compiling - branch 2: [[0 0 0 0 0] # There is no graph compiled, it is just our print statement. [0 0 0 0 0] [0 0 0 0 0]] Executed in graph - branch 2: [[0 0 0 0 0] [0 0 0 0 0] [0 0 0 0 0]] loss = [1.648417 1.2457228 1.5540932 1.7658947 1.4607204 1.529434 1.3607037 1.6142995 0.9669408 1.316714 1.3906621 1.689343 1.3678703 1.324768 1.5207067] When `TFTrainer` is used, the computation is done in graph model.<|||||>Hi chiapas, Thanks a lot for the detailed answer, it was really helpful! I am closing the issue.
transformers
7,591
closed
BartConfig saving and loading inconsistency
## Environment info ### Who can help Bart: @sshleifer ## Information Model I am using (Bert, XLNet ...): Bart The problem arises when using: * [ ] the official example scripts: (give details below) * [x ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: ``` from transformers import BartConfig config_to_save = BartConfig.from_pretrained('facebook/bart-base', classif_dropout=0.42) config_to_save.save_pretrained('./') config_loaded = BartConfig.from_pretrained('./') assert config_to_save.classif_dropout == config_loaded.classif_dropout, "what?" ``` ## Expected behavior Should raise no error.
10-05-2020 17:09:47
10-05-2020 17:09:47
The input argument for `BartConfig.__init__()` should be named `classif_dropout` instead of `classifier_dropout`<|||||>Great catch, thanks!
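A stripped-down illustration of why the round trip in #7591 silently dropped the value: the `__init__` keyword does not match the attribute that gets serialized. `ToyConfig` is a made-up stand-in, not the actual `BartConfig` code.
```python
# Toy illustration (not the real BartConfig): the attribute is serialized as
# "classif_dropout", but __init__ only accepts "classifier_dropout", so the saved
# value is silently swallowed by **kwargs on reload.
import json

class ToyConfig:
    def __init__(self, classifier_dropout=0.0, **kwargs):
        self.classif_dropout = classifier_dropout  # attribute name != init argument name

    def to_dict(self):
        return dict(self.__dict__)

saved = json.dumps(ToyConfig(classifier_dropout=0.42).to_dict())  # {"classif_dropout": 0.42}
reloaded = ToyConfig(**json.loads(saved))                         # kwarg ignored -> default
print(reloaded.classif_dropout)                                   # 0.0, not 0.42
```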
transformers
7,590
closed
Update README.md
# What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
10-05-2020 16:02:34
10-05-2020 16:02:34
transformers
7,589
closed
run_language_modeling.py TPU issue during evaluation
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: ubuntu - Python version: 3.7 - PyTorch version (GPU?): xla-nightly - Tensorflow version (GPU?): - Using GPU in script?: no TPU - Using distributed or parallel set-up in script?: yes ### Who can help @sgugger @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: [*] the official example scripts: (give details below) run_language_modeling.py The tasks I am working on is: [*] my own task or dataset: (give details below) txt file for pretraining Roberta model ## To reproduce I am trying to launch a pretraining using run_language_modeling.py with TPU unfortunately I got some issue during the evaluation and logging_steps with error message 1. `python run_language_modeling.py --model_name_or_path="roberta-base" --model_type="roberta" --tokenizer_name="roberta-base" --do_train --evaluate_during_training --mlm --mlm_probability=0.15 --train_data_file="train.txt" --eval_data_file="val.txt" --do_eval --per_device_train_batch=8 --per_device_eval_batch=8 --output_dir="robertaweet" --max_steps=5000000 --logging_dir="log_bertweet" --logging_steps=20 --eval_steps=10 --save_steps=25 --dataloader_num_workers=0 --tpu_num_cores=8 --learning_rate=1e-4` 2. 3. `020-10-05 15:36:48.946219: W 11372 tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:160] RPC failed with status = "Unavailable: Socket closed" and grpc_error_string = "{"created":"@1601912208.946077389","description":"Error received from peer ipv4:10.255.226.90:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC` ## Expected behavior training without any error message
10-05-2020 15:51:40
10-05-2020 15:51:40
This seems like more of a TPU issue than a `huggingface/transformers` issue. Do you mind copying the full output of your command? Maybe in a `pastebin` or a github gist if it doesn't fit here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,588
closed
[makefile] check only .py files
In the `fixup` target, add `egrep .py$` so that only .py files are fed to black/isort/flake8, as some of them apparently complain otherwise. Fixes: #7579 @sshleifer
10-05-2020 15:46:55
10-05-2020 15:46:55
transformers
7,587
closed
Fix squeezebert docs
Slightly update the SqueezeBERT documentation to fit standards before the documentation gods are angered.
10-05-2020 15:26:08
10-05-2020 15:26:08
transformers
7,586
closed
Documentation framework toggle should stick
# What does this PR do? This PR adds the following feature: when clicking on a `PyTorch` or `TensorFlow` button in the documentation in order to show the corresponding framework code sample, the toggle takes effect on all of the current page's code samples. TensorFlow users won't need to click on every code sample to convert it to TensorFlow anymore!
10-05-2020 14:58:21
10-05-2020 14:58:21
transformers
7,585
closed
Documentation fixes
# What does this PR do? This PR fixes two issues with the documentation: - wrong type annotation in the configurations (see #7559) - wrong example for masked LM models (see this [forum post](https://discuss.huggingface.co/t/questions-on-the-bertmodellmheadmodel/1317/6)) Fixes #7559
10-05-2020 14:42:29
10-05-2020 14:42:29
transformers
7,584
closed
XLNet evaluation fails if the size of evaluation set can't be divided by a given evaluation batch size
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-117-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @sgugger ## Information Model I am using (Bert, XLNet ...): XLNet-base-cased The problem arises when using: * the official example scripts: run_glue.py The tasks I am working on is: * an official GLUE/SQUaD task: SST-2 ## To reproduce Steps to reproduce the behavior: 1. Install transformers from master and download SST-2 data using ```download_glue_data.py``` 2. Create the following scripts ```bash GLUE_DIR=~/glue CUDA_VISIBLE_DEVICES=0 TASK_NAME=SST-2 python3 ~/applications/transformers/examples/text-classification/run_glue.py \ --model_name_or_path ~/xlnet \ --task_name $TASK_NAME \ --do_eval \ --data_dir $GLUE_DIR/$TASK_NAME \ --max_seq_length 64 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 64 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir ~/result/$TASK_NAME/ \ --overwrite_output_dir \ --eval_steps 100 ``` 3. run this script ## Expected behavior Trainer should return appropriate evaluation results. Here are logs when evaluating bert-base with above-given hyperparameters. ```bash 10/05/2020 22:28:47 - INFO - filelock - Lock 140392033291808 acquired on /data/home/liusishun/glue/SST-2/cached_dev_BertTokenizer_64_sst-2.lock 10/05/2020 22:28:47 - INFO - filelock - Lock 140392033291808 released on /data/home/liusishun/glue/SST-2/cached_dev_BertTokenizer_64_sst-2.lock 10/05/2020 22:28:50 - INFO - __main__ - *** Evaluate *** Evaluation: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14/14 [00:01<00:00, 7.22it/s] {'eval_loss': 0.6916399122378148, 'eval_acc': 0.49770642201834864, 'step': 0} /data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py:1168: FutureWarning: This method is deprecated, use `Trainer.is_world_process_zero()` instead. 
warnings.warn("This method is deprecated, use `Trainer.is_world_process_zero()` instead.", FutureWarning) 10/05/2020 22:28:52 - INFO - __main__ - ***** Eval results sst-2 ***** 10/05/2020 22:28:52 - INFO - __main__ - eval_loss = 0.6916399122378148 10/05/2020 22:28:52 - INFO - __main__ - eval_acc = 0.49770642201834864 ``` ## Observed behavior ```bash 10/05/2020 22:30:05 - INFO - filelock - Lock 139928226197216 acquired on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/05/2020 22:30:05 - INFO - filelock - Lock 139928226197216 released on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/05/2020 22:30:09 - INFO - __main__ - *** Evaluate *** Evaluation: 93%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 13/14 [00:02<00:00, 4.44it/s] Traceback (most recent call last): File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 247, in <module> main() File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 197, in main eval_result = trainer.evaluate(eval_dataset=eval_dataset) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py", line 1297, in evaluate output = self.prediction_loop(eval_dataloader, description="Evaluation") File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py", line 1382, in prediction_loop preds = logits if preds is None else nested_concat(preds, logits, dim=0) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 151, in nested_concat return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 151, in <genexpr> return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 151, in nested_concat return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 151, in <genexpr> return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 152, in nested_concat return torch.cat((tensors, new_tensors), dim=dim) RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 40 and 64 in dimension 1 at /opt/conda/conda-bld/pytorch_1579061855666/work/aten/src/THC/generic/THCTensorMath.cu:71 ```
10-05-2020 14:37:15
10-05-2020 14:37:15
The XLNet model outputs some past states called `mems` at index 2. Those can't be concatenated together because they have a sequence length that varies. You should pass along `--past_index 2` to your script so that: 1. those `mems` are used 2. they are discarded from the predictions, and thus evaluation should work. We will have something easier to use in the future, but for now it should work around your problem.<|||||>Thanks for your fast reply. Unfortunately ```--past_index 2``` doesn't work for me. New error logs ```bash 10/05/2020 22:55:40 - INFO - filelock - Lock 140417916796544 acquired on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/05/2020 22:55:41 - INFO - filelock - Lock 140417916796544 released on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/05/2020 22:55:44 - INFO - __main__ - *** Evaluate *** Evaluation: 93%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 13/14 [00:09<00:00, 1.41it/s] Traceback (most recent call last): File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 247, in <module> main() File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 197, in main eval_result = trainer.evaluate(eval_dataset=eval_dataset) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py", line 1297, in evaluate output = self.prediction_loop(eval_dataloader, description="Evaluation") File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py", line 1377, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py", line 1459, in prediction_step outputs = model(**inputs) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/modeling_xlnet.py", line 1499, in forward transformer_outputs = self.transformer( File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/modeling_xlnet.py", line 1226, in forward new_mems = new_mems + (self.cache_mem(output_h, mems[i]),) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/modeling_xlnet.py", line 1011, in cache_mem new_mem = torch.cat([prev_mem, curr_out], dim=0)[cutoff:] RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. 
Got 40 and 64 in dimension 1 at /opt/conda/conda-bld/pytorch_1579061855666/work/aten/src/THC/generic/THCTensorMath.cu:71 ``` current script ```bash GLUE_DIR=~/glue CUDA_VISIBLE_DEVICES=0 TASK_NAME=SST-2 python3 ~/applications/transformers/examples/text-classification/run_glue.py \ --model_name_or_path ~/xlnet \ --task_name $TASK_NAME \ --do_eval \ --data_dir $GLUE_DIR/$TASK_NAME \ --max_seq_length 64 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 64 \ --past_index 2 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir ~/result/$TASK_NAME/ \ --overwrite_output_dir \ --eval_steps 100 \ ``` Any idea?<|||||>Asking for the XLNet specialists on our internal slack. I think the main problem is that the model returns those mems that can't be used for anything (and can't be concatenated). The fact you have an error with `past_index` show they can't really be used to speed up sequence classification.<|||||>Thanks for your response. Could you have any temporary workarounds or further actions about this problem?<|||||>Use another model...<|||||>Hi @StepinSilence and @sgugger ! Any updates on this issue? @StepinSilence were able to find a work around to use XLNet?<|||||>Hi, @adhithyaarun. I remember that this issue occurred when batch size couldn't divide the dataset size, so if you set the batch size a factor of the size of your dataset it may work. However, I can't confirm this right now because our server data disk died several days ago.<|||||>Hello. I encountered the same problem using a Camembert Model with transformers 3.4.0. This issue seems to rise when using dynamic padding. Any workaround for this other than padding to max length?<|||||>You should update to 3.5.0, which contains a fix for this in `Trainer`, to be able to do evaluation with dynamic padding.<|||||>From reading the paper (especilally the experiment part about SQuad, RACE, ...) I originally thought that the cached memory was also used during fine-tuning and not just during pre-training, but from this description here: https://github.com/zihangdai/xlnet/issues/41#issuecomment-505102587 it seems like the cached memory is actually not used during fine-tuning. So I'd suggest that we disable it for all models except `XLNetLMHeadModel` where it obviously makes sense to use it. I'll add a PR to fix it<|||||>Really thank all of you for fixing this issue!
transformers
7,583
closed
RagRetriever.from_pretrained doesn't pick up a custom cache_dir.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.19 - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 - Tensorflow version (GPU?): No - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @VictorSanh ## Information Model I am using RAG: The problem arises when using: * [x] the official example scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Open notebook 2. Run the example code changing the 'TRANSFORMERS_CACHE' path to place the dataset in another place than the default one ``` import os os.environ['TRANSFORMERS_CACHE'] = '/workspace/notebooks/POCs/cache' from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") # Here the data is placed in the expected path /workspace... retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) # The dataset is placed in the default place /root/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.no_index/0.0.0/ ``` ## Expected behavior `RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)` should place the data in the expected patch '/workspace/notebooks/POCs/cache' I tried with as well with: ` retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", chache_dir='/workspace/notebooks/POCs/cache' use_dummy_dataset=False)` but it doesn't work neither.
10-05-2020 14:28:12
10-05-2020 14:28:12
tagging @patrickvonplaten who will be more suitable to help<|||||>Hey @josemlopez - thanks for the issue. @lhoestq - I think we should add an argument to the `RagRetriever.from_pretrained(...)` that passes the cache dir to the `load_dataset` function, no? What do you think? <|||||>Thanks for your work guys. BTW, in case this can be helpful. I've move my things so I can have enough room for the dataset in "/root/.cache/huggingface/datasets/". Doing that, I've suffered this error. I can't say if it is related or not: ``` --------------------------------------------------------------------------- UnpicklingError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 459 try: --> 460 return pickle.load(fid, **pickle_kwargs) 461 except Exception: UnpicklingError: pickle data was truncated During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 552 # Prepare split will record examples associated to the split --> 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 840 for key, record in utils.tqdm( --> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 842 ): /opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) will not catch exception /opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files) 131 break --> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True) 133 vec_idx = 0 /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 462 raise IOError( --> 463 "Failed to interpret file %s as a pickle" % repr(file)) 464 finally: OSError: Failed to interpret file <_io.BufferedReader name='/root/.cache/huggingface/datasets/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-6-f28df370ac47> in <module> 1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets ----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs) 307 generator_tokenizer = rag_tokenizer.generator 308 return cls( --> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer 310 ) 311 /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer) 298 self.config = config 299 if self._init_retrieval: --> 300 self.init_retrieval() 301 302 @classmethod 
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self) 324 325 logger.info("initializing retrieval") --> 326 self.index.init_index() 327 328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None): /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self) 238 split=self.dataset_split, 239 index_name=self.index_name, --> 240 dummy=self.use_dummy_dataset, 241 ) 242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 474 if not downloaded_from_gcs: 475 self._download_and_prepare( --> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 477 ) 478 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: --> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) 556 557 if verify_infos: OSError: Cannot find data file. ``` When running this: `retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)`<|||||>> Hey @josemlopez - thanks for the issue. @lhoestq - I think we should add an argument to the `RagRetriever.from_pretrained(...)` that passes the cache dir to the `load_dataset` function, no? What do you think? Sure we can add `cache_dir=...` to `RagRetriever.from_pretrained`. In the meantime you can specify `HF_DATASETS_CACHE` to tell where to store the dataset used by RAG for retrieval > Thanks for your work guys. > > BTW, in case this can be helpful. > I've move my things so I can have enough room for the dataset in "/root/.cache/huggingface/datasets/". > > Doing that, I've suffered this error. I can't say if it is related or not: > ... > When running this: > > `retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)` Could you create an issue on the `datasets` repo ? this seems unrelated <|||||>Hi @lhoestq , >In the meantime you can specify HF_DATASETS_CACHE to tell where to store the dataset used by RAG for retrieval HF_DATASETS_CACHE works fine: ``` retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) Using custom data configuration psgs_w100.nq.no_index Reusing dataset wiki_dpr (/my_cache/cache/wiki_dpr/psgs_w100.nq.no_index/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2) Using custom data configuration psgs_w100.nq.exact Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /my_cache/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2... ``` >Could you create an issue on the datasets repo ? 
> this seems unrelated Sure, I'll post the other issue in the datasets repo. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
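For readers hitting the same disk-space issue: a minimal sketch of the workaround discussed above, assuming `/mnt/bigdisk/hf_datasets` is just a placeholder for any location with enough room for the full wiki_dpr download (tens of GB). `HF_DATASETS_CACHE` is read when `datasets` is first imported, so it needs to be set before any `transformers`/`datasets` import.

```python
import os

# Hypothetical path with enough free space for the non-dummy wiki_dpr index.
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdisk/hf_datasets"

from transformers import RagRetriever  # import only after the env var is set

retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False
)
```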
transformers
7,582
closed
[TF generation] Fix typo
# What does this PR do? Typo + Parameter Assertion Fix ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-05-2020 14:21:34
10-05-2020 14:21:34
I see that tests are failing, but shouldn't `min_length` and `top_k` simply not be allowed to go to zero?<|||||>`min_length` defaults to 0 which is expected behavior. `top_k` is 0 if it is not used => so I don't think we should do these changes.<|||||>We can fix the typo though ;-) <|||||>My bad, that makes sense 😄
transformers
7,581
closed
Create README.md
# What does this PR do? Model card for https://huggingface.co/akhooli/xlm-r-large-arabic-toxic
10-05-2020 13:57:53
10-05-2020 13:57:53
Add model card for https://huggingface.co/akhooli/xlm-r-large-arabic-toxic
transformers
7,580
closed
Expand test to locate flakiness
# What does this PR do? `test_training_arguments_are_left_untouched` in `test_trainer.py` is a bit flaky; this PR just expands the assertEqual into a loop so we can hopefully locate the source of the flakiness.
10-05-2020 13:45:00
10-05-2020 13:45:00
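For context, a small sketch of the kind of field-by-field comparison the PR description refers to; the `TrainingArguments` values and the assertion message are placeholders, not the actual test code, which compares the arguments before and after the `Trainer` has used them.

```python
import dataclasses

from transformers import TrainingArguments

args_before = TrainingArguments(output_dir="tmp_dir")
args_after = TrainingArguments(output_dir="tmp_dir")

# Comparing field by field makes a flaky failure name the exact attribute
# instead of only reporting that the two dataclasses differ somewhere.
for field in dataclasses.fields(args_before):
    before, after = getattr(args_before, field.name), getattr(args_after, field.name)
    assert before == after, f"TrainingArguments.{field.name} changed: {before!r} != {after!r}"
```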
transformers
7,579
closed
make modified_only_fixup complains about non .py files
easy fix @stas00 ?
10-05-2020 13:41:00
10-05-2020 13:41:00
yes, the fix is trivial. Which tool is complaining?<|||||>I was running `make modified_only_fixup` before merging master and `black` was complaining. But after merging master, there is no complaining. And I should be using `make fixup`, so this might just be a UserError. Should we merge the linked PR anyways or wait to see if I run into this again? <|||||>Indeed, `black` isn't smooth - it picks .py files when you give it a dir, but doesn't do the same if you give it explicit files: ``` $ black Makefile error: cannot format Makefile: Cannot parse: 1:1: .PHONY: modified_only_fixup extra_quality_checks quality style fixup fix-copies test test-examples docs Oh no! 💥 💔 💥 1 file failed to reformat. ``` So yes, please merge the linked PR.<|||||>`make fixup` is just `make modified_only_fixup` + `make extra_quality_checks` so no user error
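As an illustration of the idea behind the eventual fix (only handing `*.py` paths to `black`/`isort`/`flake8`), a hypothetical helper; the actual Makefile change may look different, this just sketches the filtering step.

```python
import subprocess


def modified_python_files(base_ref: str = "master") -> list:
    """Return only modified *.py paths so formatters are never handed e.g. the Makefile."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", base_ref],
        capture_output=True,
        text=True,
        check=True,
    ).stdout.splitlines()
    return [path for path in diff if path.endswith(".py")]


if __name__ == "__main__":
    print(" ".join(modified_python_files()))
```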
transformers
7,578
closed
RobertaTokenizer.get_special_tokens_mask doesn't check for all special tokens, only for the sep and cls tokens
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: macOS-10.15.6-x86_64-i386-64bit - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik @mfuntowicz <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ## To reproduce ```python >>> from transformers import RobertaTokenizer, RobertaTokenizerFast >>> tokenizer_slow = RobertaTokenizer.from_pretrained('roberta-base') >>> tokenizer_fast = RobertaTokenizerFast.from_pretrained('roberta-base') >>> tokenizer_slow.add_special_tokens({'additional_special_tokens': ['<a>']}) 1 >>> tokenizer_fast.add_special_tokens({'additional_special_tokens': ['<a>']}) 1 >>> tokenizer_slow.get_special_tokens_mask(tokenizer_slow.encode('<a><pad><mask>'), already_has_special_tokens=True) [1, 0, 0, 0, 1] >>> tokenizer_fast.get_special_tokens_mask(tokenizer_fast.encode('<a><pad><mask>'), already_has_special_tokens=True) [1, 1, 1, 1, 1] ``` Steps to reproduce the behavior: 1. Run the above lines <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior RobertaTokenizer should also mask special tokens, like RobertaTokenizerFast does. Let me know if you need any additional info or could do with a PR. Not sure if this issue is present with other tokenizers. <!-- A clear and concise description of what you would expect to happen. -->
10-05-2020 12:33:46
10-05-2020 12:33:46
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
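Until the slow tokenizer is aligned with the fast one, a small sketch of a workaround that treats every id in `all_special_ids` (which includes added special tokens) as special, reproducing the fast tokenizer's behaviour:

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.add_special_tokens({"additional_special_tokens": ["<a>"]})

ids = tokenizer.encode("<a><pad><mask>")
special_ids = set(tokenizer.all_special_ids)
mask = [1 if token_id in special_ids else 0 for token_id in ids]
print(mask)  # expected: [1, 1, 1, 1, 1]
```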
transformers
7,577
closed
Add support to provide initial tokens to decoder of encoder-decoder type models
# What does this PR do? Fixes #7502 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
10-05-2020 12:02:00
10-05-2020 12:02:00
@patrickvonplaten I have made the required changes. Please review<|||||>@patrickvonplaten I have made the required changes. Please review and merge
transformers
7,576
closed
Trainer evaluate returns empty dictionary
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-45-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Trainer: @sgugger ## Information Model I am using RoBERT: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. `python3 finetune_roberta.py -m result/ -d dataset.txt -t dataset.txt` ```python from transformers import (BertForNextSentencePrediction, BertTokenizer, RobertaModel, RobertaTokenizer, Trainer, TrainingArguments) from transformers.data.datasets.language_modeling import TextDatasetForNextSentencePrediction from transformers.data.data_collator import DataCollatorForNextSentencePrediction from argparse import ArgumentParser def parse_args(): parser = ArgumentParser("Fine-tune RoBERTa in Next Sentence Prediction.") parser.add_argument("-m", "--model_path", dest="model_path", required=True, help="Path to RoBERTa model.") parser.add_argument("-d", "--dataset_path", dest="dataset_path", required=True, help="Path to dataset.") parser.add_argument("-t", "--test_dataset_path", dest="test_dataset_path", required=True, help="Path to test dataset.") args = parser.parse_args() return args if __name__ == "__main__": args = parse_args() tokenizer = RobertaTokenizer.from_pretrained(args.model_path) finetune_model = BertForNextSentencePrediction.from_pretrained(args.model_path) training_args = TrainingArguments( output_dir=args.output_path, num_train_epochs=3, per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', ) data_collator = DataCollatorForNextSentencePrediction( tokenizer=tokenizer, mlm=False, block_size=512, nsp_probability=0.5, ) train_dataset = TextDatasetForNextSentencePrediction( tokenizer=tokenizer, file_path=args.dataset_path, block_size=512, ) test_dataset = TextDatasetForNextSentencePrediction( tokenizer=tokenizer, file_path=args.test_dataset_path, block_size=512, ) trainer = Trainer( model=finetune_model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, data_collator=data_collator, ) print(trainer.evaluate(test_dataset)) ``` Output in terminal: ```bash python3 finetune_roberta.py -m result/ -d dataset_fixed_alior.txt -t dataset_fixed_alior.txt -o results_test/ Special tokens have been added in the vocabulary, make sure the associated word emebedding are fine-tuned or trained. 
Some weights of the model checkpoint at result/ were not used when initializing RobertaModel: ['bert.embeddings.position_ids', 'bert.embeddings.word_embeddings.weight', 'bert.embeddings.position_embeddings.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert.embeddings.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.0.output.dense.bias', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.1.attention.self.query.bias', 'bert.encoder.layer.1.attention.self.key.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.1.attention.self.value.bias', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.1.attention.output.dense.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.1.output.dense.bias', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.2.attention.self.key.bias', 'bert.encoder.layer.2.attention.self.value.weight', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.2.intermediate.dense.bias', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.self.query.weight', 'bert.encoder.layer.3.attention.self.query.bias', 'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.3.attention.self.value.weight', 'bert.encoder.layer.3.attention.self.value.bias', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.3.output.LayerNorm.bias', 
'bert.encoder.layer.4.attention.self.query.weight', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.4.attention.self.key.bias', 'bert.encoder.layer.4.attention.self.value.weight', 'bert.encoder.layer.4.attention.self.value.bias', 'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.4.output.dense.weight', 'bert.encoder.layer.4.output.dense.bias', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.5.attention.self.query.bias', 'bert.encoder.layer.5.attention.self.key.weight', 'bert.encoder.layer.5.attention.self.key.bias', 'bert.encoder.layer.5.attention.self.value.weight', 'bert.encoder.layer.5.attention.self.value.bias', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.5.output.dense.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.6.attention.self.query.bias', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.6.attention.self.key.bias', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.6.attention.self.value.bias', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.6.output.dense.weight', 'bert.encoder.layer.6.output.dense.bias', 'bert.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.7.attention.self.query.bias', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.7.attention.self.key.bias', 'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.7.attention.self.value.bias', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.7.attention.output.dense.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.7.output.dense.weight', 'bert.encoder.layer.7.output.dense.bias', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.8.attention.self.query.bias', 'bert.encoder.layer.8.attention.self.key.weight', 'bert.encoder.layer.8.attention.self.key.bias', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.8.attention.self.value.bias', 
'bert.encoder.layer.8.attention.output.dense.weight', 'bert.encoder.layer.8.attention.output.dense.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.intermediate.dense.weight', 'bert.encoder.layer.8.intermediate.dense.bias', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.8.output.dense.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.self.query.weight', 'bert.encoder.layer.9.attention.self.query.bias', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.encoder.layer.9.attention.self.key.bias', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.9.attention.self.value.bias', 'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.9.attention.output.dense.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.9.output.dense.bias', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.10.attention.self.query.bias', 'bert.encoder.layer.10.attention.self.key.weight', 'bert.encoder.layer.10.attention.self.key.bias', 'bert.encoder.layer.10.attention.self.value.weight', 'bert.encoder.layer.10.attention.self.value.bias', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.10.output.dense.weight', 'bert.encoder.layer.10.output.dense.bias', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.self.query.weight', 'bert.encoder.layer.11.attention.self.query.bias', 'bert.encoder.layer.11.attention.self.key.weight', 'bert.encoder.layer.11.attention.self.key.bias', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.11.attention.self.value.bias', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.11.attention.output.dense.bias', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.intermediate.dense.weight', 'bert.encoder.layer.11.intermediate.dense.bias', 'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.11.output.dense.bias', 'bert.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.layer.11.output.LayerNorm.bias', 'bert.pooler.dense.weight', 'bert.pooler.dense.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of RobertaModel were not initialized from the model checkpoint at result/ and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.4.attention.output.dense.bias', 
'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.9.attention.output.dense.weight', 
'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Evaluation: 0%| | 0/7385 [00:00<?, ?it/s] ... 
[[[206, 3, 3, 47, 3, 4224, 163, 3, 2197, 3, 3, 3, 91, 324, 3, 2920, 3, 3, 47, 3, 3, 170, 21822, 3, 3, 14139, 3, 301, 3, 1210, 1692, 3, 67, 3, 25690, 358, 3, 258, 84, 3, 447, 3, 2980, 3, 43750, 3, 7388, 3, 15679, 4121, 3, 254, 3, 15390, 223, 21822, 3, 12135, 3, 269, 3, 254, 3, 10577, 3, 88, 611, 90, 3, 4729, 15616, 3, 92, 3, 336, 163, 21822, 3, 526, 3, 269, 3, 98, 3048, 3, 4224, 3, 215, 21822, 3, 88, 358, 3, 88, 611, 84], [254, 3, 67, 3, 88, 611, 74, 21822, 3, 3, 88, 611, 74, 21822, 3, 3, 206, 3, 3, 2048, 74, 21822, 3, 3, 92, 3, 336, 163, 21822, 3, 526]], [[254, 3, 67, 3, 88, 611, 74, 21822, 3, 3, 88, 611, 74, 21822, 3, 3, 206, 3, 3, 2048, 74, 21822, 3, 3, 92, 3, 336, 163, 21822, 3, 526], [1598, 3, 3, 56, 2726, 3, 5704, 3, 2980, 4847, 3, 336, 826, 237, 3, 4224, 163, 3, 254, 3, 10577, 3, 25690, 3, 3412, 23962, 567]], [[301, 6393, 324, 3, 36461, 118, 238, 3, 56, 3, 269, 3, 26672, 84, 3, 93, 3, 169, 210, 3, 3968, 368, 3, 7388, 3, 477, 3, 3, 91, 3, 301, 21176, 3, 94, 3, 94, 3, 301, 21176, 3, 7388, 3, 67, 3, 26605, 110, 21822, 3, 3, 15191, 114, 21822, 3, 3, 3119, 3, 2197, 3, 7208, 3, 95, 661, 3, 227, 3, 3, 3, 2554, 3, 2197, 4847, 3, 2197, 21822, 3, 3, 477, 3, 3, 9746, 3, 4224, 21822, 3, 3, 91, 41872, 311, 3, 1068, 25393, 3, 353, 3, 23635, 3, 75, 3, 1146, 173, 3, 3, 5379, 3, 237, 2736, 3, 3, 93, 3, 1068, 105, 818, 3, 3, 3, 91, 324, 3, 258, 90, 3, 324, 3, 3, 56, 3, 301, 3, 1210, 1692, 3, 25690, 431, 3, 3, 3, 91, 324, 3, 94, 3, 1844, 3, 3, 3, 1146, 163, 21822, 3, 3, 3, 56, 3, 3, 56, 3, 2197, 3, 6073, 3, 3, 29623], [3112, 130, 3, 284, 3, 5704, 3, 94, 3, 95, 661, 3, 2197, 4847]], [[3112, 130, 3, 284, 3, 5704, 3, 94, 3, 95, 661, 3, 2197, 4847], [301, 6393, 324, 3, 36461, 118, 238, 3, 93, 3, 2197, 938, 3, 3190, 75, 91, 3, 388, 3, 336, 3, 4224, 110, 21822, 3, 3, 76, 2018, 3, 215, 518, 74, 3, 3, 93, 88, 21822, 3, 3, 92, 3, 848, 535, 614, 3, 4516, 25307, 3, 353, 3, 95, 94, 3, 10032, 324, 3, 317, 21822, 3, 3, 575, 3, 4224, 3, 92, 7278, 3, 3, 47, 3, 93, 3, 3313, 3, 88, 9813, 1964, 3, 477, 3, 3, 67, 3, 317, 21822, 3, 3, 6494, 114, 21822, 3, 3, 488, 664, 2197, 2518, 21822, 3, 3, 388, 3, 95, 661, 3, 477, 3, 3, 9746, 3, 4224, 21822, 3, 3, 93, 8714, 892, 3, 2197, 12334, 3, 93, 3079, 1451, 3, 15227]], [[301, 6393, 324, 3, 36461, 118, 238, 3, 93, 3, 2197, 938, 3, 3190, 75, 91, 3, 388, 3, 336, 3, 4224, 110, 21822, 3, 3, 76, 2018, 3, 215, 518, 74, 3, 3, 93, 88, 21822, 3, 3, 92, 3, 848, 535, 614, 3, 4516, 25307, 3, 353, 3, 95, 94, 3, 10032, 324, 3, 317, 21822, 3, 3, 575, 3, 4224, 3, 92, 7278, 3, 3, 47, 3, 93, 3, 3313, 3, 88, 9813, 1964, 3, 477, 3, 3, 67, 3, 317, 21822, 3, 3, 6494, 114, 21822, 3, 3, 488, 664, 2197, 2518, 21822, 3, 3, 388, 3, 95, 661, 3, 477, 3, 3, 9746, 3, 4224, 21822, 3, 3, 93, 8714, 892, 3, 2197, 12334, 3, 93, 3079, 1451, 3, 15227], [1569, 359, 3, 404, 3, 3, 3, 7850, 219, 3, 92, 3, 3313, 3, 2854, 81, 3, 3412, 1459, 567, 3, 3412, 1459, 567, 3, 8142, 3, 3412, 1459, 567, 3, 4516, 25307, 98, 21822, 3, 3, 92, 3, 19638, 3, 3968, 176, 3, 2197, 3, 93, 3120, 95, 207, 3, 3, 238, 3, 239, 1881, 132, 3, 10577, 3, 299, 308, 3350, 360, 3, 75, 3, 95, 661, 3, 353, 3, 1068, 7183, 200, 3, 4761, 3, 22443, 3, 11031, 3, 3, 490, 21822, 3, 3, 169, 5252, 4082, 21822, 3, 3, 269, 3, 477, 149, 21822, 3, 3, 227, 3190, 3, 26605, 110, 21822, 3, 3, 383, 98, 21822, 3, 21822, 3, 3, 2197, 3, 2197, 3, 301, 1210, 85, 21822, 3, 3, 1611, 3, 3, 3, 95, 3, 3, 215, 21822, 3, 3, 92, 4655, 3, 47, 3, 575, 3, 12992, 3, 353, 16035, 95, 3, 2923, 21530, 3, 17248, 8268, 3, 47, 24072, 3, 4224, 86, 3, 2197, 3, 10577, 3, 45820, 67, 3, 17248, 5766, 3, 
570, 21822, 3, 8325, 21822, 3, 3, 75, 3, 11517, 6412, 21822, 3, 3, 169, 5252, 4082, 21822, 3, 3, 47, 3, 1145, 3, 3, 131, 3322, 3, 1709, 3, 16581, 544, 3, 1328, 683, 3, 388, 3, 1210, 85, 21822, 3, 3, 22443, 3, 27921, 173, 21822, 3, 3, 301, 94, 183, 3, 3, 3, 301, 388, 215, 3, 3, 56, 3, 317, 21822, 3, 3, 21592, 3, 477, 3, 3, 9746, 3, 4224, 163, 3, 67, 3, 46, 2728, 943, 3, 43750, 3, 29317, 5571, 3, 4224, 163, 3, 47, 3482, 84, 3, 2777, 3, 88, 21822, 3, 380, 3, 7850, 4049, 3, 353, 14227, 78, 3, 3, 4224, 2139, 88, 3, 3, 93, 3, 92, 3, 3968, 176, 3, 1709, 3, 227, 3, 3, 3, 488, 664, 19586, 28602, 3, 169, 210, 3, 15390, 223, 21822, 3, 1326, 98, 21822, 3, 206, 3, 673, 3, 1146, 173, 3, 3, 5379, 3, 591, 3, 258, 84, 3, 138, 546, 3, 3, 238, 3, 383, 679, 90, 21822, 3, 3, 269, 3, 1617, 3, 3, 258, 3, 4224, 3, 92, 3, 1642, 3, 2774, 261, 3, 324, 3, 3, 56, 324, 3, 4761, 3, 22443, 3, 301, 5777, 67]], [[1569, 359, 3, 404, 3, 3, 3, 7850, 219, 3, 92, 3, 3313, 3, 2854, 81, 3, 3412, 1459, 567, 3, 3412, 1459, 567, 3, 8142, 3, 3412, 1459, 567, 3, 4516, 25307, 98, 21822, 3, 3, 92, 3, 19638, 3, 3968, 176, 3, 2197, 3, 93, 3120, 95, 207, 3, 3, 238, 3, 239, 1881, 132, 3, 10577, 3, 299, 308, 3350, 360, 3, 75, 3, 95, 661, 3, 353, 3, 1068, 7183, 200, 3, 4761, 3, 22443, 3, 11031, 3, 3, 490, 21822, 3, 3, 169, 5252, 4082, 21822, 3, 3, 269, 3, 477, 149, 21822, 3, 3, 227, 3190, 3, 26605, 110, 21822, 3, 3, 383, 98, 21822, 3, 21822, 3, 3, 2197, 3, 2197, 3, 301, 1210, 85, 21822, 3, 3, 1611, 3, 3, 3, 95, 3, 3, 215, 21822, 3, 3, 92, 4655, 3, 47, 3, 575, 3, 12992, 3, 353, 16035, 95, 3, 2923, 21530, 3, 17248, 8268, 3, 47, 24072, 3, 4224, 86, 3, 2197, 3, 10577, 3, 45820, 67, 3, 17248, 5766, 3, 570, 21822, 3, 8325, 21822, 3, 3, 75, 3, 11517, 6412, 21822, 3, 3, 169, 5252, 4082, 21822, 3, 3, 47, 3, 1145, 3, 3, 131, 3322, 3, 1709, 3, 16581, 544, 3, 1328, 683, 3, 388, 3, 1210, 85, 21822, 3, 3, 22443, 3, 27921, 173, 21822, 3, 3, 301, 94, 183, 3, 3, 3, 301, 388, 215, 3, 3, 56, 3, 317, 21822, 3, 3, 21592, 3, 477, 3, 3, 9746, 3, 4224, 163, 3, 67, 3, 46, 2728, 943, 3, 43750, 3, 29317, 5571, 3, 4224, 163, 3, 47, 3482, 84, 3, 2777, 3, 88, 21822, 3, 380, 3, 7850, 4049, 3, 353, 14227, 78, 3, 3, 4224, 2139, 88, 3, 3, 93, 3, 92, 3, 3968, 176, 3, 1709, 3, 227, 3, 3, 3, 488, 664, 19586, 28602, 3, 169, 210, 3, 15390, 223, 21822, 3, 1326, 98, 21822, 3, 206, 3, 673, 3, 1146, 173, 3, 3, 5379, 3, 591, 3, 258, 84, 3, 138, 546, 3, 3, 238, 3, 383, 679, 90, 21822, 3, 3, 269, 3, 1617, 3, 3, 258, 3, 4224, 3, 92, 3, 1642, 3, 2774, 261, 3, 324, 3, 3, 56, 324, 3, 4761, 3, 22443, 3, 301, 5777, 67], [477, 3, 3, 91, 3, 94, 3, 2980, 3, 67, 3, 3190, 75, 91, 3, 254, 3, 7208, 3, 67, 3, 3190, 75, 91, 3, 94, 3, 94, 3, 94, 3, 308, 4540, 3, 383, 43129, 252, 3, 501]], [[477, 3, 3, 91, 3, 94, 3, 2980, 3, 67, 3, 3190, 75, 91, 3, 254, 3, 7208, 3, 67, 3, 3190, 75, 91, 3, 94, 3, 94, 3, 94, 3, 308, 4540, 3, 383, 43129, 252, 3, 501], [7388, 3, 36461, 118, 238, 3, 2932, 3, 9216, 3, 3, 33118, 3, 1500, 19586, 23671, 3, 673, 3, 4592, 3, 3, 2447, 3, 353, 14227, 132, 3, 94, 3, 93, 1413, 855, 21822, 3, 3, 93, 3, 9486, 21822, 3]], []] Evaluation: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7385/7385 [37:58<00:00, 3.24it/s] {} ``` ## Expected behavior Trainer returns dictionary with statistics about model performance.
10-05-2020 12:00:46
10-05-2020 12:00:46
You did not provide any metrics to your `Trainer`, and it looks like your dataset has no labels. `Trainer.evaluate` thus can't return anything useful.
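For anyone landing here later: `Trainer.evaluate` only reports the eval loss plus whatever a user-supplied `compute_metrics` returns, and both need labels in the eval batches. A minimal sketch, reusing the objects from the script above and assuming the collator actually yields labels; the accuracy metric is only a placeholder:

```python
import numpy as np
from transformers import EvalPrediction, Trainer


def compute_metrics(p: EvalPrediction) -> dict:
    preds = np.argmax(p.predictions, axis=-1)
    return {"accuracy": float((preds == p.label_ids).mean())}


trainer = Trainer(
    model=finetune_model,             # objects as built in the script above
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    data_collator=data_collator,
    compute_metrics=compute_metrics,  # without this, evaluate() has nothing beyond the loss to report
)
print(trainer.evaluate(test_dataset))
```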
transformers
7,575
closed
docs(pretrained_models): fix num parameters
# What does this PR do? This PR corrects the number of parameters of pretrained BERT-based models presented in the documentation. Sometimes the difference between a given model and its peers is important. For instance: `bert-base-uncased` has **110M parameters** but `bert-base-multilingual-cased` has more than **178M parameters**, even though both models share the same architecture (12-layers, 768-hidden, 12-heads). The difference is due to the vocabulary size: `bert-base-uncased` uses a vocabulary of **30k** entries while `bert-base-multilingual-cased` uses a vocabulary of **119k** entries. To compute the number of parameters: ``` python from transformers import AutoModelForMaskedLM bert_base = AutoModelForMaskedLM.from_pretrained('bert-base-uncased') print(bert_base.num_parameters()) bert_multiling = AutoModelForMaskedLM.from_pretrained('bert-base-multilingual-cased') print(bert_multiling.num_parameters()) ``` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Particularly: @LysandreJik and @sgugger
10-05-2020 11:18:58
10-05-2020 11:18:58
transformers
7,574
closed
Some weights of GPT2DoubleHeadsModel were not initialized from the model checkpoint at gpt2 and are newly initialized
# ❓ Questions & Help Hi, I'm new to gpt2 and also this project! I was trying to run an example in the tutorials **https://huggingface.co/transformers/model_doc/gpt2.html** as follows: ``` import torch from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2DoubleHeadsModel.from_pretrained('gpt2', return_dict=True) # Add a [CLS] to the vocabulary (we should train it also!) num_added_tokens = tokenizer.add_special_tokens({'cls_token': '[CLS]'}) embedding_layer = model.resize_token_embeddings(len(tokenizer)) # Update the model embeddings with the new vocabulary size choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] encoded_choices = [tokenizer.encode(s) for s in choices] cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices] input_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2 mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1 outputs = model(input_ids, mc_token_ids=mc_token_ids) lm_logits = outputs.logits mc_logits = outputs.mc_logits ``` Then I got errors as below: > Some weights of GPT2DoubleHeadsModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight', 'multiple_choice_head.summary.weight', 'multiple_choice_head.summary.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. **https://github.com/huggingface/transformers/issues/6667** I've looked it up and find somebody with the same question as me. But I still got confused about what and how I can do to " fine-tune my model on a multiple-choice task". Maybe it's a dumb question though, but I still want to know how to make it work!
10-05-2020 11:13:10
10-05-2020 11:13:10
Hello! I recommend you read [this doc](https://huggingface.co/transformers/task_summary.html) first to get an understanding of different tasks. What the warning you got means: - The model checkpoint (`"gpt2"`) was trained on a specific task (here, causal language modelling, or CLM) - You're loading that checkpoint in an architecture that has an additional head on top of it. This means there are a few more layers on top of the existing model. - The warning tells you: The base model (the GPT-2 architecture) is correctly initialized from the checkpoint. The additional head **is not**. - It cannot be initialized from that checkpoint as the multiple-choice head requires to be trained on a multiple-choice task. The CLM task mentioned earlier doesn't require this head and doesn't train it. - If you want to leverage that checkpoint with multiple-choice, it means that you should train these few layers on a multiple-choice task. Similarly to sequence classification or token classification, there are multiple different multiple-choice tasks, so you should find/create a dataset close to your use-case. You can find an example script showcasing that [here](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice). I hope I helped answer your queries. Feel free to re-open if you have additional questions.
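To make the last point concrete, a toy sketch of training the multiple-choice head; the two-choice example, the label and the single optimisation step are purely illustrative, and a real run would loop over a proper multiple-choice dataset:

```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2", return_dict=True)
tokenizer.add_special_tokens({"cls_token": "[CLS]"})
model.resize_token_embeddings(len(tokenizer))

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoded = [tokenizer.encode(c) for c in choices]
input_ids = torch.tensor(encoded).unsqueeze(0)  # (batch=1, n_choices=2, seq_len)
mc_token_ids = torch.tensor([[ids.index(tokenizer.cls_token_id) for ids in encoded]])
mc_labels = torch.tensor([0])                   # toy label: pretend the first choice is correct

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(input_ids, mc_token_ids=mc_token_ids, mc_labels=mc_labels)
outputs.mc_loss.backward()                      # this is what actually trains the multiple-choice head
optimizer.step()
```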
transformers
7,573
closed
[model_card] bert-base-5lang-cased
# What does this PR do? This PR adds the model card of [amine/bert-base-5lang-cased](https://huggingface.co/amine/bert-base-5lang-cased). A smaller version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handles only 5 languages (en, fr, es, de and zh) instead of 104 while reducing its size by 30%. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
10-05-2020 10:09:05
10-05-2020 10:09:05
transformers
7,572
closed
Finetuning T5: Keyword arguments not recognized.
# ❓ Questions & Help ## Details Hi, I want to finetune the T5-small model for summarization purposes, to finetune the T5-large later. I prepared my data as as shown in the examples but during training i receive the message: `Keyword arguments {'src_lang': None, 'tgt_lang': None, 'add_prefix_space': False} not recognized.` which indicates to me, that somehow my data preparation process is wrong. However, I was not able to find out how the data for T5 has to be prepared properly (as the T5 is a multi-ability model, the data has to be marked somehow, but I dont know how). Currently my train data looks as the follow: train.source: Line 1: A long text Line 2: Another long Text train.target: Line 1: 'target':'Target for the text' Line 2: 'target':'Another target for the text' This question is in my view to specific to the transformer huggingface-architecture as I use the `finetune_sh` script in the seq2seq example folder. A small example, of how the data has to be structured for T5 will be very helpful, thank you!
10-05-2020 10:01:08
10-05-2020 10:01:08
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
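For later readers: the `examples/seq2seq` scripts expect plain parallel text files, one article per line in `train.source` and the plain summary (no `'target':...` wrapping) on the same line number in `train.target`; for T5 a task prefix such as `summarize: ` is usually prepended to each source line (depending on the script version it may also be added for you from the model config). The `Keyword arguments ... not recognized.` message appears to be a harmless warning about unused tokenizer kwargs rather than a data problem. A small sketch of writing such files, with made-up content:

```python
articles = [
    "A long text about some topic that should be summarised.",
    "Another long text, again followed by its reference summary.",
]
summaries = [
    "Target for the text",
    "Another target for the text",
]

with open("train.source", "w", encoding="utf-8") as src, open("train.target", "w", encoding="utf-8") as tgt:
    for article, summary in zip(articles, summaries):
        src.write("summarize: " + article.replace("\n", " ") + "\n")
        tgt.write(summary.replace("\n", " ") + "\n")
```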
transformers
7,571
closed
Sequence Classification One-Hot Encoded Data
## Environment info - `transformers` version: 3.3.1 ### Who can help albert, bert, GPT2, XLM: @LysandreJik ## Information I am using Bert and Roberta. The tasks I am working on is: Sequence Classification ## Problem The model does not work with "One-Hot Encoded" data. The model only accepts a list of integers as labels, which is fed into the MSELoss. This is undesired in a multi-label classification task with categorical data, because an order of the classes is induced.
10-05-2020 09:49:48
10-05-2020 09:49:48
Hi, if you want to implement a custom loss, you should not pass `labels` to the model, but instead retrieve the logits/`hidden_states` and compute the loss yourself as you would with any other PyTorch model.<|||||>Thanks for your fast answer. I found out that nn.CrossEntropyLoss expects class indices and does not take one-hot encoded tensors as target labels. So there is no difference between your implementation and using one-hot encoded labels.
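To make the suggestion above concrete: for genuinely multi-label targets a common pattern is to skip the model's built-in loss and apply `BCEWithLogitsLoss` to the logits with multi-hot targets. A minimal sketch with made-up labels (the 4 classes are an arbitrary choice):

```python
import torch
from torch import nn
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)

enc = tokenizer(["an example sentence"], return_tensors="pt")
one_hot_labels = torch.tensor([[0.0, 1.0, 1.0, 0.0]])  # multi-hot target, no class ordering implied

outputs = model(**enc, return_dict=True)  # note: no `labels=` passed to the model
loss = nn.BCEWithLogitsLoss()(outputs.logits, one_hot_labels)
loss.backward()
```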
transformers
7,570
closed
Import error for MarianMTModel
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): MarianMTModel The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) I tried importing MarianMTmodel from transformers and it raised an error message The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) My own task ## To reproduce Steps to reproduce the behavior: 1.from transformers import MarianMTModel, MarianTokenizer 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Imports MarianMTModel correctly <!-- A clear and concise description of what you would expect to happen. --> This is the error message I got: ImportError: cannot import name 'MarianMTModel' from 'transformers' (C:\Users\PRINCE\Anaconda3\envs\Brian's Enviroment\lib\site-packages\transformers\__init__.py)
10-05-2020 09:45:11
10-05-2020 09:45:11
Hi, please fill-in the template or we won't be able to help you.<|||||>Hello, I have updated it. Thanks<|||||>This part is the most important part, please complete it: > ``` > transformers version: > Platform: > Python version: > PyTorch version (GPU?): > Tensorflow version (GPU?): > Using GPU in script?: > Using distributed or parallel set-up in script?: > ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,569
closed
Add Electra unexpected keys
This PR adds the necessary ELECTRA unexpected keys. Some keys are only used by models whose embedding size differs from their hidden size, in order to do the projection. Some models (such as the `large` ELECTRA variants) do not use these weights, as they have the same embedding and hidden sizes. Fixes #7530.
10-05-2020 08:42:18
10-05-2020 08:42:18
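For readers wondering why these keys exist at all, a simplified illustration (not the actual `modeling_electra` code): the projection is only created when the embedding size differs from the hidden size, so `large` checkpoints never contain, or need, those weights.

```python
import torch
import torch.nn as nn


class EmbeddingsToHidden(nn.Module):
    """Toy stand-in for ELECTRA's embedding-to-hidden projection."""

    def __init__(self, embedding_size: int, hidden_size: int):
        super().__init__()
        # Only materialised when a projection is actually needed (e.g. small/base variants).
        self.embeddings_project = (
            nn.Linear(embedding_size, hidden_size) if embedding_size != hidden_size else None
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        if self.embeddings_project is None:
            return embeddings
        return self.embeddings_project(embeddings)


print(EmbeddingsToHidden(128, 256))    # has a projection, like the small variants
print(EmbeddingsToHidden(1024, 1024))  # no projection, like the large variants
```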
transformers
7,568
closed
[Model card] Java Code Summarizer model
Initial version of the Java code summarizer model for generating code comments.
10-05-2020 03:53:02
10-05-2020 03:53:02
transformers
7,567
closed
Is training distilbert with TPU supported yet?
# 🚀 Feature request Hi! I tried training my own DistilBERT model with [this code](https://github.com/huggingface/transformers/blob/master/examples/distillation/train.py) on GPU, and it was a success. I'm wondering whether training a DistilBERT model on TPU is supported yet, or whether there are any plans to release a new version in which TPU is supported?
10-05-2020 03:11:21
10-05-2020 03:11:21
Distillation is not supported (yet) on TPU. You can check the status of different example scripts [here](https://github.com/huggingface/transformers/tree/master/examples). Those supported by `Trainer` or `TFTrainer` or `pytorch-lightning` can be run on TPU, others cannot.
transformers
7,566
closed
Trainer incorrectly checks pytorch version
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-glibc2.27 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes (not applicable) - Using distributed or parallel set-up in script?: not applicable ### Who can help @sgugger and @prajjwal1 (he added it, according to git blame) ## Information I'm running the token classification example on my own data, and I've faced trouble with fp16 training on torch 1.6.0. The script says that I need apex installed to use the fp16 option. However, apex is not required since torch 1.6.0 came out with native amp support. I've dug into the trainer code and found this version-checking line: https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/src/transformers/trainer.py#L65 Apparently, it is slightly incorrect. There should be <= instead of <, so it will not try to import apex if the torch version is greater than OR EQUAL to 1.6. ```python import torch from packaging import version print(version.parse(torch.__version__) < version.parse("1.6")) # -> False print(version.parse(torch.__version__) <= version.parse("1.6")) # -> True ``` ## To reproduce 1. Install torch 1.6.0 (and do not install apex) 2. Clone the repository 3. cd into examples/token-classification 4. Add '--fp16' to the bottom of the run.sh script 5. Execute the run.sh script ## Expected behavior The script works on torch 1.6.0 without apex installed
10-04-2020 18:22:04
10-04-2020 18:22:04
I am unsure why you think this test is wrong: if this test is true, we import APEX. So if we check with `<=`, we will then try to import APEX on torch 1.6, which is exactly what we are trying to avoid.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
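For readers following along, a minimal sketch of the branching this check is meant to implement — apex only for torch older than 1.6, native AMP otherwise. This is a simplified illustration, not the actual `Trainer` code:

```python
import torch
from packaging import version

if version.parse(torch.__version__) < version.parse("1.6"):
    # Older torch: native AMP is unavailable, so fp16 requires apex.
    try:
        from apex import amp  # noqa: F401
        _use_apex = True
    except ImportError:
        raise ImportError("Please install apex to use fp16 training on torch < 1.6")
else:
    # torch >= 1.6 ships torch.cuda.amp, so apex is not needed.
    from torch.cuda.amp import GradScaler, autocast  # noqa: F401
    _use_native_amp = True
```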
transformers
7,565
closed
Two slow deberta test failures
https://github.com/huggingface/transformers/runs/1204063498?check_suite_focus=true ``` FAILED tests/test_modeling_deberta.py::DebertaModelIntegrationTest::test_inference_classification_head FAILED tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_torch_encode_plus_sent_to_model ``` @LysandreJik, I think?
10-04-2020 16:43:57
10-04-2020 16:43:57
transformers
7,564
closed
Update normalising method in oneshot classifier
Hi, I know I raised this issue before here: https://github.com/huggingface/transformers/pull/5760#issuecomment-673840015 but I do think that it is worth taking a second look at least. My main concern is that, since we are dealing with logits, currently all that happens is a softmax over the entailment logits. This does guarantee summing to one, but it doesn't account for the scale of the logits. For example, suppose the logits for two possible classes for a given sentence came out as: ``` [[1000, 10, 10], [0.1, 0.1, 0.9]] ``` The current method would give class1 the higher probability, even though class1 is clearly a contradiction (its contradiction logit is 100x larger on the log scale), and class2 is clearly an entailment. I understand that this is an extreme case, but just to cover the bases, what I propose is: 1. Softmax across every single sentence-class pair to get everything on the same scale. 2. Get a probability over entailment by dividing each pair's entailment probability by the sum of the entailment probabilities. As seen [here](https://github.com/huggingface/transformers/pull/5760#issuecomment-673840015), the numbers we get are different, but only slightly for the example shown. I do apologise for raising this again; I'm just trying to help.
10-04-2020 11:21:19
10-04-2020 11:21:19
Pinging @joeddav <|||||>Hey @sachinruk, thanks for taking the time to contribute 🤗 I take your point, and the case that you've described can certainly happen with the current method, but I maintain what I said in the other thread: there's not really a single correct way of doing this. We're hijacking the outputs of a model trained on a completely different distribution (NLI data) for our own purposes, so it's just a game of figuring out what makes intuitive sense and what empirically works best. I ran a quick benchmark on the AG's News topic classification dataset comparing the current method with the one you've proposed, and they performed similarly. I got a weighted F1 of ~70 with the entailment-only method used in the pipeline and around 68 with the method you've proposed. If you can show that your proposed method empirically does significantly better, we could look into changing it or adding it as an additional kwarg to specify the method. But as it is, I don't think it makes sense to change the behavior of the pipeline under people's feet.
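For reference, the two normalisation schemes being compared can be sketched on the example logits from the description above. This assumes the NLI head orders its classes with entailment last; it is an illustration, not the pipeline's actual implementation:

```python
import torch

# NLI logits for two candidate labels, columns = [contradiction, neutral, entailment]
nli_logits = torch.tensor([[1000.0, 10.0, 10.0],
                           [0.1, 0.1, 0.9]])

# Current pipeline behaviour: softmax over the entailment logits only.
current = nli_logits[:, -1].softmax(dim=0)

# Proposed: softmax within each (sentence, label) pair first,
# then renormalise the entailment probabilities across labels.
entail_probs = nli_logits.softmax(dim=1)[:, -1]
proposed = entail_probs / entail_probs.sum()

print(current)   # heavily favours the first label
print(proposed)  # favours the second label
```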
transformers
7,563
closed
Error Loading Gpt-2 model after training from scratch.
``` Error(s) in loading state_dict for GPT2LMHeadModel: size mismatch for transformer.h.0.mlp.c_fc.weight: copying a param with shape torch.Size([768, 6]) from checkpoint, the shape in current model is torch.Size([768, 3072]). ``` I trained a GPT-2 model from scratch. When I tried loading the model using ``` config = GPT2Config.from_json_file('../input/hindigpt/config.json') # config.type_vocab_size=3072 model = GPT2LMHeadModel(config) model.load_state_dict(torch.load('../input/hindigpt/pytorch_model.bin')) ``` I am getting the above error. Here is the copy of my config.json file ``` { "activation_function": "gelu_new", "architectures": [ "GPT2LMHeadModel" ], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "gradient_checkpointing": false, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_inner": 6, "n_layer": 6, "n_positions": 512, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "vocab_size": 36021 } ``` Can anyone help me with this?
10-04-2020 04:04:54
10-04-2020 04:04:54
Hello! How did you train your GPT-2 from scratch? Which framework did you use?<|||||>> Hello! How did you train your GPT-2 from scratch? Which framework did you use? Used PyTorch framework.<|||||>Used this script ``` import os import gc import glob import torch import pickle import joblib from tqdm.auto import tqdm from pathlib import Path from tokenizers import ByteLevelBPETokenizer from transformers import GPT2Tokenizer import torch from transformers import GPT2TokenizerFast from transformers import GPT2LMHeadModel from transformers import DataCollatorForLanguageModeling from transformers import TextDataset from transformers import GPT2Config from transformers import Trainer, TrainingArguments tokenizer = GPT2Tokenizer.from_pretrained('hindi/') vocab_size = tokenizer.vocab_size print(vocab_size) print(torch.cuda.is_available()) config = GPT2Config( vocab_size=vocab_size ) tokenizer = GPT2TokenizerFast.from_pretrained("hindi/", max_len=512) model = GPT2LMHeadModel(config=config) print(model.num_parameters()) print("Now let's build our training Dataset") dataset = TextDataset( tokenizer=tokenizer, file_path="data/train.txt", block_size=132, ) print("Start Training") data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) print("Trainer Classes") training_args = TrainingArguments( output_dir="hindi/", overwrite_output_dir=True, num_train_epochs=1, per_gpu_train_batch_size=64, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train() trainer.save_model("hindi/") print("done") ``` for training @LysandreJik
transformers
7,562
closed
Output global_attentions in Longformer models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #7514 [From @patrickvonplaten]: This PR introduces a new structure for the output attentions in Longformer. There are two types of attentions in Longformer: local attention outputs and global attention outputs. Previously, if global attention was used the `attentions` were set to the global attentions and the local attentions were discarded. This is suboptimal as one has no access to the local attentions in this case. The better design IMO is to have both `attentions` and `global_attentions` in Longformer (similar to `encoder_attentions`, `decoder_attentions` in Seq2Seq and `attentions`, `ngram_attentions` in ProphetNet). Also, the PR switches from tuple indexing to using `ModelOutput` kwargs in the `test_attention_output` function which IMO we should bit by bit do for all tests from now on. In PT Longformer, the `is_global_attn_index` tensor is now only calculated once instead for each layer which slightly speeds up computation. Awesome job @gui11aume! Especially for the docstring -> the description of `global_attentions` and `attentions` is impeccable. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
10-04-2020 01:44:37
10-04-2020 01:44:37
We are actually having a longer internal discussion about the general handling of different attentions - this might still take a couple of days to be decided.<|||||>> Cool, wonderful that you added so many tests! > > @patrickvonplaten, you say: > > > if global attention was used the attentions were set to the global attentions and the local attentions were discarded > > In this case, wouldn't the best model output be one where there are both `local_attentions`, `global_attentions` **as well as** `attentions` that are kept simply for backwards compatibility? The previous design led to errors as shown here: https://github.com/huggingface/transformers/issues/5646 -> so I think it's fine to break backwards compatibility here. `local_attentions` would arguably be a better name than `attentions`, but for consistency with other models and for a standard case where `global_attention_mask=None`, so that `local_attentions` == (all) `attentions`, I would prefer to keep the name `attentions` here. > > Other than that and the docstrings, LGTM! Docstrings will be corrected! <|||||>Works for me!<|||||>@gui11aume great work again - understanding longformer's attention is not straightforward and your doc string was spot-on! Looking forward to your next contribution ;-) Hope you're fine with the small changes I made<|||||>@patrickvonplaten @lalitpagaria this merge PR breaks longformer training with gradient checkpointing True please fix as i am unable to train models with latest models. error comes on this line https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_longformer.py#L1072 model expect 2 to 6 positional arguments, but 7 where giving. <|||||>> @patrickvonplaten @lalitpagaria this merge PR breaks longformer training with gradient checkpointing True > please fix as i am unable to train models with latest models. > > error comes on this line > https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_longformer.py#L1072 > > model expect 2 to 6 positional arguments, but 7 where giving. Hey @manishiitg, Thanks a lot for message! This did indeed break gradient checkpointing for longformer - sorry! This PR fixes it: https://github.com/huggingface/transformers/pull/8415
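A short usage sketch of the output structure described in this PR. The field names (`attentions` for local, `global_attentions` for global) follow the PR description, and the checkpoint name is just an example:

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the first token global attention

outputs = model(
    **inputs,
    global_attention_mask=global_attention_mask,
    output_attentions=True,
    return_dict=True,
)
local_attentions = outputs.attentions          # per-layer local (windowed) attention weights
global_attentions = outputs.global_attentions  # per-layer global attention weights (new field)
```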
transformers
7,561
closed
Moved feature generation into getitem to save ram
# What does this PR do? My Colab notebook would run out of memory when creating the features for the SQuAD dataset, so I moved feature creation into the `__getitem__` method. I also added a function to force-create the features immediately, in case someone still wants the features created at the beginning. Fixes # (issue) ## Before submitting - I submitted an issue, but never got any feedback. - I did not update the documentation, because it doesn't exist for this part. - I did not write tests either, since I couldn't get the tests to work on my machine; however, I did test it while training the model, and I could easily write tests if you believe they're necessary. ## Who can review? @patrickvonplaten, maybe you can take a look at the pull request?
10-03-2020 21:46:56
10-03-2020 21:46:56
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
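The idea of the PR — building features lazily inside `__getitem__` instead of up front — can be sketched roughly as below. The class name, the `(question, context)` example format and the `force_create_features` helper are illustrative, not the actual PR code:

```python
from torch.utils.data import Dataset

class LazySquadDataset(Dataset):
    """Tokenize each example on access, so features never sit in RAM all at once."""

    def __init__(self, examples, tokenizer, max_length=384):
        self.examples = examples        # e.g. a list of (question, context) pairs
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        question, context = self.examples[idx]
        encoding = self.tokenizer(
            question,
            context,
            truncation="only_second",
            max_length=self.max_length,
            padding="max_length",
            return_tensors="pt",
        )
        # squeeze the batch dimension added by return_tensors="pt"
        return {name: tensor.squeeze(0) for name, tensor in encoding.items()}

    def force_create_features(self):
        """Optionally materialise everything up front, mirroring the PR's escape hatch."""
        return [self[i] for i in range(len(self))]
```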
transformers
7,560
closed
Remove labels from the RagModel example
# What does this PR do? As pointed out in #7554, the `RagModel` does not accept `labels`. Not sure why they are in the documentation. This PR fixes that. Fixes #7554
10-03-2020 21:00:36
10-03-2020 21:00:36
transformers
7,559
closed
Is this realy a list or a Dict[str, int]? I think the docstring might be wrong because in the model json file it is stored as a dict.
https://github.com/huggingface/transformers/blob/9bdce3a4f91c6d53873582b0210e61c92bba8fd3/src/transformers/configuration_utils.py#L117 See here for example: https://s3.amazonaws.com/models.huggingface.co/bert/oliverguhr/german-sentiment-bert/config.json
10-03-2020 19:03:04
10-03-2020 19:03:04
transformers
7,558
closed
[Model card] SinhalaBERTo model.
This is the model card for keshan/SinhalaBERTo model.
10-03-2020 18:12:09
10-03-2020 18:12:09
Thanks for sharing. If you'd like you can contribute sample inputs for Sinhala at https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts – Thanks!
transformers
7,557
closed
Enable debug with TF2 and eager execution
TF2 with eager execution no longer supports tf.gradients. In order to run the code with eager execution without crashing, tf.GradientTape should be used instead of tf.gradients. [how-to-compute-gradient-of-output-wrt-input-in-tensorflow-2-0](https://stackoverflow.com/questions/59145221/how-to-compute-gradient-of-output-wrt-input-in-tensorflow-2-0)
10-03-2020 17:38:48
10-03-2020 17:38:48
Hello! Thanks for your PR. The reason we used `tf.gradients` is that it handles `None` gradients while `tape.gradient` doesn't, so for now, unless we can find a better way to handle the `None` values for gradients, we will keep it like this. Also, the training is forced to be done in graph compilation with `tf.function`, so eager mode is deactivated anyway.<|||||>Thanks for the detailed answer. Another reason I made the change is to deal with the following error that I get after upgrading to the latest version of the transformers library. `ValueError: distributed_training_steps() should not modify its Python input arguments. Check if it modifies any lists or dicts passed as arguments. Modifying a copy is allowed.` This error disappears after my change. tensorflow==2.3.1 transformers==3.3.1<|||||>Ok, can you open an issue with the details on how to reproduce the error, please?<|||||>I am closing this pull request as I found a workaround for my issue with **parameter modification**.
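For context on the `None`-gradient objection above, a minimal sketch of how it could be handled with `tf.GradientTape`; this is an illustration, not the library's actual training step:

```python
import tensorflow as tf

def compute_gradients(model, features, labels, loss_fn):
    with tf.GradientTape() as tape:
        logits = model(features, training=True)
        loss = loss_fn(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    # tape.gradient returns None for variables that did not contribute to the loss;
    # replace those with zeros so the optimizer can still apply the update.
    grads = [
        grad if grad is not None else tf.zeros_like(var)
        for grad, var in zip(grads, model.trainable_variables)
    ]
    return loss, grads
```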
transformers
7,556
closed
Problem with automatic best model loading.
When I provide `load_best_model_at_end=True`, `metric_for_best_model='eval_f1_macro` and `greater_is_better=True` together with `save_total_limit=2` this happens: ``` Traceback (most recent call last): File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 778, in _run_trial result = func(trial) File "train_aws.py", line 174, in opt trainer.train() File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 810, in train and self.global_step % self.args.eval_steps == 0 ZeroDivisionError: integer division or modulo by zero Traceback (most recent call last): File "train_aws.py", line 197, in <module> study.optimize(opt) File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 328, in optimize func, n_trials, timeout, catch, callbacks, gc_after_trial, None File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 726, in _optimize_sequential self._run_trial_and_callbacks(func, catch, callbacks, gc_after_trial) File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 755, in _run_trial_and_callbacks trial = self._run_trial(func, catch, gc_after_trial) File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 778, in _run_trial result = func(trial) File "train_aws.py", line 174, in opt trainer.train() File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 810, in train and self.global_step % self.args.eval_steps == 0 ZeroDivisionError: integer division or modulo by zero ```
10-03-2020 15:24:35
10-03-2020 15:24:35
It's hard to know what's going on without seeing your code. The error indicates `args.eval_steps = 0` and this argument is not modified by the `Trainer` itself, so you should make sure you did set it to something >0.<|||||>You are right. I had `eval_steps = 0`. Thanks for the feedback!
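In other words, the fix is to give the trainer a non-zero `eval_steps` when evaluating by steps together with `load_best_model_at_end`. A rough sketch with hypothetical values (recent versions use `evaluation_strategy="steps"`; older 3.x releases used `evaluate_during_training=True` instead):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    evaluation_strategy="steps",   # evaluate every eval_steps
    eval_steps=500,                # must be > 0, otherwise the modulo above divides by zero
    load_best_model_at_end=True,
    metric_for_best_model="eval_f1_macro",
    greater_is_better=True,
    save_total_limit=2,
)
```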
transformers
7,555
closed
Update Code example according to deprecation of AutoModeWithLMHead
'The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.' I dont know how to change the 'How to use this model directly from the 🤗/transformers library:' part since it is not part of the model-paper # What does this PR do? Fix the future deprecation of `AutoModelWithLMHead` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @julien-c <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
10-03-2020 12:27:00
10-03-2020 12:27:00
@julien-c what is the test that failed testing? i mean, there is no test that should fail because of my minor changes...
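For reference, the replacement the deprecation warning asks for looks roughly like this, using `gpt2` as an arbitrary causal-LM checkpoint:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # was: AutoModelWithLMHead

# For masked language models use AutoModelForMaskedLM,
# and for encoder-decoder models use AutoModelForSeq2SeqLM.
```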
transformers
7,554
closed
RAG: error in outputs = model(input_ids=input_ids, labels=input_dict["labels"])
I tried the following code given in the documentation. ``` from transformers import RagTokenizer, RagRetriever, RagModel import torch tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base") retriever = RagRetriever.from_pretrained("facebook/rag-token-base", index_name="exact", use_dummy_dataset=True) # initialize with RagRetriever to do everything in one forward call model = RagModel.from_pretrained("facebook/rag-token-base", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt") input_ids = input_dict["input_ids"] outputs = model(input_ids=input_ids, labels=input_dict["labels"]) ``` In the last step, it gives the following error. **/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'labels'**
10-03-2020 12:08:12
10-03-2020 12:08:12
Yes, the `RagModel` just contains the bare model and has no training objective (like all HF `XxxModel`). It doesn't take a `labels` argument, the example in the documentation is wrong.
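Since the bare `RagModel` has no head that computes a loss, the generation variants from the same family accept `labels` instead. A sketch close to the corrected documentation example, reusing the dummy index from above:

```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch(
    "How many people live in Paris?",
    "In Paris, there are 10 million people.",
    return_tensors="pt",
)
outputs = model(input_ids=input_dict["input_ids"], labels=input_dict["labels"])
loss = outputs.loss  # available because this head computes the generation loss
```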
transformers
7,553
closed
[model_card] bert-base-5lang-cased
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
10-03-2020 11:35:48
10-03-2020 11:35:48
transformers
7,552
closed
Add batch inferencing support for GPT2LMHeadModel
# What does this PR do? This adds correct (absolute) positional embedding to the output, when given attention mask. The positional embedding is calculated using attention mask. Fixes #3021 Here is an example usage: ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2', return_dict=True) # when generating, we will use the logits of right-most token to predict the next token # so the padding should be on the left tokenizer.padding_side = "left" tokenizer.pad_token = tokenizer.eos_token # to avoid an error sentences = ["Hello, my dog is a little", "Hello, my dog is", # use different length sentences to test batching ] inputs = tokenizer(sentences, return_tensors="pt", padding=True) output_sequences = model.generate( input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], do_sample=False, # disable sampling to test if batching affects output ) for i in range(len(sentences)): print(tokenizer.decode(output_sequences[i])) # you can use skip_special_tokens=True in decode() to remove padding token # but note that it will also remove other special_tokens ``` outputs: ``` Hello, my dog is a little bit of a mess. I'm not sure if he's going <|endoftext|><|endoftext|>Hello, my dog is a little bit of a mess. I'm not sure if he ``` comment: * I think this should be used in `examples/text-generation/run_generation.py`, but I don't know much about other models, and it (code) would be weird if only gpt2 supports batch inferencing. albert, bert, GPT2, XLM: @LysandreJik TextGeneration: @TevenLeScao documentation: @sgugger @patrickvonplaten
10-03-2020 10:48:36
10-03-2020 10:48:36
This enables significantly faster generation. Here is a simple test I ran. | | generate 20 tokens | generate 100 tokens | |-----------------|---------------|----------------| | batch size = 1 | 45.2 s | 3min 42s | | batch size = 32 | 2.25 s (20x) | 8.36 s (26.5x) | ```python # following above code data = sentences * 128 # total 256 sentences model.cuda(); data = [' '.join([x]*10) for x in data] # make the prompt longer to be more realistic from tqdm.auto import tqdm def test(batchsize = 1, max_gen_len = 20): for i in tqdm(range(0, len(data), batchsize)): batch = data[i: i+batchsize] inputs = tokenizer(batch, return_tensors="pt", padding=True) output_sequences = model.generate( input_ids=inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), do_sample=False, # disable sampling to test if batching affects output pad_token_id=tokenizer.eos_token_id, max_length=len(inputs['input_ids'][0]) + max_gen_len, # let it generate longer ) outputs = [tokenizer.decode(x) for x in output_sequences] %time test(1, 20) %time test(32, 20) %time test(1, 100) %time test(32, 100) ``` <|||||>Hey @cccntu - this is a great addition! I very much like your appraoch here. I also checked that all GPT2 SLOW tests function correctly and added a test to make sure batch generation works as expected! With the current implementation, the user would not be able to define his own `position_ids` for generate, since they are always overwritten in the `prepare_input_ids_for_generation`, but I think this is OK because: 1) Previously, it was impossible for the user to use `position_ids` because they would have to be extended by 1 each generation step - a feature which is not implemented 2) I don't see any reason why position_ids should be different from the way it is implement in the PR right now @LysandreJik - this feature was heavily requested by the community (linked a couple of issues below) and I think this is a great way to handle GPT2 batch generation. What do you think?<|||||>Related issues: https://github.com/huggingface/transformers/issues/6742, https://github.com/huggingface/transformers/issues/4746, https://github.com/huggingface/transformers/issues/4824 <|||||>@cccntu - Great work on this PR! If this PR is merged and you want to help the community a tiny bit more, you could give a short description (similar to what you've done above) on how to do batch generation with GPT2 here: https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517. Many people have been asking for this so they would be very glad to see a short forum post about it. Thanks a lot again! <|||||>Awesome, great work @cccntu ! It would be amazing if you could write a little description of how your PR works on the forum: https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517 - the community would be very thankful I think :-) <|||||>@patrickvonplaten Thanks for the suggestions! I just added some description to the forum post. 😃 link to the post for future reference: https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517/2<|||||>Can you please add batch inferencing for GPT2DoubleHeadsModel too?<|||||>@patrickvonplaten @cccntu I can see how batch generation is now available. I was wondering, if there's already a way to do the same but with different arguments of `max_len` & `min_length` per encoded_text in a batch in `model.generate()`. 
Goal here is to generate new text for a batch of encoded text with variable size.<|||||>Hi @spate141, Did you mean passing a `max_len` & `min_length` as n-element array? It would fail here: https://github.com/huggingface/transformers/blob/121dd4332b7e44932b0fbe2fa18bc9fa0131402c/src/transformers/generation_utils.py#L289 Actually, the main issue is here: https://github.com/huggingface/transformers/blob/121dd4332b7e44932b0fbe2fa18bc9fa0131402c/src/transformers/generation_utils.py#L539 We need the right-most logits not be padding, and without modifying `generation_utils.py`, we need to use left-padding, and consequently we need this PR to make sure the positional embedding is correct. You can also checkout the discussions in #3021, or the forum post: https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517/3 <|||||>> Did you mean passing a `max_len` & `min_length` as n-element array? - Yes, exactly! Instead of single int values for all texts in a batch... an array of values for each text in a batch. I saw the code and I can see why it will fail. https://github.com/huggingface/transformers/issues/3021 seems informative, I'll take a look. #### Meanwhile I found this way to get what I mentioned: - Let's assume a model accepts input of `max_len = 64` and we want to generate new text for a piece of text of size 300 tokens. - Since we know what's the `max_len` is, we have make sure that we split our input text into 5 batches: `[64, 60, 58, 50, 56, 12]`. - This was done in some clever way to ensure that each text segment follows valid grammar rule and also don't go above that `max_len` limit. - For all these 6 text segments we want to generate new text with following min, max values: - min_values: `[100, 100, 100, 100, 100, 25]` - max_values: `[120, 120, 120, 120, 120, 50]` - To do that, I can just pass a global min & max values (i.e. 100, 120 respectively) to `model.generate()` along with a tokenized batch of input text segments. - input_ids_shape: `(6, 64)`, min_len: `100`, max_len: `120` - My only issue here is regarding last text segment in a batch of (6, 64) tokenized tensor. Ideally, we want new generated text of size min of 25 tokens and max of 50 tokens. Generating a new text of size 100 tokens from an input of 12 tokens will be gobbledygook. - To handle this, I can just take the last segment of generated text that belongs to our last input text; and split the text and discard everything above its ideal original min/max limit, i.e. (25, 50) OR - I can just go with doing same but I combine first 5 text segments and generate text on (5, 64) and generate text for the last one (1, 64) in two pass OR - I can just generate everything in 6 pass for each 6 text segments and pass their ideal individual min/max limits @cccntu In your 2nd comment to this pull request, you posted some impressive results on why doing batch_generation is ideal, specially let's say when you have a GPU. I'm just trying to figure out if doing the same in my case is worth the latency when I have to do some post-processing. I'll post some latency results once I have this setup ready. <|||||>**Update:** @cccntu I went with my 1st approach where I'm generating text for all texts in a single batch with global min, max values. In most cases where my last text chunk in batch is _smaller_ meaning its min/max values are smaller than rest of text chunks in a same batch; I'm just trimming tokens. Results are impressive so far. 
Some numbers just in case someone stumble upon this thread in future: **Fixed size text batches:** - This shows when passing list of text chunks as single batch tensor Vs passing text chunks as individual in for loop. `max_len`, `min_len` variables are kept same in both. Y-axis shows total time in seconds for model to finish generating text. - All the text chunks are of same size. ![image](https://user-images.githubusercontent.com/10580847/109713174-81d48000-7b66-11eb-94a6-d0c3e6ac77b8.png) **Variable size text batches:** - Same as above, but here I'm using variable size text chunks. - For example: `2 Long, 1 Short` means my input is 2 long size texts + 1 short size text. This is to test what happens when I'm generating text for variable size text chunks in a single batch. - Also to note that I'm trimming generated text for short text chunks in post processing. So, time on Y-axis include that. ![image](https://user-images.githubusercontent.com/10580847/109713189-87ca6100-7b66-11eb-8859-471c6929668d.png) Overall, batch text generation seems very useful(🎉) despite one has to add some overhead on top to manage some use cases. <|||||>@cccntu Thanks for your great work! I stumbled upon this thread and would like to know: 1. Would this batching mechanism works for GPT-NEO? 2. Would this batching mechanism works for pipeline inference? If so, is there any changes or considerations I need to do or know?<|||||>Thanks for the code! I wonder if now I could generate sentences in a batch withother models (BertGeneration, for instance)? Looking forward to your reply!<|||||>@cccntu Thanks for your code. By using the correct position_id in this case, we can do batch inference in pytorch model now. But when we export the gpt2 model to onnx with `GPT2OnnxConfig` ```python onnx_config = GPT2OnnxConfig(model.config) ## or using past_key_values mode # onnx_config = GPT2OnnxConfig(model.config, use_past=True) ``` Then the onnx model inputs don't contation position_id but only input_ids nand attention_masks。 So we can't do correct batch_inference with onnx model now, right? <|||||>Thank you for the code. I wonder if you have tested whether there is performance drop when using batch generation? Especially when the GPT-2 model is finetuned with right-padded data.
transformers
7,551
closed
RAG: NameError: name 'load_dataset' is not defined
I tried to load RAG according to the documentation. `retriever = RagRetriever.from_pretrained("facebook/rag-token-base", index_name="exact", use_dummy_dataset=True)` The above line gave the following error: **/python3.6/site-packages/transformers/retrieval_rag.py", line 220, in __init__ self.dataset = load_dataset( NameError: name 'load_dataset' is not defined**
10-03-2020 09:12:00
10-03-2020 09:12:00
I think this is a duplicate of #7536. RAG requires datasets and faiss to be installed in your environment to work properly. The fix with proper error messages is on in #7537.
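In practice the error goes away once the optional dependencies are present in the environment; roughly:

```python
# RagRetriever needs the optional `datasets` and `faiss` packages:
#   pip install datasets faiss-cpu
from transformers import RagRetriever

retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
)
```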
transformers
7,550
closed
Problem with Finetuned GPT-2
Hi, I have written this code to finetune gpt-2 on a new corpus. ``` from transformers import ( AutoModelWithLMHead, AutoConfig, Trainer, AutoTokenizer, TextDataset, DataCollatorForLanguageModeling, TrainingArguments) def modelTrainer(text_path, output_dir, batch_size=2, conf='gpt2', cache_dir='./Cache'): config = AutoConfig.from_pretrained(conf) model = AutoModelWithLMHead.from_config(config) tokenizer = AutoTokenizer.from_pretrained(conf) data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) train_dataset = TextDataset( tokenizer=tokenizer, file_path=text_path, block_size=128, cache_dir=cache_dir ) training_args =TrainingArguments( output_dir=output_dir, num_train_epochs=1, per_device_train_batch_size=batch_size, warmup_steps=500, save_steps=500, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, prediction_loss_only=True ) trainer.train() trainer.save_model() ``` And then I use this to generate text from the finetuned model: ``` from transformers import pipeline, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('gpt2') def textGenerator(model_dir): gen = pipeline('text-generation', model=model_dir, tokenizer=tokenizer) return gen ``` Now my problem is that even with 1 epoch a training, the quality of generated text deteriorates drastically and I get some unknown tokens in the output, like: 'Hello,�\n\n,�\n”,\n and\n,,� the\n,\n\n the,\n� Alice\n\n,“\n the\n,,,“ on”, to� she the�\n'. I'm guessing there is a problem with tokenizer. Can anybody help me with this?
10-03-2020 06:49:23
10-03-2020 06:49:23
This problem was resolved with 10 epochs of training.
transformers
7,549
closed
Incorrect tokenization with tokens added using tokenizer.add_tokens()
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: macOS-10.15.6-x86_64-i386-64bit - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> It's a tokenization issue, so tagging @mfuntowicz Also happens with rust tokenizers, so tagging @n1t0 ## Information Model I am using (Bert, XLNet ...): RoBERTa (but happens anywhere) The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] irrelevant to the bug * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python >>> from transformers import RobertaTokenizer >>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base') >>> tokenizer.add_tokens(['\U00030001', '\U00030002', '\U00030002\U00030001']) 3 >>> tokenizer.tokenize('\U00030002\U00030001') ['\U00030002\U00030001'] >>> tokenizer.tokenize('\U00030001\U00030002\U00030001') ## produces incorrect output. the last two tokens should've been together and should not have gotten split ['\U00030001', '\U00030002', '\U00030001'] >>> tokenizer.unique_no_split_tokens ['\U00030001', '<s>', '</s>', '<unk>', '\U00030002\U00030001', '<mask>', '<pad>', '\U00030002'] >>> tokenizer.unique_no_split_tokens.sort(key=lambda x: -len(x)) ## On sorting the unique_no_split_tokens by the lengths, this seems to get fixed. I suspect that internally the code is checking the presence of added tokens in this order? >>> tokenizer.unique_no_split_tokens ['<mask>', '<unk>', '<pad>', '</s>', '<s>', '\U00030002\U00030001', '\U00030001', '\U00030002'] >>> tokenizer.tokenize('\U00030001\U00030002\U00030001') ['\U00030001', '\U00030002\U00030001'] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Tokenization seems to depend on the order in which tokens get added to the model. Just to show what happens, I've added some (very high valued) unicode character tokens and run the tokenization. Basically, '\U00030002', '\U00030001' got split the first time, which should not have happened since ''\U00030002\U00030001' is part of the vocabulary. On sorting the tokenizer.unique_no_split_tokens list by length, it seems to fix this issue. This makes me uneasy using add_tokens now with tokens that share overlaps. Also, the problem persists with the rust tokenizers library (RobertaTokenizerFast). But I don't want to open up another issue without first making sure that this is an issue. <!-- A clear and concise description of what you would expect to happen. -->
10-02-2020 23:32:53
10-02-2020 23:32:53
Pinging @n1t0 for advice.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,548
closed
Longformer2Roberta: global_attention_mask is never used
I was following the Longformer2Roberta tutorial at https://github.com/huggingface/transformers/blob/master/model_cards/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16/README.md, and it seems that 'global_attention_mask' is never used, because this column is removed by the code at https://github.com/huggingface/transformers/blob/9bdce3a4f91c6d53873582b0210e61c92bba8fd3/src/transformers/trainer.py#L301. So you either have to add this column to the signature that is inspected at https://github.com/huggingface/transformers/blob/9bdce3a4f91c6d53873582b0210e61c92bba8fd3/src/transformers/trainer.py#L324, or set `remove_unused_columns=False` in TrainingArguments. @patrickvonplaten
10-02-2020 23:24:45
10-02-2020 23:24:45
This should be solved soon by the new `generate()` design: https://github.com/huggingface/transformers/pull/6949<|||||>Probably still takes ~1,2 weeks until merge<|||||>@patrickvonplaten are you sure you mentioned the correct issue? The issue about the `generate()` function was this one #7489 In the current issue I mention that `global_attention_mask` is never used during training. <|||||>You are 100% correct @alexyalunin :D - sorry my bad! Thanks for linking the correct issue!<|||||>Regarding this issue, I will add more scripts showing how Longformer2Roberta can be trained. I'll pay special attention to your issue here then :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
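Until the reworked `generate()`/training scripts land, the workaround described in the issue is simply to stop `Trainer` from dropping the column; a minimal sketch:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    remove_unused_columns=False,  # keep global_attention_mask in the batches passed to the model
)
```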
transformers
7,547
closed
Converting Tensorflow checkpoint to Pytorch not work for TF models downloaded using TFAutoModel.from_pretrained()
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I took a try of converting tf checkpoint to pytorch and it works well on the model that in the links on your [page](https://huggingface.co/transformers/converting_tensorflow_models.html) However, the conversion seems not working with models(bert, albert..) that downloaded using TFAutoModel.from_pretrained() I am wondering if I miss anything or those models are not currently supported? Thanks <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
10-02-2020 22:14:59
10-02-2020 22:14:59
Could you provide an example code that didn't work?<|||||>Providing the example for repo as below(e.g. let's try bert): `from transformers import AutoConfig, TFAutoModel, BertForPreTraining, load_tf_weights_in_bert` `model_name = "bert-base-uncased"` `config = AutoConfig.from_pretrained(model_name)` `tf_model = TFAutoModel.from_pretrained(model_name, config)#get the tf model using TFAutoModel` `tf_model.save_weights("./bert")#save tf model to ckpt` `pt_model = BertForPreTraining(config)#init pt model` `load_tf_weights_in_bert(pt_model, config, "./")#convert tf ckpt to pt model`<|||||>You should use `save_pretrained` and `from_pretrained` to do the conversion: ```py tf_model.save_pretrained("./bert") pt_model = BertForPreTraining.from_pretrained("./bert", from_tf=True) ```<|||||>Yes, the conversion above works for me. But it seems to me that load_tf_weights_in_bert() works only with limited Bert model if I want to convert tensorflow checkpoint to a pytorch model.<|||||>The `load_tf_weights_in_bert` method is meant to be used to convert BERT models from the original implementation (google-research/bert), not to do the conversion between our architectures in PyTorch <> our architectures in TensorFlow. Closing as the conversion shown worked!<|||||>Thanks! @LysandreJik
transformers
7,546
closed
[s2s] label smoothing loss should be normalized
by the number of padding tokens in a batch. Currently, if you change `--train_batch_size` or `--max_target_length`, your loss value will scale wildly, making it hard to compare runs.
10-02-2020 19:59:54
10-02-2020 19:59:54
Hi, I'm interested in taking a look at this. Could you please point out where to start?<|||||>1) On a branch try to add some logic to `label_smoothed_nll_loss` https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L35 such that changing `train_batch_size` doesn't wildly change loss. 2) validate (or ask for help validating) that the change does not hurt fine-tuning performance. The existing code is copied from `fairseq`, so we need to be fairly sure that the change does no harm before we merge it. cc @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
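A hedged sketch of what point 1 could look like: the fairseq-style label-smoothed loss with the sums divided by the number of non-padding target tokens, so the value no longer scales with batch size or target length. This illustrates the idea and is not the code that was eventually merged:

```python
import torch

def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100):
    """lprobs: (batch * seq_len, vocab) log-probabilities; target: (batch * seq_len,) token ids."""
    if target.dim() == lprobs.dim() - 1:
        target = target.unsqueeze(-1)
    pad_mask = target.eq(ignore_index)
    # avoid gathering at the ignore_index position (it is masked out below anyway)
    safe_target = target.clamp(min=0)
    nll_loss = -lprobs.gather(dim=-1, index=safe_target)
    smooth_loss = -lprobs.sum(dim=-1, keepdim=True)
    nll_loss.masked_fill_(pad_mask, 0.0)
    smooth_loss.masked_fill_(pad_mask, 0.0)
    num_active = (~pad_mask).sum()  # number of non-padding target tokens in the batch
    nll_loss = nll_loss.sum() / num_active
    smooth_loss = smooth_loss.sum() / num_active
    eps_i = epsilon / lprobs.size(-1)
    loss = (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
    return loss, nll_loss
```

Normalizing by the token count rather than summing keeps the reported loss comparable when `--train_batch_size` or `--max_target_length` changes, which is the behaviour the issue asks for.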
transformers
7,545
closed
[s2s] fix lockfile and peg distillation constants
10-02-2020 19:50:50
10-02-2020 19:50:50
transformers
7,544
closed
Create Model Card For "abhilash1910/french-roberta" Model
# Model Card (Roberta MLM on French News Corpus) Model Card for [abhilash1910/french-roberta](https://huggingface.co/abhilash1910/french-roberta). Contains the model specification and important links which helped me create this. It uses the Roberta MLM on a French News corpus (extracted from Leipzig).
10-02-2020 17:07:59
10-02-2020 17:07:59
Thanks for sharing!<|||||>Thank you @julien-c !
transformers
7,543
closed
Seq2SeqTrainer: missing features
These could all be separate issues, if you want to tackle 1 feel free to make a new issue to link to your PR, or not! 1. Configure lr scheduler from the command line https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L119 2. Configure dropout, layerdrop from the command line: https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L92 3. Logging to wandb seems much less frequent than with the PL integration, which sends train loss to wandb every step. 4. Losses printed out are different than with PL, (seem to be normalized in some way). This merits investigation. cc @patil-suraj
10-02-2020 16:25:05
10-02-2020 16:25:05
@sshleifer 2. Done #7532 3. logging can be controlled using `--logging_steps`, default is 500 4. I've also observed this, PL does something different I guess, final metrics should be same IMO
transformers
7,542
closed
Allow nested tensors in predicted logits
# What does this PR do? Allow deeply nested lists or tuples of tensors in the predicted logits of a model. Also exclude the past states from those logits when the model uses them. <!-- Remove if not applicable --> Fixes #7539
10-02-2020 16:09:06
10-02-2020 16:09:06
transformers
7,541
closed
T5: forward and generate produce different results even for greedy decoding of a single token
Tagging @sshleifer following earlier discussions. ## Environment info - `transformers` version: 3.2.0 (also 3.0.2) - Platform: Linux-4.4.0-17763-Microsoft-x86_64-with-glibc2.10 (WSL, but also on normal Ubuntu) - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: Fails independently of this - Using distributed or parallel set-up in script?: Fails independently of this ## Information I am using the T5 model to train on a seq2seq task. The model.forward() and model.generate() can differ, even for greedy decoding of a single token, since model.generate() seems to add a padding token before generating. This matters for "classification-style" tasks where we usually decode a single token (positive/negative for sentiment for instance). The problem arises when using: * [X] my own modified scripts: (give details below) Very similar to example: this code is enough: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration import torch tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") def run_comparison(input_sent): input_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(input_sent)) input_ids = torch.tensor([input_ids]) target_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("chocolate </s>")) target_ids = torch.tensor([target_ids]) res_fwd = torch.argmax(model(input_ids=input_ids, labels=target_ids)[1], -1) res_gen = model.generate(input_ids=input_ids, max_length=2) print("Running comparison for %s" % input_sent) print("Using model.forward(): ", tokenizer.decode(res_fwd[0]), res_fwd[0]) print("Using model.generate(): ", tokenizer.decode(res_gen[0]), res_gen[0]) run_comparison("I love </s>") ``` Outputs: Running comparison for I love </s> Using model.forward(): and tensor([ 3, 11]) Using model.generate(): tensor([0, 3]) ## To reproduce Steps to reproduce the behavior: 1. Run code above. ## Expected behavior model.forward() and model.generate() give the same output for single token greedy decoding.
10-02-2020 16:04:13
10-02-2020 16:04:13
Part of the discrepancy here is that during `generate` we put `decoder_start_token_id` at the front of the output, tell the model to predict the next token, then append that next token to the end of the output. For `forward`, at the first position, we tell the model to predict the next token conditional on the previous token in `decoder_input_ids`, which should be `pad_token=0`, but we don't have any append step since everything is done in parallel. If you ignore the leading zero, have you found examples where `generate` and `forward` produce different outputs? <|||||>Yes, though for outputs of length >= 1, where I guess it would be expected since forward is not autoregressive whereas generate is. Playing around with a few sentences, it seems that the behaviors/results are the same iff you call `model.generate(input_ids=input_ids, max_length=2)` (max_length=1 breaks because it counts the padding token) and discard the leading padding. This sounds reasonable (albeit a bit undocumented), and I guess the leading 0 is ignored when calling decode, so most users don't run into this. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Closing given the discussion above.
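A sketch of the point made above: if `forward` is fed `decoder_start_token_id` (the pad token for T5) as its first decoder input, greedy `forward` and `generate` agree once the leading token that `generate` prepends is ignored. It reuses the `t5-small` checkpoint from the issue; minor API details may differ between versions:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("I love </s>", return_tensors="pt", add_special_tokens=False).input_ids

# forward: condition the first decoder step on decoder_start_token_id (pad, id 0)
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, return_dict=True).logits
first_token_forward = logits[0, -1].argmax()

# generate: output is [decoder_start_token_id, first_generated_token]
generated = model.generate(input_ids=input_ids, max_length=2)
first_token_generate = generated[0, 1]

assert int(first_token_forward) == int(first_token_generate)
```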
transformers
7,540
closed
Difference between CLS hidden state and pooled_output
Hi, The first output of the TFBertModel is last_hidden_state, and I assume the CLS embedding is the first element of this object, so last_hidden_state[0]? But then there is also the pooled_output. In the docs it is written that this comes from a linear layer on top. 1. This comes originally from the pre-training. Can I think of the pre-training model as something like BertForSequenceClassification? 2. Can this pooled_output be fine-tuned when you fine-tune the weights of the BERT model? I assume that it will stay fixed and just the hidden states are fine-tunable?
10-02-2020 15:03:43
10-02-2020 15:03:43
Yes so BERT (the base model without any heads on top) outputs 2 things: `last_hidden_state` and `pooler_output`. First question: * `last_hidden_state` contains the hidden representations for each token in each sequence of the batch. So the size is `(batch_size, seq_len, hidden_size)`. * `pooler_output` contains a "representation" of each sequence in the batch, and is of size `(batch_size, hidden_size)`. What it basically does is take the hidden representation of the [CLS] token of each sequence in the batch (which is a vector of size `hidden_size`), and then run that through the [`BertPooler`](https://github.com/huggingface/transformers/blob/de4d7b004a24e4bb087eb46d742ea7939bc74644/src/transformers/modeling_bert.py#L498) nn.Module. This consists of a linear layer followed by a Tanh activation function. The weights of this linear layer are already pretrained on the next sentence prediction task (note that BERT is pretrained on 2 tasks: masked language modeling and next sentence prediction). I assume that the authors of the Transformers library have taken the weights from the original TF implementation, and initialized the layer with them. In theory, they would come from [`BertForPretraining`](https://github.com/huggingface/transformers/blob/de4d7b004a24e4bb087eb46d742ea7939bc74644/src/transformers/modeling_bert.py#L862) - which is BERT with the 2 pretraining heads on top. Second question: Yes you can fine-tune them, just like the hidden states, because the weights of the linear layer are updated when you perform a `loss.backward()`. BTW, please ask questions related to BERT/other models (which are not related to bugs) on the [forum](https://discuss.huggingface.co/), rather than posting them here.<|||||>Thank you. Am I right that TFBertForSequenceClassification just uses the pooled output of the main BERT model and puts it through a dropout and a dense layer with just 2 neurons? Since this model works very well for my use cases, I try to extract the encodings of the BERT model and just need to feed them into a simple dense layer to reduce prediction time... So as I understand you, this pooling output stems from a classification head during pretraining? That is my confusion, because I thought for such a thing you would use a BERT model for a classification task with a head on top. So if I were to rebuild this situation starting from just the BERT model, how would I initialize my "own" BertPooler with the pretrained weights? That is, feeding the pooled output to a dense layer with some pretrained weights, like the TFBertForSequenceClassification model does. Why is the CLS token actually used when it is not so good for tasks? I would like to use other poolings or take the average, but I think this can be done with the output of the hidden sequences. Maybe I want to feed the average-pooled hidden sequence to the BertPooler too? <|||||>Yes, looking at the [source code](https://github.com/huggingface/transformers/blob/aba4e22944f0c985bebdcde51d47a565dd4f551d/src/transformers/modeling_tf_bert.py#L1080) of `TFBertForSequenceClassification`, they define a dropout layer, followed by a linear layer that outputs a vector of size `config.num_labels`. In the [forward pass](https://github.com/huggingface/transformers/blob/aba4e22944f0c985bebdcde51d47a565dd4f551d/src/transformers/modeling_tf_bert.py#L1141), they use `outputs[1]`, meaning the output of the pooler layer (whose weights were pretrained on the next sentence classification task). 
This pooler layer takes the final hidden representation of the [CLS] token (this is a vector of size 768), then applies a linear layer and tanh to it, and then, we can apply the dropout layer and the linear layer which we defined in the `__init__` method. What is actually a bit confusing, is that in the docs they state the following about the pooler output: "This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence." (Source: https://huggingface.co/transformers/model_doc/bert.html#bertmodel) So it's actually better to use `outputs[0]`, which are the hidden representations of all tokens, and take the average. But what actually also works well in practice is just using the hidden representation of the [CLS] token. The reason this [CLS] token is introduced is because it can be used for classification tasks. You can see the hidden representation of the [CLS] token as a representation of the whole sequence (sentence). Since `outputs[0]` is of size (batch_size, seq_len, hidden_size), and we only want the vector of the [CLS] token, we can obtain it by typing `outputs[0][:, 0, :]`. You can then apply a dropout layer, followed by a linear layer on top of that to get 2 outputs (in case you are doing binary text classification). So, in practice this is what works well: ``` class TFBertForSequenceClassification(TFBertPreTrainedModel, TFSequenceClassificationLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.bert = TFBertMainLayer(config, name="bert") self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) self.classifier = tf.keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" ) @add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="bert-base-cased", output_type=TFSequenceClassifierOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, inputs=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ..., config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss), If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
""" return_dict = return_dict if return_dict is not None else self.bert.return_dict if isinstance(inputs, (tuple, list)): labels = inputs[9] if len(inputs) > 9 else labels if len(inputs) > 9: inputs = inputs[:9] elif isinstance(inputs, (dict, BatchEncoding)): labels = inputs.pop("labels", labels) outputs = self.bert( inputs, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) last_hidden_state = outputs[0] cls_representation = last_hidden_state[:,0,:] pooled_output = self.dropout(cls_representation, training=training) logits = self.classifier(pooled_output) loss = None if labels is None else self.compute_loss(labels, logits) if not return_dict: output = (logits,) + outputs[2:] return ((loss,) + output) if loss is not None else output return TFSequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) ``` <|||||>I find this pooled_output very confusing because it is coming from somewhere from the "deep" and it breaks somehow the symmetry of the transformers. That is the point of my initial question. Your above code modification is using the cls embedding. So, you could actually hjust still use the pooled_output, because it is actually the same? Ah, I forgot that the pooled_output is using the cls embeding too, but is fed to a tanh-layer then, right? To Averaging: You mean something like GlobalAveragePooling, right? Then you have to take care about masking, right? Because for a sequence you get up to max lenght different embeddings but the actual sequence is just half along. So for averaging it is a good idea to using masking, right? <|||||>I assume that using cls or averaging is better than pooled_output. Neverthelless I would want to try out pooled_output. So I wonder how to get pooled_output from a (finetuned) TFBertForSequenceClassification model?<|||||>I just think that it is important to use the masking of the BERT outputs for averaging? https://discuss.huggingface.co/t/bert-output-for-padding-tokens/1550/2<|||||>Yes you're right, you should only take into account those tokens which are not padding tokens if you want to average them. I'm gonna take a look and reply later! <|||||>Just for interest I opened an issue: https://github.com/huggingface/transformers/issues/8148<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,539
closed
Trainer fails to correctly tackle XLNetForSequenceClassification outputs
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-117-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: Yes, with CUDA_VISIBLE_DEVICES=0 - Using distributed or parallel set-up in script?: No ### Who can help @sgugger, @TevenLeScao ## Information Model I am using (Bert, XLNet ...): XLNet-base-cased The problem arises when using: * the official example scripts: ```text-classification/run_glue.py``` The tasks I am working on is: * an official GLUE/SQUaD task: SST-2 It seems that XLNetForSequenceClassification has different result outputs compared with other models, which makes the trainer fail to correctly tackle them. ## To reproduce Steps to reproduce the behavior: 1. Install ```transformers``` from master and download SST-2 data using ```download_glue_data.py``` 2. Create the following script ```bash GLUE_DIR=~/glue CUDA_VISIBLE_DEVICES=0 TASK_NAME=SST-2 python3 ~/applications/transformers/examples/text-classification/run_glue.py \ --model_name_or_path ~/xlnet \ --task_name $TASK_NAME \ --do_eval \ --data_dir $GLUE_DIR/$TASK_NAME \ --max_seq_length 64 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir ~/result/$TASK_NAME/ ``` 3. Run this script to make predictions ## Expected behavior Trainer should return the correct evaluation results like other models. ## Observed behavior ```bash 10/02/2020 22:33:53 - INFO - filelock - Lock 140365777899232 acquired on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/02/2020 22:33:53 - INFO - filelock - Lock 140365777899232 released on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/02/2020 22:33:56 - INFO - __main__ - *** Evaluate *** Evaluation: 0%| | 0/109 [00:00<?, ?it/s] Traceback (most recent call last): File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 247, in <module> main() File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 197, in main eval_result = trainer.evaluate(eval_dataset=eval_dataset) File "/data/home/liusishun/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1296, in evaluate output = self.prediction_loop(eval_dataloader, description="Evaluation") File "/data/home/liusishun/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1376, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only) File "/data/home/liusishun/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1473, in prediction_step logits = tuple(logit.detach() for logit in logits) File "/data/home/liusishun/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1473, in <genexpr> logits = tuple(logit.detach() for logit in logits) AttributeError: 'tuple' object has no attribute 'detach' ```
10-02-2020 14:44:16
10-02-2020 14:44:16
Indeed, thanks for flagging this issue! The PR mentioned above should fix it.<|||||>FYI for anyone else having this issue, I had the same issue using `Trainer.evaluate()` on a `T5ForConditionalGeneration` model. The latest `transformers` version `3.3.1` was [released](https://github.com/huggingface/transformers/compare/v3.3.1...master) a few days before the [fix PR](https://github.com/huggingface/transformers/pull/7542), so looking forward to the next version with the fix :) thank you!
transformers
7,538
closed
T5 supervised denoising task
# 🚀 Feature request Hi everyone! I'm experimenting with T5 and I would like to fine-tune a specific pre-trained model of mine to tackle the 'fill the mask' task. To be clear, I have the following: I love \<mask> and Mario, where \<mask> can be a single token or a span. At the moment I framed the problem in this way: - input: I love <extra_id_0> and Mario. - output/label: luca The task I want to tackle is different from the canonical unsupervised one, which I was able to perform correctly. Do you think the framing presented above is enough? From the results I got, it doesn't seem so.
10-02-2020 14:42:10
10-02-2020 14:42:10
@patrickvonplaten Any news?<|||||>Yeah this looks good to me: input_ids: I love <extra_id_0> and Mario. decoder_input_ids: `decoder_start_token_id` output/label: luca <EOS><|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
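A minimal sketch of the framing confirmed above (the sentinel/EOS handling follows the comment; the tiny example pair and variable names are illustrative only, and depending on the tokenizer version the `</s>` may already be appended automatically):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Input contains a sentinel token in place of the span to fill; the target is just the missing span + EOS.
inputs = tokenizer("I love <extra_id_0> and Mario.", return_tensors="pt")
labels = tokenizer("luca </s>", return_tensors="pt").input_ids  # drop the explicit </s> if your tokenizer adds EOS itself

outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
loss = outputs[0]  # fine-tune by backpropagating this loss
```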
transformers
7,537
closed
Allow soft dependencies in the namespace with ImportErrors at use
# What does this PR do? This PR aims at making errors due to soft dependencies (like datasets) easier to understand for users. It aims at making everything available in the namespace and raising an ImportError at `init` or `from_pretrained`. Fixes #7536
10-02-2020 13:27:47
10-02-2020 13:27:47
> FAISS_IMPORT_ERROR here ;) Good catch ;-)<|||||>(we should have this for TF/PyTorch as well)
transformers
7,536
closed
RAG model card code not working in Colab
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @julien-c @VictorSanh ## Information Model I am using RAG The problem arises when using: * [X] the official example scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Open a new colab notebook 2. !pip install transformers 3. execute the RAG model example code ```from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt") generated = model.generate(input_ids=input_dict["input_ids"]) print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0]) ``` error traceback ``` NameError Traceback (most recent call last) <ipython-input-5-fcc46db034ee> in <module>() 2 3 tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") ----> 4 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) 5 model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) 6 2 frames /usr/local/lib/python3.6/dist-packages/transformers/retrieval_rag.py in __init__(self, dataset_name, dataset_split, index_name, vector_size, index_path, use_dummy_dataset) 218 219 logger.info("Loading passages from {}".format(self.dataset_name)) --> 220 self.dataset = load_dataset( 221 self.dataset_name, with_index=False, split=self.dataset_split, dummy=self.use_dummy_dataset 222 ) NameError: name 'load_dataset' is not defined ``` ## Expected behavior I would expect the model card example to be output: # should give michael phelps => sounds reasonable or something to this effect.
10-02-2020 11:53:05
10-02-2020 11:53:05
You need to install `datasets` too for this model: ``` ! pip install datasets ``` I'll work on some cleaner error messages.<|||||>Thanks for responding @sgugger ! Sadly that didn't fix it: !pip install transformers !pip install datasets ``` NameError Traceback (most recent call last) <ipython-input-2-fcc46db034ee> in <module>() 2 3 tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") ----> 4 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) 5 model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) 6 2 frames /usr/local/lib/python3.6/dist-packages/transformers/retrieval_rag.py in __init__(self, dataset_name, dataset_split, index_name, vector_size, index_path, use_dummy_dataset) 218 219 logger.info("Loading passages from {}".format(self.dataset_name)) --> 220 self.dataset = load_dataset( 221 self.dataset_name, with_index=False, split=self.dataset_split, dummy=self.use_dummy_dataset 222 ) NameError: name 'load_dataset' is not defined ``` Still resulted in the same error.<|||||>Did you try restarting the colab? `datasets` requires that if I'm not mistaken.<|||||>Problem persists after restarting. I tried it locally as well and I get the same error, but with a more verbose message. ``` NameError Traceback (most recent call last) <ipython-input-4-e0fde23b2cd7> in <module> 2 3 tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") ----> 4 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) 5 model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) 6 ~/Library/Caches/pypoetry/virtualenvs/bbc-transformer-vt1pdFaV-py3.8/lib/python3.8/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs) 306 question_encoder_tokenizer = rag_tokenizer.question_encoder 307 generator_tokenizer = rag_tokenizer.generator --> 308 return cls( 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer 310 ) ~/Library/Caches/pypoetry/virtualenvs/bbc-transformer-vt1pdFaV-py3.8/lib/python3.8/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer) 281 ) 282 if config.index_name == "legacy" --> 283 else HFIndex( 284 config.dataset, 285 config.dataset_split, ~/Library/Caches/pypoetry/virtualenvs/bbc-transformer-vt1pdFaV-py3.8/lib/python3.8/site-packages/transformers/retrieval_rag.py in __init__(self, dataset_name, dataset_split, index_name, vector_size, index_path, use_dummy_dataset) 218 219 logger.info("Loading passages from {}".format(self.dataset_name)) --> 220 self.dataset = load_dataset( 221 self.dataset_name, with_index=False, split=self.dataset_split, dummy=self.use_dummy_dataset 222 ) NameError: name 'load_dataset' is not defined ```<|||||>I think I found why while fixing the error message. This also needs the faiss library: `! pip install faiss`.<|||||>@sgugger Confirmed. ``` !pip install transformers !pip install datasets !pip install faiss ``` give the expected behaviour. Thank you!<|||||>Working on having some clear error message for the next users in #7537 :-) Thanks for flagging the problem!<|||||>Thanks for the help! :) <|||||>@sgugger, I have the following versions of the packages installed : transformers==3.3.1 datasets==1.1.2 faiss==1.5.3 I still see the error. 
It would be great if you could document which versions of faiss, datasets, and transformers works !<|||||>I imported datasets to see if it helps. Didn't.<|||||>on Google Colab, switch to a GPU runtime, then try with: `!pip install faiss-gpu` finally restart the runtime. It worked for me :)<|||||>I have the same question, which is `ImportError Traceback (most recent call last) [<ipython-input-7-d8ba1013a0e5>](https://localhost:8080/#) in <module>() 6 7 tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") ----> 8 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) 9 model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) 10 1 frames [/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py](https://localhost:8080/#) in requires_backends(obj, backends) 846 failed = [msg.format(name) for available, msg in checks if not available()] 847 if failed: --> 848 raise ImportError("".join(failed)) 849 850 ImportError: RagRetriever requires the 🤗 Datasets library but it was not found in your environment. You can install it with: ``` pip install datasets ``` In a notebook or a colab, you can install it by executing a cell with ``` !pip install datasets ``` then restarting your kernel. Note that if you have a local folder named `datasets` or a local python file named `datasets.py` in your current working directory, python may try to import this instead of the 🤗 Datasets library. You should rename this folder or that python file if that's the case. RagRetriever requires the faiss library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/facebookresearch/faiss/blob/master/INSTALL.md and follow the ones that match your environment. --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. ---------------------------------------------------------------------------` but either I add !pip install faiss-gpu or !pip install faiss is not useful.<|||||>> @sgugger Confirmed. > > ``` > !pip install transformers > !pip install datasets > !pip install faiss > ``` > > give the expected behaviour. Thank you! I used above commands and it shows faiss is imported but still gives the import error for faiss. ImportError Traceback (most recent call last) [<ipython-input-5-c04f039ae844>](https://localhost:8080/#) in <cell line: 4>() 2 3 tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") ----> 4 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) 5 model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) 6 1 frames [/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in requires_backends(obj, backends) 1012 failed = [msg.format(name) for available, msg in checks if not available()] 1013 if failed: -> 1014 raise ImportError("".join(failed)) 1015 1016 ImportError: RagRetriever requires the faiss library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/facebookresearch/faiss/blob/master/INSTALL.md and follow the ones that match your environment. 
Please note that you may need to restart your runtime after installation. <img width="1002" alt="image" src="https://github.com/huggingface/transformers/assets/75541422/2d1d974e-e292-4b5d-acbc-ec2ae644ef3f">
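A tiny, hypothetical pre-flight check (the helper name is chosen here and is not part of the library) that makes the missing-dependency case explicit before building the retriever:

```python
import importlib.util

def ensure_rag_dependencies():
    # RagRetriever with an HF index needs both `datasets` and `faiss` importable.
    missing = [pkg for pkg in ("datasets", "faiss") if importlib.util.find_spec(pkg) is None]
    if missing:
        raise ImportError(
            f"Missing packages for RagRetriever: {missing}. "
            "Install them with `pip install datasets faiss-cpu` (or `faiss-gpu`) and restart the runtime."
        )

ensure_rag_dependencies()
```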
transformers
7,535
closed
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' when running run_tf_text_classification.py
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @jplu ## Information Model I am using (Bert, XLNet ...): Bert (Portuguese version: neuralmind/bert-base-portuguese-cased) The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give name) * [x] my own task or dataset: (give details below) I'm testing my own text classification dataset, and for this I was trying to use this new script `run_tf_text_classification.py` script from transformers' examples. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample: ``` text,label "Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION ``` However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section. ## To reproduce Steps to reproduce the behavior: 1. Created a new conda environment using conda env -n transformers python=3.7 2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt 3. Installed tensorflow with `pip install tensorflow` 3. 
Ran `run_tf_text_classification.py` with the following parameters: ``` --train_file <DATASET_PATH>/train.csv \ --dev_file <DATASET_PATH>/dev.csv \ --test_file <DATASET_PATH>/test.csv \ --label_column_id 1 \ --model_name_or_path neuralmind/bert-base-portuguese-cased \ --output_dir <OUTPUT_PATH> \ --num_train_epochs 4 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 1000 \ --evaluate_during_training \ --save_steps 1000 \ --overwrite_output_dir \ --overwrite_cache ``` I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace: ``` 2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 /media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA 
node, so returning NUMA node zero 2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz 2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 
07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1 10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False 10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False) 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock Using custom data configuration default Traceback (most recent call last): File "run_tf_text_classification.py", line 283, in <module> main() File "run_tf_text_classification.py", line 222, in main max_seq_length=data_args.max_seq_length, File "run_tf_text_classification.py", line 43, in get_tfds ds = datasets.load_dataset("csv", data_files=files) File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__ **config_kwargs, File 
"/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config for key in sorted(data_files.keys()): TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ``` ## Expected behavior Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
10-02-2020 10:37:13
10-02-2020 10:37:13
Hello! I think your dataset might have a malformed row somewhere. You should check this.<|||||>Hi @jplu ! I'll check mine, but I'm able to train with it using this tutorial here, adapted to my data: [https://huggingface.co/transformers/custom_datasets.html](https://huggingface.co/transformers/custom_datasets.html) And @Santosh-Gupta uses ChemProt and had the same problem, and ChemProt is an official biomedical benchmark, and he downloaded the training data directly from the AllenAI repository. Anyway, I'll try running this same piece of code from the TensorFlow script using a GLUE benchmark.<|||||>Confirmed here... pointed to SST-2 train.csv and dev.csv and the same issue happened. [dev.txt](https://github.com/huggingface/transformers/files/5318339/dev.txt) [train.txt](https://github.com/huggingface/transformers/files/5318340/train.txt) Renamed them to .txt in order to upload here, but ran with the original names. <|||||>Can you load your dataset with: ``` import datasets files = {datasets.Split.TRAIN: "train.csv"} files[datasets.Split.VALIDATION] = "dev.csv" files[datasets.Split.TEST] = "test.csv" datasets.load_dataset("csv", data_files=files) ``` <|||||>![image](https://user-images.githubusercontent.com/12713359/94936911-49d8ec00-04a5-11eb-821e-a99614e02dda.png) No @jplu , that's exactly where I'm getting the error.<|||||>Then it is an issue with the datasets package, can you post your issue there please: https://github.com/huggingface/datasets<|||||>Done, thanks!
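As a workaround sketch while the `datasets` fix lands: the `TypeError` comes from sorting `NamedSplit` keys, so passing plain string keys for the splits avoids it altogether (file names below are placeholders):

```python
import datasets

data_files = {"train": "train.csv", "validation": "dev.csv", "test": "test.csv"}
ds = datasets.load_dataset("csv", data_files=data_files)
print(ds)
```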
transformers
7,534
closed
The links to examples on the website don't work
I am looking at the examples part of the docs: https://huggingface.co/transformers/v2.2.0/examples.html The references to all the scripts on that page don't work, for example the [link to script](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py) referenced [here](https://huggingface.co/transformers/v2.2.0/examples.html#named-entity-recognition). Thanks in advance!
10-02-2020 10:13:52
10-02-2020 10:13:52
Hello! Indeed, you're right. This is because you're looking at an older version of the docs, and the files have since moved around. When clicking on a link, you should replace `master` with `v2.2.0` to get the correct link. For example: ``` https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py ``` should become: ``` https://github.com/huggingface/transformers/blob/v2.2.0/examples/run_lm_finetuning.py ``` Sorry for the inconvenience. I don't think there's much we can do about older versions, but we could freeze the current and future scripts to tag versions cc @sgugger.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,533
closed
Add early stopping to trainer_tf.py
## Summary This PR adds the early stopping feature to trainer_tf.py. ## Related Issues Alongside #4186, this should close #4894. ## Who can review? @sgugger & @BramVanroy.
10-02-2020 10:08:07
10-02-2020 10:08:07
Thanks @KMFODA! I'm not really in favor of doing this manually, as there is already a Keras callback taking care of this, https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping, which can monitor multiple values on which to base its early stop. @KMFODA if you want to add the feature of handling Keras callbacks, it will be more than welcome :) as it is part of the features we plan to add in the near future.<|||||>Seeing that I have close to no experience with TF, I won't be able to review this.<|||||>Not a problem. @jplu I agree, I prefer using Keras callbacks; I just only have experience using them with Keras’s model.fit function. I’ll think about and experiment with how to use them in a custom-built TF model. Hopefully, if successful, it should then be fairly simple to add early stopping based on custom metrics rather than just the validation loss.<|||||>I've just pushed the latest changes to trainer_tf.py that use Keras's callbacks for early stopping rather than the manual solution I had initially submitted. Setting the callback in the `TFTrainer` with a command such as `callbacks = [EarlyStopping(monitor='loss', patience=1, verbose=1)]` will monitor the training loss and stop the model at the first epoch which fails to improve on the best training loss. Using the monitor variable we can also select metrics other than the training loss to carry out the early stopping on.<|||||>Hi all, anything more I can do to help get this merged?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Bump @LysandreJik @jplu <|||||>We're moving away from the TFTrainer to fully integrate with Keras, so we won't add new functionality to the TFTrainer.<|||||>> We're moving away from the TFTrainer to fully integrate with Keras, so we won't add new functionality to the TFTrainer. Alright, sounds good. This can be closed then?<|||||>I'll let the original author close it :-)
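For reference, a short sketch of the Keras-native route discussed above, using `model.fit` with the built-in callback (the dataset variables are placeholders and the training setup is only an assumption, not the PR's code):

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Stop as soon as the monitored metric stops improving, keeping the best weights seen so far.
early_stopping = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=1, restore_best_weights=True)

# train_dataset / val_dataset are assumed to be tf.data.Dataset objects yielding (features, labels) pairs.
model.fit(train_dataset, validation_data=val_dataset, epochs=10, callbacks=[early_stopping])
```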
transformers
7,532
closed
[s2s] add config params like Dropout in Seq2SeqTrainingArguments
# What does this PR do? 1. Adds `config` params (`encoder_layerdrop`, `decoder_layerdrop`, `dropout`, `attention_dropout`) in `Seq2SeqTrainingArguments` 2. Fix T5 warnings (don't pass src_lang, tgt_lang args to `T5Tokenizer`) 3. Correct `vocab_size` for `FSMT`. 4. Fix `test_finetune_trainer_slow` 5. minor code cleanup in `Seq2SeqTrainer` @sshleifer
10-02-2020 08:30:44
10-02-2020 08:30:44
Yes, will add the LR scheduler in a separate PR.<|||||>@sshleifer anything missing in this PR?
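A hedged usage sketch of the arguments this PR describes (argument names are taken from the PR summary; the module path and values are assumptions for illustration, not verified against the examples folder):

```python
# Assumed to live in examples/seq2seq/seq2seq_training_args.py at the time of this PR.
from seq2seq_training_args import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="output",
    dropout=0.1,             # new config override added by this PR
    attention_dropout=0.1,   # new config override added by this PR
    encoder_layerdrop=0.05,  # new config override added by this PR
    decoder_layerdrop=0.05,  # new config override added by this PR
)
```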
transformers
7,531
closed
Cammembert fine tuning from checkpoint
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-28-generic-x86_64-with-debian-buster-sid - Python version: 3.7.3 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): 2.0.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: no ### Who can help albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger ## Information Model I am using camembert: The problem arises when using: * [X] the official example scripts: run_language_modeling.py The tasks I am working on is: * [X ] an official GLUE/SQUaD task: LM fine tuning * [X ] my own task or dataset: working with my own data corpus ## To reproduce Steps to reproduce the behavior: 1. run the script run_language_modeling.py was first run "from scratch" starting with cammembert model with my own data. 2. launching the same script but starting from checkpoint and the same data to continue training ```bash python run_language_modeling.py --output_dir=output2 --model_name_or_path="./LM/fine_tune_cammembert/output/checkpoint-27500" --tokenizer_name="./sentencepiece.bpe.model" --do_train --train_data_file=corpus_camambert/train_1.txt --do_eval --eval_data_file=corpus_camambert/test_valid.txt --mlm --line_by_line --evaluate_during_training --overwrite_output_dir ``` ## What happens ```bash Traceback (most recent call last): File "run_language_modeling.py", line 281, in <module> main() File "run_language_modeling.py", line 186, in main tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir) File "/home/ccass/anaconda3/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 209, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/home/ccass/anaconda3/lib/python3.7/site-packages/transformers/configuration_auto.py", line 272, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/ccass/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 355, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "/home/ccass/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 437, in _dict_from_json_file text = reader.read() File "/home/ccass/anaconda3/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfe in position 51: invalid start byte ``` ## Expected behavior Continue Taining
10-02-2020 07:56:38
10-02-2020 07:56:38
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hello, I still didn't solve the problem... anyone?<|||||>It seems the tokenizer you've provided cannot be loaded. You've provided `--tokenizer_name="./sentencepiece.bpe.model"` which is a path to a file, and cannot work with an AutoTokenizer. I recommend you put the tokenizer file in the same folder as your model, so that it can know the model type, and therefore the tokenizer type, from the configuration. Also, there was an issue a few months back where the tokenizer wouldn't be saved by the script, and you would have to specify it like you just did. We patched this issue since, so I invite you to use the [`run_mlm.py` script](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) instead, alongside upgrading your `transformers` version to the latest one. Thank you for your understanding.<|||||>Ok, thank you I will check it out
transformers
7,530
closed
ELECTRA - some weights are not loaded
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-118-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help albert, bert, GPT2, XLM: @LysandreJik Model Cards: @julien-c ## Information Model I am using: ELECTRA I am getting a warning: > Some weights of the model checkpoint at google/electra-large-discriminator were not used when initializing ElectraModel: ['electra.embeddings_project.weight', 'electra.embeddings_project.bias'] > - This IS expected if you are initializing ElectraModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). > - This IS NOT expected if you are initializing ElectraModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). when using the AutoModel.from_pretrained for google/electra-base-discriminator or google/electra-large-discriminator. There is no warning for google/electra-small-discriminator. The problem remains the same when directly using the ElectraModel.from_pretrained method. ## To reproduce ``` import transformers m=transformers.AutoModel.from_pretrained("google/electra-large-discriminator") # or m=transformers.AutoModel.from_pretrained("google/electra-base-discriminator") ``` ## Expected behavior no warning
10-02-2020 06:58:41
10-02-2020 06:58:41
Hi, indeed, this can scare users even if there is no actual problem. #7569 will fix this.
transformers
7,529
closed
[GPT-2] How many columns in LM model wte layer are positional embeddings?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> Hello everyone, I have a quick question about the token weight matrix from the GPT-2 model. The transformers documentation for GPT-2 indicates that > GPT-2 is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. How many columns in the ```transformer.wte.weight``` are linked to the positional embeddings? For the GPT-2 small model, the size of embedding matrix is (50257, 768), in those 768 columns, how many of them are linked to the positional embeddings? Many thanks!
10-01-2020 23:04:47
10-01-2020 23:04:47
I would say that none are. The positions are managed by the position embedding: `transformer.wpe.weight`.
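To make the split concrete, a quick sketch that prints both embedding matrices (shapes shown in the comments are for the small GPT-2 checkpoint):

```python
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
print(model.wte.weight.shape)  # token embeddings:    torch.Size([50257, 768]) — carries no positional information
print(model.wpe.weight.shape)  # position embeddings: torch.Size([1024, 768]) — one row per absolute position
```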
transformers
7,528
closed
QA pipeline fails with long context.
## Environment info - `transformers` version: 3.0.2n your GitHub issue and FILL OUT the two last points. - Platform: Linux-4.15.0-117-generic-x86_64-with-glibc2.10 - Python version: 3.8.2 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Same result either way - Using distributed or parallel set-up in script?: no ### Who can help @sgugger ## Information Model I am using: DistilBert via the QA pipeline. The tasks I am working on is: * my own task or dataset: ## To reproduce ``` from transformers import pipeline nlp = pipeline("question-answering") context = """ Once upon a midnight dreary, while I pondered, weak and weary, Over many a quaint and curious volume of forgotten lore— While I nodded, nearly napping, suddenly there came a tapping, As of some one gently rapping, rapping at my chamber door. “’Tis some visitor,” I muttered, “tapping at my chamber door— Only this and nothing more.” Ah, distinctly I remember it was in the bleak December; And each separate dying ember wrought its ghost upon the floor. Eagerly I wished the morrow;—vainly I had sought to borrow From my books surcease of sorrow—sorrow for the lost Lenore— For the rare and radiant maiden whom the angels name Lenore— Nameless here for evermore. And the silken, sad, uncertain rustling of each purple curtain Thrilled me—filled me with fantastic terrors never felt before; So that now, to still the beating of my heart, I stood repeating “’Tis some visitor entreating entrance at my chamber door— Some late visitor entreating entrance at my chamber door;— This it is and nothing more.” Presently my soul grew stronger; hesitating then no longer, “Sir,” said I, “or Madam, truly your forgiveness I implore; But the fact is I was napping, and so gently you came rapping, And so faintly you came tapping, tapping at my chamber door, That I scarce was sure I heard you”—here I opened wide the door;— Darkness there and nothing more. Deep into that darkness peering, long I stood there wondering, fearing, Doubting, dreaming dreams no mortal ever dared to dream before; But the silence was unbroken, and the stillness gave no token, And the only word there spoken was the whispered word, “Lenore?” This I whispered, and an echo murmured back the word, “Lenore!”— Merely this and nothing more. Back into the chamber turning, all my soul within me burning, Soon again I heard a tapping somewhat louder than before. “Surely,” said I, “surely that is something at my window lattice; Let me see, then, what thereat is, and this mystery explore— Let my heart be still a moment and this mystery explore;— ’Tis the wind and nothing more!” """ nlp(question="What is the month?", context=context) /home/brian/transformers/src/transformers/tokenization_utils_base.py:1292: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. 
warnings.warn( --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-8-d64d967fe1dd> in <module> ----> 1 nlp(question="What is the month?", context=context) ~/transformers/src/transformers/pipelines.py in __call__(self, *args, **kwargs) 1636 with torch.no_grad(): 1637 # Retrieve the score for the context tokens only (removing question tokens) -> 1638 fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} 1639 start, end = self.model(**fw_args)[:2] 1640 start, end = start.cpu().numpy(), end.cpu().numpy() ~/transformers/src/transformers/pipelines.py in <dictcomp>(.0) 1636 with torch.no_grad(): 1637 # Retrieve the score for the context tokens only (removing question tokens) -> 1638 fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} 1639 start, end = self.model(**fw_args)[:2] 1640 start, end = start.cpu().numpy(), end.cpu().numpy() ValueError: expected sequence of length 384 at dim 1 (got 379) ``` ## Expected behavior ``` {'score': 0.9419336915016174, 'start': 401, 'end': 410, 'answer': 'December;'} ```
10-01-2020 22:07:51
10-01-2020 22:07:51
This was fixed in a more recent version.
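For readers hitting the same error on an older install, a hedged sketch of what to do after upgrading: the pipeline splits long contexts into overlapping spans, and the kwarg names below (`max_seq_len`, `doc_stride`) are from memory, so double-check them against your installed version.

```python
from transformers import pipeline

nlp = pipeline("question-answering")
result = nlp(
    question="What is the month?",
    context=context,   # the long poem from the report above
    max_seq_len=384,   # assumed span length in tokens
    doc_stride=128,    # assumed overlap between consecutive spans
)
print(result)  # expected something like {'answer': 'December;', ...}
```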
transformers
7,527
closed
Check and update model list in index.rst automatically
# What does this PR do? We currently have two lists of the models to maintain (in the main README and in the `index.rst` for the docs), which is painful. This script checks that the model list in `index.rst` is a properly converted copy of the model list in the README and can also fix it with the command `make fix-copies` (same API as for the copies of parts of the models). It also enforces our maximum number of characters per line so that `index.rst` stays readable in an editor.
10-01-2020 21:45:55
10-01-2020 21:45:55
I'll let you rebase!
transformers
7,526
closed
Almost Have Model Parallelism Working on GPT2 Fine-Tuning
# ❓ Questions & Help I've managed to get model parallelism working on `gpt2` for forward inference by modifying the `GPT2Model` class and adding a few lines to the `generate` method to ensure that tensors that need to be on the same device always are. It automatically distributes the blocks evenly across any number of GPUs that are detected. I had to add an additional argument to `Trainer` (`model_parallel`) to avoid conflicting distribute behavior. Unfortunately, I'm stuck on backprop, specifically in `Trainer.training_step` on the line `loss.backward()`. loss is `tensor(71.5152, device='cuda:3', grad_fn=<NllLossBackward>)` The error is: ``` RuntimeError: expected device cuda:3 but got device cuda:0 (compute_types at ..\aten\src\ATen\native\TensorIterator.cpp:246) (no backtrace available) ``` So something somewhere is on the wrong device. It would be a miracle if someone knows how to fix this, but more realistically I'm hoping for a list of things that might be wrong which I can check. Can do a code review with someone from the transformers team. This could be the pattern to enable model parallelism on all PyTorch transformers.
10-01-2020 21:29:17
10-01-2020 21:29:17
Hey @alexorona, it's great that you are working on model parallelism! Could you open a PR with the proposed changes to GPT2 and maybe post a code snippet to reproduce your error with the code in your PR? I'm happy to take a look :-) <|||||>@patrickvonplaten An update: I managed to get around the problem by carefully following every tensor in the GPT2 model and had to place the `lm_head` on the first layer because the `wte` layer is used by it. Model parallelism is now working and c confirmed with nvidia-smi: tensors are moving appropriately and well-balanced across the GPUs and models are training. It's not useful at all to create a PR right now: I'm using a version of transformers that's probably a month old and the code is barely holding together. I'd like to get the latest (and hopefully last) functional challenge solved before putting together a PR. This latest problem is extremely challenging. Only someone with a very deep knowledge of the transformers implement of the `Attention` class, `Trainer` and possibly `modeling_utils.py` can provide an intuition as to what's happening. Here's the problem: The same model on the same GPU with the same token size consumes more memory while training if there are more GPUs. For example, the first attention block will consume 2.2 GB of GPU memory on a Tesla v100 if there are 4 Tesla v100s on the instance. Meanwhile, the same block will consume 4.2 GB of GPU memory on a Tesla v100 if there are 8 Tesla v100s on the instance. It makes no sense. I believe the behavior is coming from `Attention._attn`. Does anyone know whether there's something in the implementation that would cause tensors to use up more GPU memory if more GPUs are added? Note: I've disabled all of the data parallelism in `Trainer`, which would be the obvious source. Some additional details: ``` # Running gpt-xl on 4 GPUs. Model uses 2.2 GB of memory per attention block. Block: 0 Total GPU Memory Usage: 1.40915456 Block: 1 Total GPU Memory Usage:3.604413952 Block: 2 Total GPU Memory Usage:5.803867648 Block: 3 Total GPU Memory Usage:8.003321344 Block: 4 Total GPU Memory Usage: 10.20277504 Block: 5 Total GPU Memory Usage: 12.402228736 Block: 6 Total GPU Memory Usage: 14.601682432 ``` ``` # Running gpt-xl on 8 GPUs. Model uses 4.2 GB of memory per attention block. 
Block: 0 Total GPU Memory Usage: 1.468251648 Block: 1 Total GPU Memory Usage: 5.847236096 Block: 2 Total GPU Memory Usage: 10.226220544 Block: 3 Total GPU Memory Usage: 14.605204992 ``` ``` class GPT2Model(GPT2PreTrainedModel): def __init__(self, config, layers_map): super().__init__(config) self.wte = nn.Embedding(config.vocab_size, config.n_embd) self.wpe = nn.Embedding(config.n_positions, config.n_embd) self.drop = nn.Dropout(config.embd_pdrop) self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True) for _ in range(config.n_layer)]) # Layers map for 4 GPUs self.layers_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 1: [11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23], 2: [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36], 3: [37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]} self.wte = self.wte.to('cuda:' + str(min(self.layers_map.keys()))) self.wpe = self.wpe.to('cuda:' + str(min(self.layers_map.keys()))) self.drop = self.drop.cuda('cuda:' + str(min(self.layers_map.keys()))) def forward( self, input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **kwargs, ): # Skipping over some details in the forward method for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): print('Block:', i) gpu_memory = torch.cuda.memory_allocated(device = hidden_states.device)/(1e+9) print("GPU Memory:", gpu_memory) if output_hidden_states: print('output hidden shapes us true') all_hidden_states = all_hidden_states + (hidden_states.view(*output_shape),) if layer_past is not None: layer_past = layer_past.cuda(hidden_states.device) if attention_mask is not None: attention_mask = attention_mask.to(hidden_states.device) del outputs outputs = block( hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i], encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, use_cache=use_cache, output_attentions=output_attentions, ) hidden_states, present = outputs[:2] if use_cache is True: presents = presents + (present,) if output_attentions: all_attentions = all_attentions + (outputs[2],) for k,v in self.layers_map.items(): if i == v[-1] and k != max(self.layers_map.keys()): hidden_states = hidden_states.to('cuda:' + str(k + 1)) class Block(nn.Module): def __init__(self, n_ctx, config, scale=True): super().__init__() hidden_size = config.n_embd inner_dim = config.n_inner if config.n_inner is not None else 4 * hidden_size self.ln_1 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) self.attn = Attention(hidden_size, n_ctx, config, scale) self.ln_2 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) if config.add_cross_attention: self.crossattention = Attention(hidden_size, n_ctx, config, scale, is_cross_attention=True) self.ln_cross_attn = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) self.mlp = MLP(inner_dim, config) def forward( self, hidden_states, layer_past=None, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, use_cache=False, output_attentions=False, ): attn_outputs = self.attn( self.ln_1(hidden_states), layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, ) attn_output = attn_outputs[0] # output_attn: a, present, (attentions) outputs = attn_outputs[1:] # residual 
connection hidden_states = attn_output + hidden_states if encoder_hidden_states is not None: # add one self-attention block for cross-attention assert hasattr( self, "crossattention" ), f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers by setting `config.add_cross_attention=True`" cross_attn_outputs = self.crossattention( self.ln_cross_attn(hidden_states), attention_mask=attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, ) attn_output = cross_attn_outputs[0] # residual connection hidden_states = hidden_states + attn_output outputs = outputs + cross_attn_outputs[1:] # add cross attentions if we output attention weights feed_forward_hidden_states = self.mlp(self.ln_2(hidden_states)) # residual connection hidden_states = hidden_states + feed_forward_hidden_states outputs = [hidden_states] + outputs return outputs # hidden_states, present, (cross_attentions, attentions) class Attention(nn.Module): def __init__(self, nx, n_ctx, config, scale=False, is_cross_attention=False): super().__init__() n_state = nx # in Attention: n_state=768 (nx=n_embd) # [switch nx => n_state from Block to Attention to keep identical to TF implem] assert n_state % config.n_head == 0 self.register_buffer( "bias", torch.tril(torch.ones((n_ctx, n_ctx), dtype=torch.uint8)).view(1, 1, n_ctx, n_ctx) ) self.register_buffer("masked_bias", torch.tensor(-1e4)) self.n_head = config.n_head self.split_size = n_state self.scale = scale self.is_cross_attention = is_cross_attention if self.is_cross_attention: self.c_attn = Conv1D(2 * n_state, nx) self.q_attn = Conv1D(n_state, nx) else: self.c_attn = Conv1D(3 * n_state, nx) self.c_proj = Conv1D(n_state, nx) self.attn_dropout = nn.Dropout(config.attn_pdrop) self.resid_dropout = nn.Dropout(config.resid_pdrop) self.pruned_heads = set() self.softmax = nn.Softmax(dim=-1) def prune_heads(self, heads): if len(heads) == 0: return heads, index = find_pruneable_heads_and_indices( heads, self.n_head, self.split_size // self.n_head, self.pruned_heads ) index_attn = torch.cat([index, index + self.split_size, index + (2 * self.split_size)]) # Prune conv1d layers self.c_attn = prune_conv1d_layer(self.c_attn, index_attn, dim=1) self.c_proj = prune_conv1d_layer(self.c_proj, index, dim=0) # Update hyper params self.split_size = (self.split_size // self.n_head) * (self.n_head - len(heads)) self.n_head = self.n_head - len(heads) self.pruned_heads = self.pruned_heads.union(heads) def _attn(self, q, k, v, attention_mask=None, head_mask=None, output_attentions=False): w = torch.matmul(q, k) if self.scale: w = w / (float(v.size(-1)) ** 0.5) nd, ns = w.size(-2), w.size(-1) if not self.is_cross_attention: # if only "normal" attention layer implements causal mask mask = self.bias[:, :, ns - nd : ns, :ns] mask = mask.to(w.device) self.masked_bias = self.masked_bias.to(w.device) w = torch.where(mask.bool(), w, self.masked_bias.to(w.dtype)) if attention_mask is not None: # Apply the attention mask w = w + attention_mask w = self.softmax(w) w = self.attn_dropout(w) # Mask heads if we want to if head_mask is not None: w = w * head_mask outputs = [torch.matmul(w, v)] if output_attentions: outputs.append(w) del mask, nd, ns, v, q, k, attention_mask, head_mask, output_attentions, w torch.cuda.synchronize() torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() return outputs # Layers map for 4 x Tesla v100 GPUs layers_map = {0: [0, 1, 2, 3, 4, 5, 
6, 7, 8, 9, 10], 1: [11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23], 2: [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36], 3: [37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]} # Layers map for 8 x Tesla v100 GPUs layers_map = {0: [0, 1, 2, 3, 4], 1: [5, 6, 7, 8, 9, 10], 2: [11, 12, 13, 14, 15, 16], 3: [17, 18, 19, 21, 22, 23], 4: [24, 25, 26, 27, 28, 29], 5: [30, 31, 32, 33, 34, 35], 6: [36, 37, 38, 39, 40, 41], 7: [42, 43, 44, 45, 46, 47]} model = TransformersModel(layers_map = layers_map) ```<|||||>Got it working. The `TrainingArguments` object has data parallelism baked into it (along with a lot of other things), so my manual override of the batch size was failing. The tensor size was exploding because `TrainingArguments` was automatically adjusting the minimum batch size to be the number of tensors. Fine-tuned a gpt2-xl model with 1024 tokens with good results in just 15 minutes.<|||||>@alexorona Can you please share the code of some example(s) of parallelism you got to work (maybe through a PR to the repo examples)?<|||||>@patrickvonplaten @LSinev Greatly simplified the working code and refined it so that the same basic approach can be used for other models as well. I took a look at T5 and am 99% confident I can use the same approach to make it parallelizable. Will get a PR up this week, probably by Sunday.<|||||>[Model parallel PR](https://github.com/huggingface/transformers/pull/8696) merged to transformers.
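As a side note for readers, here is a small hypothetical helper (not from this thread and not the code that was merged) that builds an even `layers_map` like the hard-coded dictionaries above.

```python
def make_layers_map(n_layer: int, n_gpus: int) -> dict:
    """Spread n_layer transformer blocks as evenly as possible over n_gpus devices."""
    per_gpu, remainder = divmod(n_layer, n_gpus)
    layers_map, start = {}, 0
    for gpu in range(n_gpus):
        count = per_gpu + (1 if gpu < remainder else 0)
        layers_map[gpu] = list(range(start, start + count))
        start += count
    return layers_map

print(make_layers_map(48, 4))  # gpt2-xl has 48 blocks
```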
transformers
7,525
closed
Fix post_init of some TrainingArguments
# What does this PR do? The `HFArgumentParser` doesn't actually support bools that are None and wants them `True` or `False`. Therefore, some changes I made to a few fields of `TrainingArguments` do not work when invoked on the command line. This PR fixes that.
10-01-2020 20:59:44
10-01-2020 20:59:44
transformers
7,524
closed
Training loss suddenly increases and stays the same
# ❓ Questions & Help ## Details I am trying to develop a language model. The code is a modified version of the example given [here](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py). The problem is that at some point during training, the training loss spikes up and then stays flat. The eval loss follows the same trend. I am attaching TensorBoard plots for reference. I am using transformers=3.2.0. I also tested with transformers=3.3.0 and observed the same issue. [TensorBoard.pdf](https://github.com/huggingface/transformers/files/5314434/TensorBoard.pdf)
10-01-2020 19:53:06
10-01-2020 19:53:06
Hello, you would probably have more answers if you asked this question on the forums: https://discuss.huggingface.co<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> # ❓ 问题与帮助 > ## 细节 > 我正在尝试开发语言模型。该代码是[此处](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py)给出的示例的修改版本。 > > 问题在于,在训练过程中的某个时刻,训练损失激增并保持不变。Eval loss 也遵循这一趋势。我附上张量板图以供参考。 > > 我正在使用变压器 = 3.2.0。我使用transformers=3.3.0 进行了测试并观察到了同样的问题。 > > [TensorBoard.pdf](https://github.com/huggingface/transformers/files/5314434/TensorBoard.pdf) I also encountered this problem, is it solved?<|||||>Hi, I encountered similar issue when pretraining BERT. Have you solved this problem? Could you please share some insights?
transformers
7,523
closed
Cleanup documentation for BART, Marian, MBART and Pegasus
# What does this PR do? This is a follow-up to #7345 to finish cleaning up the documentation for all models. Nothing of importance apart from the configurations of the 4 classes (BART, Marian, MBART and Pegasus), which can't share the same docstrings with a choose-your-own-adventure default. They all need their own docstrings, since the defaults differ for a lot of values (which also means they all need their own implementation and should not subclass the same config, or should at least pass the arguments with the proper defaults to the superclass). I tried my best to document the actual default for each, but may have missed a few fields.
10-01-2020 19:19:59
10-01-2020 19:19:59
transformers
7,522
closed
[s2s] Adafactor support for builtin trainer
10-01-2020 19:09:31
10-01-2020 19:09:31
transformers
7,521
closed
[s2s] trainer scripts: Remove --run_name, thanks sylvain!
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
10-01-2020 18:49:14
10-01-2020 18:49:14
You're welcome ;-)<|||||>cc @patil-suraj
transformers
7,520
closed
MultiGPU Trainer: each process uses more memory than a 1 GPU job
I tried to run an 8 GPU training job, and it OOM'd, so I investigated whether it could run on 1 GPU. It could! So here are two commands, the first one says that it is using 14814MiB on GPU 0. The second says it is using `15594MiB` on each. This doesn't happen in PL, which leads me to believe that `distributed_scalars` is to blame, but I am not sure. Has anyone run into this? cc @sgugger @patil-suraj ### Delta between two commands ``` - CUDA_VISIBLE_DEVICES=0 python + CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 ``` ### Command 1 ```bash CUDA_VISIBLE_DEVICES=0 python finetune_trainer.py \ --model_name_or_path student_pegasus_cnn_12_2 \ --data_dir cnn_dm \ --output_dir dpx_cnn_12_2_pl_comb_noDO --overwrite_output_dir --freeze_embeds \ --learning_rate=3e-5 \ --warmup_steps 500 --sortish_sampler \ --gradient_accumulation_steps=4 \ --per_device_train_batch_size=4 --per_device_eval_batch_size=8 --eval_beams 2 \ --num_train_epochs=5 \ --save_steps 3000 --eval_steps 3000 \ --logging_first_step \ --max_target_length 56 --val_max_target_length 142 --test_max_target_length 142 \ --do_train --do_eval --do_predict --evaluate_during_training \ --predict_with_generate --load_best_model_at_end ``` ### Command 2 ```bash CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py \ --model_name_or_path student_pegasus_cnn_12_2 \ --data_dir cnn_dm \ --output_dir dpx_cnn_12_2_pl_comb_noDO --overwrite_output_dir --freeze_embeds \ --learning_rate=3e-5 \ --warmup_steps 500 --sortish_sampler \ --gradient_accumulation_steps=4 \ --per_device_train_batch_size=4 --per_device_eval_batch_size=8 --eval_beams 2 \ --num_train_epochs=5 \ --save_steps 3000 --eval_steps 3000 \ --logging_first_step \ --max_target_length 56 --val_max_target_length 142 --test_max_target_length 142 \ --do_train --do_eval --do_predict --evaluate_during_training \ --predict_with_generate --load_best_model_at_end ```
10-01-2020 18:43:22
10-01-2020 18:43:22
This seems to be a duplicate of #7169, I'm finishing something and will investigate now that I have a proper multi-GPU setup.<|||||>Yes, most likely a duplicate. One clue might be the call to `DataParallel` right before the call to `DistributedDataParallel`. <|||||>Investigation seems to lead to: it is normal to have slightly more memory use per GPU in distributed mode since PyTorch keeps two copies of the gradients in that case. See [this issue](https://github.com/pytorch/pytorch/issues/37030).<|||||>You are correct, thank you for investigating. The difference was using `--adafactor`, which saves about 3GB. I added that option to Seq2SeqTrainer. Take it if you want it :)
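For reference, a hedged sketch of the `--adafactor` option mentioned above (the `t5-small` checkpoint is only a stand-in for illustration): Adafactor stores factored second-moment statistics instead of Adam's full per-parameter state, which is roughly where the ~3GB saving comes from.

```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # stand-in for the distilled pegasus student
optimizer = Adafactor(
    model.parameters(),
    lr=3e-5,
    scale_parameter=False,  # use a fixed external learning rate
    relative_step=False,
    warmup_init=False,
)
```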
transformers
7,519
closed
XLNet finetuning
I am trying to fine-tune an XLNet model. It used to work fine, but I think huggingface updated some classes and I now run into this error: RuntimeError: Trying to create tensor with negative dimension -1: [-1, 768] ![xlnet](https://user-images.githubusercontent.com/55197626/94844162-d030f780-03eb-11eb-84e4-e72deaaa71ec.PNG)
10-01-2020 17:41:38
10-01-2020 17:41:38
Hello! Could you run `transformers-cli env` in your environment and paste the result here? Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,518
closed
Fix seq2seq example test
# What does this PR do? #7490 removed the log_history save since it is now saved along inside the `TrainerState`. I didn't catch it was used in the seq2seq examples (must be due to a more recent PR because the tests were passing) so this PR adapts the part that loads `log_history` in those tests to fix them.
10-01-2020 17:23:29
10-01-2020 17:23:29
transformers
7,517
closed
Overflow error: Can't convert negative value to unsigned int [RAG Model]
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.14.186-146.268.amzn2.x86_64-x86_64-with-glibc2.10 - Python version: 3.7.9 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: distributed ### Who can help @LysandreJik @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): RAG sequence base The problem arises when using: * [x] my own modified scripts: (give details below) I am running a modified version of eval_rag.py. I am trying to experiment with the retriever's capability for document retrieval. I am running into the following error when using the evaluate_batch_retrieval in the eval_rag.py ``` File "retrieval.py", line 126, in <module> evaluate_batch_retrieval(model, questions) File "retrieval.py", line 75, in evaluate_batch_retrieval return_tensors="pt", File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 470, in __call__ retrieved_doc_embeds, doc_ids, docs = self.retrieve(question_hidden_states, n_docs) File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 426, in retrieve return retrieved_doc_embeds, doc_ids, self.index.get_doc_dicts(doc_ids) File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 246, in get_doc_dicts return [self.dataset[doc_ids[i].tolist()] for i in range(doc_ids.shape[0])] File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 246, in <listcomp> return [self.dataset[doc_ids[i].tolist()] for i in range(doc_ids.shape[0])] File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1071, in __getitem__ format_kwargs=self._format_kwargs, File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1026, in _getitem indices_array = pa.array([int(i) for i in indices], type=pa.uint64()) File "pyarrow/array.pxi", line 269, in pyarrow.lib.array File "pyarrow/array.pxi", line 38, in pyarrow.lib._sequence_to_array OverflowError: can't convert negative value to unsigned int ``` The tasks I am working on is: * [x] my own task or dataset: (give details below) Just experimenting with pre-trained retriever to see how well it can retrieve the documents
10-01-2020 16:57:56
10-01-2020 16:57:56
Hey @sashank06 - could you please post a full code snippet so that we can reproduce your error? Also @lhoestq - this looks like one of your favorite errors haha <|||||>It's already fixed on `datasets` master branch, I'm going to do a release soon :) <|||||>``` import ast import logging import os import sys import pandas as pd import torch from tqdm import tqdm from transformers import BartForConditionalGeneration, RagRetriever, RagSequenceForGeneration, RagTokenForGeneration from transformers import logging as transformers_logging logger = logging.getLogger(__name__) logging.basicConfig(level=logging.INFO) transformers_logging.set_verbosity_info() def infer_model_type(model_name_or_path): if "token" in model_name_or_path: return "rag_token" if "sequence" in model_name_or_path: return "rag_sequence" if "bart" in model_name_or_path: return "bart" return None def evaluate_batch_retrieval(rag_model, questions): def strip_title(title): if title.startswith('"'): title = title[1:] if title.endswith('"'): title = title[:-1] return title retriever_input_ids = rag_model.retriever.question_encoder_tokenizer.batch_encode_plus( questions, return_tensors="pt", padding=True, truncation=True, )["input_ids"] #.to(args.device) question_enc_outputs = rag_model.rag.question_encoder(retriever_input_ids, return_dict=True) question_enc_pool_output = question_enc_outputs.pooler_output result = rag_model.retriever( retriever_input_ids, question_enc_pool_output.cpu().detach().to(torch.float32).numpy(), prefix=rag_model.rag.generator.config.prefix, n_docs=rag_model.config.n_docs, return_tensors="pt", ) all_docs = rag_model.retriever.index.get_doc_dicts(result.doc_ids) provenance_strings = [] for docs in all_docs: provenance = [strip_title(title) for title in docs["title"]] provenance_strings.append("\t".join(provenance)) return provenance_strings model_kwargs = {} model_type = "rag" if model_type.startswith("rag"): model_class = RagTokenForGeneration if model_type == "rag_token" else RagSequenceForGeneration model_kwargs["n_docs"] = 5 #args.n_docs index_name = "hf" if index_name is not None: model_kwargs["index_name"] = index_name # if args.index_path is not None: # model_kwargs["index_path"] = args.index_path else: model_class = BartForConditionalGeneration checkpoint = "facebook/rag-sequence-base" if model_type.startswith("rag"): retriever = RagRetriever.from_pretrained(checkpoint, **model_kwargs) model = model_class.from_pretrained(checkpoint, retriever=retriever, **model_kwargs) model.retriever.init_retrieval() else: model = model_class.from_pretrained(checkpoint, **model_kwargs) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) questions = [] questions.append("where was the first super bowl held?".strip()) evaluate_batch_retrieval(model, questions)<|||||>@patrickvonplaten I have attached the code above. Do let me know if I am doing something wrong with the code as well. <|||||>@lhoestq When would the release be made? Would that fix the issue I am facing?<|||||>Tomorrow most probably. Yes this will fix your issue<|||||>The new release is out :) You can do ``` pip install --upgrade datasets ```<|||||>will test it out and let you know if that fixes my problem.
transformers
7,516
closed
huggingface transformer running on CPU behind celery/redis doesn't work (but works by itself)
Hello, I am actually creating this for posterity because it took me a day to figure it out and if anybody else has this issue, hopefully this helps. I am running a Bert2Bert EncoderDecoderModel inside a docker container, running behind celery that is getting jobs through redis. This is in a production test environment on a machine w/o a GPU, so yes, it's slow, but it's not a deal breaker. Anyways-- testing and everything works great when it's by itself. However, when I put it behind celery within a task, it would load the model and then get to generate some text and just hang. I couldn't figure out what the problem was until I found this thread: https://github.com/celery/celery/issues/4113 The issue is how the CPU version of the model does forking-- the default celery configuration breaks unless you add the following to your celery config when running celery: --pool=solo Setting this fixes the concurrency issues with forking and everything works. So, it's a configuration issue. Go forth and prosper.
10-01-2020 16:57:50
10-01-2020 16:57:50
Answer is included in the main post. Thanks.<|||||>Another way that "fixed" the problem for me was setting the number of torch threads to 1: torch.set_num_threads(1) BEFORE loading the model in the worker.<|||||>Thanks, man, it took me a day to find your comment as well. 😆
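To make the two workarounds concrete, here is a hedged sketch of a worker module (the checkpoint path, broker URLs and task name are placeholders, not from the original post); the worker would be started with `celery -A tasks worker --pool=solo`.

```python
import torch
from celery import Celery
from transformers import AutoTokenizer, EncoderDecoderModel

torch.set_num_threads(1)  # set BEFORE the model is loaded, as described above

app = Celery("tasks", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

MODEL_PATH = "path/to/your/bert2bert"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = EncoderDecoderModel.from_pretrained(MODEL_PATH)

@app.task
def summarize(text: str) -> str:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    output_ids = model.generate(inputs["input_ids"])
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```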
transformers
7,515
closed
[s2s] fix nltk pytest race condition with FileLock
Attempts to resolve the flaky test issue reported on Slack. When `nltk.download('punkt')` is run in multiple processes, bad things happen.
10-01-2020 16:34:59
10-01-2020 16:34:59
transformers
7,514
closed
[Longformer] Output both local attentions and global attentions when `output_attentions=True` -> Good Second Issue
# 🚀 Feature request **Good Second Issue** - A more advanced issue for contributors who want to dive more into Longformer's attention mechanism. Longformer currently only outputs global attentions, which is suboptimal because users might be interested in the local attentions as well. I propose to change the "output_attention" logic as follows in Longformer: `attentions` should correspond to the "local" attentions, and then we'll add a new output type `global_attention` that contains the global attentions. This is consistent with the naming of `attention_mask` and `global_attention_mask` IMO and is the cleanest way to implement the feature. Implementing this feature would mean that Longformer will require its own `ModelOutput` classes => `BaseModelOutput` => `LongformerBaseModelOutput` or `BaseModelOutputWithGlobalAttention` (prefer the first name though), `BaseModelOutputWithPooling` => ... Also some tests will have to be adapted. This is a slightly more difficult issue, so I'm happy to help on it. One should understand the difference between local and global attention and how Longformer's attention differs from *e.g.* Bert's attention in general. For more detail, check out the discussion here: https://github.com/huggingface/transformers/issues/5646
10-01-2020 16:33:52
10-01-2020 16:33:52
I am working on a pull request to address this. I don't see any major challenge so far, but this made me realize how much `attentions` in Bert-like models and in Longformers are different. Why not replace `attentions` in the Longformer by `local_attentions`? This means that the interface of Longformers would become incompatible with every other Transformer, but maybe it should be? I don't think that there is a way to plug Longformer `attentions` into a code that expects Bert-like `attentions` and get meaningful results, so users always have to write a special case for Longformers if they use them. As is, the risk is that they get bogus output and won't realize it until they carefully read the doc (that is not yet written). What are your thoughts on this @patrickvonplaten?<|||||>I have made the [pull request](https://github.com/huggingface/transformers/pull/7562). I checked that the Longformer tests passed with my changes, and I added one more test to check the output of attention probabilities. Quite stupidly I made the pull request to the __master__ branch, I am sorry about this. I left it as is to avoid duplicating pull requests for now. You can reject it and I will make a cleaner pull request to a separate branch. <|||||>sorry to have been so super inactive on this issue :-/ I will find time to solve it in ~1 week :-) . This issue is related as well: https://github.com/huggingface/transformers/pull/8007/files#r514633097.<|||||>No worries, there is no hurry on my side. Anyway, the issue is a little trickier than it looks because you guys have to decide how to encode attention probabilities when they are too large to be represented by a dense matrix. Let me know if there is anything I can do to help.<|||||>Hi @patrickvonplaten. I did not use the 🤗 Transformers since our discussion in November 2020. Today I came back to it (`transformers` version: 4.4.2) and I realized that this issue is still not completely solved. I could open a new issue, but I believe that the fix is really simple so I hope we can address it here: In some models, the global attentions are computed, stored in `outputs`, but at the very last stage they are not returned. If I am not mistaken, the issue is in [modeling_longformer.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_longformer.py). At lines 1784-1789 the code is return LongformerMaskedLMOutput( loss=masked_lm_loss, logits=prediction_scores, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) but I think it should be return LongformerMaskedLMOutput( loss=masked_lm_loss, logits=prediction_scores, hidden_states=outputs.hidden_states, attentions=outputs.attentions, global_attentions=outputs.global_attentions, # <===== ) The same goes for lines 1876 and 2124 (but it is fine for lines 2029 and 2235). <|||||>This sounds correct to me! Would you mind opening a new PR? <|||||>I will do it, no problem.<|||||>I made a minimal pull request https://github.com/huggingface/transformers/pull/10906.
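For readers arriving from search, a sketch of how the finished feature is meant to be used, assuming a transformers release that already includes the fix discussed above:

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("Hello world", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the first token global attention

outputs = model(
    **inputs,
    global_attention_mask=global_attention_mask,
    output_attentions=True,
)
print(outputs.attentions[0].shape)         # local attention probabilities
print(outputs.global_attentions[0].shape)  # global attention probabilities
```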
transformers
7,513
closed
[Attention Mask] Fix data type
# What does this PR do? Fix data type error introduced by PR #7474. My bad!
10-01-2020 16:13:40
10-01-2020 16:13:40
transformers
7,512
closed
[XLNet] attention_mask / input_mask - Why two `attention_mask` inputs?
For whatever reason XLNet accepts both an `attention_mask` and an `input_mask`. As far as I understand, `attention_mask` = 1 - `input_mask`. I don't think having the same input twice (one is the inverse of the other) has any advantage. We should remove `input_mask` IMO (first deprecate, then remove). Also, `attention_mask` should be put in the 2nd position to make the model compatible with torchscript.
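A tiny illustration of the redundancy described above (float masks assumed, since that is what XLNet expects):

```python
import torch

attention_mask = torch.tensor([[1.0, 1.0, 1.0, 0.0]])  # 1 = real token, 0 = padding
input_mask = 1.0 - attention_mask                       # 1 = padding, 0 = real token
# Passing either of the two to XLNetModel should mask out the same padding positions.
```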
10-01-2020 15:34:33
10-01-2020 15:34:33
I agree! Deprecation until v4.0.0 seems like a reasonable solution,<|||||>(We'll need at least one release with it before 4.0.0 if we want to remove the deprecation at 4.0.0)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,511
closed
[Transfo-XL] Impossible to pass `attention_mask` to model
It is not possible to forward an `attention_mask` to transfo-xl because Transfo-XL's forward function does not accept an `attention_mask`. This makes it impossible to do batch generation with Transfo-XL for example. IMO, this could be implemented quite easily.
10-01-2020 15:31:21
10-01-2020 15:31:21
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,510
closed
[Reformer, Longformer, Roberta, GPT2, CTRL] attention_mask should be at second argument
Reformer, Longformer and Roberta have some models where the `attention_mask` is not at the second position. IMO, this was not done on purpose, but sloppy implementation. In order to use `torchscript` with `attention_mask`, the `forward()` args should be refactored. This is a breaking change however. Additionally, GPT2 and CTRL also don't have `attention_mask` at their 2nd position in the forward pass. One can argue that it's more intuitive to have `past_key_values` at the second position, but leaving it there would also mean that torchscript + `attention_mask` can never really be used with GPT2. I think we should re-order the position_ids here as well, even though this is a big breaking change.
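A hedged sketch of why the argument order matters: `torch.jit.trace` only takes positional example inputs, so tracing with a mask is only convenient when `attention_mask` is the second `forward()` argument (BERT is used below because it already has that order).

```python
import torch
from transformers import BertConfig, BertModel

model = BertModel(BertConfig(torchscript=True))
input_ids = torch.randint(0, 100, (1, 8))
attention_mask = torch.ones_like(input_ids)

# Works because attention_mask is Bert's second positional argument.
traced = torch.jit.trace(model, (input_ids, attention_mask))
```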
10-01-2020 15:29:34
10-01-2020 15:29:34
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,509
closed
[examples/s2s] clean up finetune_trainer
This PR 1. moves the `build_compute_metrics_fn` to `utils.py` because we need to be able to import it for `hparam` search with `Seq2SeqTrainer`. Could have made it top level but since rest of the helpers are in `utils.py`, moved it there. 2. Also moves the `Seq2SeqDataCollator` to `utils` as the dataset is also there. @sshleifer
10-01-2020 15:21:35
10-01-2020 15:21:35
Also https://github.com/huggingface/transformers/blob/a42f62d34f7b2acdb7298e586adbe9f0f28864ea/examples/seq2seq/seq2seq_trainer.py#L56 we are using `model.config` here, this breaks on multi-gpu, right ? and IMO we might not need this `assert` here <|||||>``` self.pad_token_id = self.model.config.pad_token_id ``` This will also break. Maybe we could just stuff the config or (`pad_token_id`) on `data_args`? <|||||>I may start `metrics.py` and move rouge/bleu funcs and their helpers in there as well. <|||||>> ``` > self.pad_token_id = self.model.config.pad_token_id > ``` > > This will also break. > Maybe we could just stuff the config or (`pad_token_id`) on `data_args`? we could pass `config` directly to `init`<|||||>works for me! <3 config
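A minimal sketch of that suggestion (illustrative names only, not the code that was merged): pass the config in explicitly so `pad_token_id` never has to be read from a possibly wrapped `self.model`.

```python
from transformers import Trainer

class Seq2SeqTrainerSketch(Trainer):
    def __init__(self, config, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.config = config
        # no dependency on self.model.config, which may be hidden behind (D)DP wrappers
        self.pad_token_id = config.pad_token_id
```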
transformers
7,508
closed
Fix Ray Tune progress_reporter kwarg
# What does this PR do? There's a small error in the Ray Tune kwarg parsing: The expected argument name is `progress_reporter`, not `reporter`. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @sgugger
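For reference, a hedged example of the fixed kwarg in use (`trainer` is assumed to be an already-built `Trainer` with a `model_init`, and the reporter settings are illustrative):

```python
from ray.tune import CLIReporter

best_run = trainer.hyperparameter_search(
    backend="ray",
    n_trials=10,
    progress_reporter=CLIReporter(metric_columns=["eval_loss"]),  # forwarded to tune.run
)
```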
10-01-2020 14:04:27
10-01-2020 14:04:27
Sorry by the way for not catching these at the same time. This should be all for now though!<|||||>Oh I remembered seeing this last week and thinking: This is wrong, I should fix it... but forgot... Thanks for following through!
transformers
7,507
closed
Report Tune metrics in final evaluation
# What does this PR do? This PR makes Ray Tune's tuning objective function report all metrics, not just the objective, in the final evaluation step. It also gets rid of the (unnecessary) return value. With these changes the training objective is fully compatible with Ray Tune's recently introduced strict metric checking. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @sgugger
10-01-2020 13:26:26
10-01-2020 13:26:26
Just to be sure, is it also backward compatible with older versions of Ray Tune?<|||||>Yes! Return values were never required, and in fact disregarded until recently. On October 1, 2020 2:38:00 PM GMT+01:00, Sylvain Gugger <[email protected]> wrote: >Just to be sure, is it also backward compatible with older versions of >Ray Tune? > >-- >You are receiving this because you authored the thread. >Reply to this email directly or view it on GitHub: >https://github.com/huggingface/transformers/pull/7507#issuecomment-702141017 <|||||>Thanks for the fix then!
transformers
7,506
closed
configuration_utils: fix handling of `id2labels`
# What does this PR do? The parameter `id2labels` of class `PretrainedConfig` is documented as `List[str]`, so enumerate() should be used rather than dict.items() in the constructor. Since a lot of code (including test code) passes `id2labels` as a dict, enumerate() is only used if it is a list indeed.
10-01-2020 13:04:56
10-01-2020 13:04:56
Hi, as you can see in [this example NER configuration file](https://s3.amazonaws.com/models.huggingface.co/bert/dslim/bert-base-NER/config.json), the `id2label` and `label2id` are actually dictionaries. Instead of doing the change you propose, changing the documentation to reflect that would be better. Thank you!<|||||>Wow, seems like someone else already fixed the documentation in the mean time.
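A minimal sketch matching the linked config (any `PretrainedConfig` subclass should work the same way, `BertConfig` is used here): `id2label` is a dict from ids to label names and `label2id` is its inverse.

```python
from transformers import BertConfig

labels = ["O", "B-PER", "I-PER"]
config = BertConfig(
    num_labels=len(labels),
    id2label={i: label for i, label in enumerate(labels)},
    label2id={label: i for i, label in enumerate(labels)},
)
print(config.id2label[1])  # 'B-PER'
```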
transformers
7,505
closed
added script for fine-tuning roberta for sentiment analysis task
# What does this PR do? Added a script in the community notebooks that fine-tunes RoBERTa for the sentiment analysis task.
10-01-2020 13:04:26
10-01-2020 13:04:26
transformers
7,504
closed
added script for fine-tuning roberta for sentiment analysis task
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
10-01-2020 13:01:42
10-01-2020 13:01:42
transformers
7,503
closed
Turning the SQuAD dataset class into an iterator to save ram and redistribute time
# 🚀 Feature request When I'm using the SquadDataset class I sometimes run out of RAM in my Colab session. It also takes a lot of time to process the data, so it would be better to preprocess each example lazily in __getitem__. I think we should turn the examples into an iterator rather than a list, and then convert each example to a feature inside __getitem__. ## Motivation As I mentioned, this can save a lot of RAM and redistribute the time, so the preprocessing happens on the fly. When I run this snippet in Google Colab it runs out of memory even though I have 12 GB of RAM ```python train_dataset = SquadDataset( args=data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) ``` ## Your contribution I can pretty much code all of it if you believe it would be a necessary feature
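A hedged sketch of the proposal (my own illustration, not the library's current implementation): keep only the lightweight `SquadExample` objects in memory and convert each one to features lazily in `__getitem__`. A real implementation would also handle the fact that one long example can produce several spans.

```python
from torch.utils.data import Dataset
from transformers import squad_convert_examples_to_features

class LazySquadDataset(Dataset):
    def __init__(self, examples, tokenizer, max_seq_length=384, doc_stride=128, max_query_length=64):
        self.examples = examples  # raw SquadExample objects only
        self.tokenizer = tokenizer
        self.max_seq_length = max_seq_length
        self.doc_stride = doc_stride
        self.max_query_length = max_query_length

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        # Tokenize on demand instead of materializing every feature up front.
        features = squad_convert_examples_to_features(
            examples=[self.examples[idx]],
            tokenizer=self.tokenizer,
            max_seq_length=self.max_seq_length,
            doc_stride=self.doc_stride,
            max_query_length=self.max_query_length,
            is_training=True,
            return_dataset=False,
        )
        return features[0]  # simplification: only the first span is returned
```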
10-01-2020 12:26:14
10-01-2020 12:26:14
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.