Dataset columns:
repo          stringclasses   1 value
number        int64           1 – 25.3k
state         stringclasses   2 values
title         stringlengths   1 – 487
body          stringlengths   0 – 234k
created_at    stringlengths   19 – 19
closed_at     stringlengths   19 – 19
comments      stringlengths   0 – 293k
transformers
4,900
closed
Latest version of transformers available via conda-forge?
On conda-forge the latest available version is 2.1.1. Why is it not updated to the latest release? Is there a plan to update it? We work in a restricted environment and are forced to use the conda-forge channel. Thanks
06-10-2020 10:29:56
06-10-2020 10:29:56
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,899
closed
Error using inputs_embeds argument in TFXLNetModel
While using the TFXLNetModel: `xlnet = TFXLNetModel.from_pretrained('xlnet-base-cased')` according to the docs, `input_ids` and `inputs_embeds` can be optionally used. However, when I tried using: `xlnet(inputs_embeds=embeddings, attention_mask=attn_masks)[0]` it throws: `ValueError: The first argument to Layer.call must always be passed.` which I thought is an issue with the `inputs` argument which must be a positional one: `xlnet(inputs=None, inputs_embeds=embeddings, attention_mask=attn_masks)[0]` using this gave me: `RuntimeError: Attempting to capture an EagerTensor without building a function.` And finally passing both `inputs` and `inputs_embeds` gave: `ValueError: You cannot specify both input_ids and inputs_embeds at the same time` Can someone suggest a workaround on this? P.S. the `embeddings` variable is the `last_hidden_state` from another bert which matches the config for the `inputs_embeds` shape. Note that `input_ids` parameter won't count as it gave the same error if I didn't use the `inputs` argument.
06-10-2020 10:24:54
06-10-2020 10:24:54
Hey, @patrickvonplaten I observed the same with TFBertModel. So pretty much evident that the issue must be with the parent (if at all that helps)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> > > While using the TFXLNetModel: > `xlnet = TFXLNetModel.from_pretrained('xlnet-base-cased')` > according to the docs, `input_ids` and `inputs_embeds` can be optionally used. However, when I tried using: > > `xlnet(inputs_embeds=embeddings, attention_mask=attn_masks)[0]` > it throws: `ValueError: The first argument to Layer.call must always be passed.` try something like this: xlnet({'attention_mask':attention_mask, 'token_type_ids':token_type_ids},inputs_embeds=embeddings, training=training) worked for me when getting the same error for a bert model <|||||>```python model_outputs = self.transformer( input_ids=None, # add this line inputs_embeds=dense_feature, attention_mask=attention_mask ) ```<|||||>Is this still relevant? Gently pinging @gante @Rocketknight1 here <|||||>I got around the issue by changing the code from this ``` config = MobileBertConfig() mbert = TFMobileBertModel(config) mbert(inputs={"input_ids":input_ids, "attention_mask":attention_mask}) ``` to ``` config = MobileBertConfig() mbert = TFMobileBertModel(config) mbert(input_ids=input_ids, attention_mask=attention_mask) ``` transformers version : 4.22.1
transformers
4,898
closed
update via web
06-10-2020 10:24:51
06-10-2020 10:24:51
transformers
4,897
closed
KeyError when using non-default models in Huggingface transformers pipeline
I have no problems using the default model in the sentiment analysis pipeline. ```# Allocate a pipeline for sentiment-analysis nlp = pipeline('sentiment-analysis') nlp('I am a black man.') >>>[{'label': 'NEGATIVE', 'score': 0.5723695158958435}] ``` But, when I try to customise the pipeline a little by adding a specific model. It throws a KeyError. ``` nlp = pipeline('sentiment-analysis', tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational"), model = AutoModelWithLMHead.from_pretrained("DeepPavlov/bert-base-cased-conversational")) nlp('I am a black man.') >>>--------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-55-af7e46d6c6c9> in <module> 3 tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational"), 4 model = AutoModelWithLMHead.from_pretrained("DeepPavlov/bert-base-cased-conversational")) ----> 5 nlp('I am a black man.') 6 7 ~/opt/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs) 721 outputs = super().__call__(*args, **kwargs) 722 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True) --> 723 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores] 724 725 ~/opt/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0) 721 outputs = super().__call__(*args, **kwargs) 722 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True) --> 723 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores] 724 725 KeyError: 58129 ``` # Question on SO -->https://stackoverflow.com/questions/62300836/keyerror-when-using-non-default-models-in-huggingface-transformers-pipeline
06-10-2020 10:10:43
06-10-2020 10:10:43
The "sentiment-analysis" pipeline is only compatible with text classification models, i.e. those that can be loaded without error with `AutoModelForSequenceClassification`
transformers
4,896
closed
[WIP] Add early stopping to the trainer
closes #4894
06-10-2020 09:53:33
06-10-2020 09:53:33
Well that was quick. Awesome! Let me know if you need some help (for the pytorch part) or code review.<|||||>Looks like a duplicate of https://github.com/huggingface/transformers/pull/4186<|||||>> Looks like a duplicate of #4186 You are absolutely right. Closing this in favour of https://github.com/huggingface/transformers/pull/4186
transformers
4,895
closed
How do I fine-tune hyperparameters for a model from Huggingface library
# ❓ Questions & Help ## Details Hi, I am new to Hugging-face library and want to fine tune hyper-parameters of the mBert model. I have a simple classification head on top the cls token. I am getting around 76% accuracy. Is there any read-me or notebook available for doing the same.
06-10-2020 09:39:22
06-10-2020 09:39:22
You can take a look at this blog post https://mccormickml.com/2019/07/22/BERT-fine-tuning/<|||||>Thank You
transformers
4,894
closed
🚀 Add early stopping to the trainer
# 🚀 Feature request The trainer ([pt](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py), [tf](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py)) is an easy access point for users who would rather not spend too much time building their own trainer class but prefer an out-of-the-box solution. Even though `transformers` was never meant to be a fully fledged training library, it might please users to add an additional feature: early stopping. ## Motivation Early stopping ensures that the trainer does not needlessly keep training when the loss does not improve. This saves time, money, and let's not forget the trees. 😉 Performance-wise this should not lead to different results. ## Your contribution At the moment I cannot work on this, but here are my thoughts: - a training argument should be added ([pt](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py), [tf](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py)). This would only work when `evaluate_during_training` is enabled. - for PyTorch: at every evaluation step, an early stopper (can be a separate class even) checks if the loss has improved in the last n steps. Potentially with a minimal threshold that the loss should have improved. If not, the trainer should stop - for Tensorflow: I don't have experience with TF myself, but I assume one could use [`tf.keras.callbacks.EarlyStopping`](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping).
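For illustration, a minimal sketch of the kind of early-stopping helper described in this request, assuming the monitored metric is an evaluation loss checked at every evaluation step; the class name, the `patience`/`min_delta` parameters, and the toy loop are made up for this example and are not an existing Trainer API.

```python
class EarlyStopping:
    def __init__(self, patience: int = 3, min_delta: float = 0.0):
        self.patience = patience      # evaluations without improvement to tolerate
        self.min_delta = min_delta    # minimal decrease that counts as an improvement
        self.best = float("inf")
        self.bad_evals = 0

    def should_stop(self, eval_loss: float) -> bool:
        if eval_loss < self.best - self.min_delta:
            self.best = eval_loss
            self.bad_evals = 0
        else:
            self.bad_evals += 1
        return self.bad_evals >= self.patience


# Toy usage with a made-up sequence of evaluation losses.
stopper = EarlyStopping(patience=2)
for loss in [0.9, 0.8, 0.81, 0.82, 0.7]:
    if stopper.should_stop(loss):
        print("stopping early at loss", loss)
        break
```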
06-10-2020 08:24:24
06-10-2020 08:24:24
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Looking at the interest this topic has, I am bumping it to re-open it.<|||||>Hi, So when #4186 is closed, this will close as well? Or is there any more changes expected. on this issue, apart from what #4186 adds? Thanks <|||||>If I've understood things correctly, I think #4186 only addresses the Pytorch implementation of the trainer. @BramVanroy if that's the case I'm happy to work on implementing this feature in Tensorflow (trainer_tf.py).<|||||>@san7988 @KMFODA This issue should not directly be closed when that PR is merged because as @KMFODA mentions, it only seems to address PyTorch. A PR for Tensorflow is also welcome!<|||||>Thanks for clarifying @BramVanroy. Apologies I was out for the past month due to a personal issue. I'll submit a PR for Tensorflow early stopping now.<|||||>An early stopping callback has now been introduced in the PyTorch trainer by @cbrochtrup! 👏 AFAIK the implementation the TF Trainer is still under way (https://github.com/huggingface/transformers/pull/7533) so I'll keep this topic open for now.<|||||>I gather from the conversation on #7533 that this issue should now be closed; is that correct, @BramVanroy ?
transformers
4,893
closed
🐛 TPU Training broken due to recent changes
# 🐛 Bug Looks like due to changes in file_utils.py, the TPU Training has become broken. Reverting transformers to a version before https://github.com/huggingface/transformers/commit/2cfb947f59861d5d910f84eba3be57da200b5599 fixes the problem. ## Information Seems like file_utils.py is trying to reinitialize the TPU system right after being imported. This fails because xla_spawn.py has already initialized the TPU. Model I am using (Bert, XLNet ...): roberta (but doesn't matter) Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: With a setup capable of training on TPU, replicating the official language modeling example ``` /transformers/examples$ python xla_spawn.py --num_cores 8 language-modeling/run_language_modeling.py --output_dir=output --model_type=roberta --model_name_or_path=roberta-base --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> The failure stacktrace- ``` File "/home/saurabh/chat-ai/vendor/transformers/examples/language-modeling/run_language_modeling.py", line 29, in <module> self = reduction.pickle.load(from_parent) from transformers import ( File "/home/saurabh/chat-ai/vendor/transformers/examples/language-modeling/run_language_modeling.py", line 29, in <module> File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/__init__.py", line 23, in <module> from transformers import ( File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/__init__.py", line 23, in <module> from transformers import ( from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/__init__.py", line 23, in <module> File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_albert.py", line 18, in <modul e> from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_albert.py", line 18, in <modul e> from .configuration_utils import PretrainedConfig File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_utils.py", line 25, in <module > from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_albert.py", line 18, in <modul e> from .configuration_utils import PretrainedConfig File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_utils.py", line 25, in <module > from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/file_utils.py", line 76, in 
<module> from .configuration_utils import PretrainedConfig from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_utils.py", line 25, in <module > File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/file_utils.py", line 76, in <module> tpu_device = xm.xla_device() from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 146, in xla_device File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/file_utils.py", line 76, in <module> tpu_device = xm.xla_device() File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 146, in xla_device tpu_device = xm.xla_device() devkind=[devkind] if devkind is not None else None) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 146, in xla_device File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 50, in get_xla_support ed_devices devkind=[devkind] if devkind is not None else None) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 50, in get_xla_support ed_devices xla_devices = torch_xla._XLAC._xla_get_devices() devkind=[devkind] if devkind is not None else None) RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:1245 : Check failed: session.Run({tensorflow::Output (result, 0)}, &outputs) == ::tensorflow::Status::OK() (Already exists: From /job:tpu_worker/replica:0/task:0: 2 root error(s) found. (0) Already exists: Resource localhost/tpu_mesh_common_state/N10tensorflow3tpu21TpuMeshStateInterfaceE [[{{node configure_distributed_tpu/_0}}]] (1) Already exists: Resource localhost/tpu_mesh_common_state/N10tensorflow3tpu21TpuMeshStateInterfaceE [[{{node configure_distributed_tpu/_0}}]] 0 successful operations. 0 derived errors ignored. vs. OK) ``` ## Expected behavior Model trains ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 (master) - Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12 - Python version: 3.6.10 - PyTorch version (GPU?): 1.6.0a0+af05158 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: yes, 8 way parallelism with xla_spawn.py
06-10-2020 07:47:12
06-10-2020 07:47:12
See https://github.com/huggingface/transformers/issues/4814 as well. Over there the TPU evaluation is broken. To make the TPU pipeline reliable, an end-to-end test could really help.<|||||>Hi! Thank you for raising this issue, I'll take a look. Of course, having an end-to-end test would really help. Unfortunately, such suites don't exist with TPU right now.<|||||>Thank you for fixing this so quickly!
transformers
4,892
closed
Training RoBerta using transformers on masked language task giving weird results
# ❓ Questions & Help ## Details I trained a RoBERTa model following this colab - https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=XaFAsB_fnU3K Here is how my data looked: ``` Merkel bemoans lack of rain as Germany fears for its forests .\n Germany’s forests, covering a third of its territory and as much a part of its cultural landscape as its physical one, are in danger.\n An aerial view shows a forest near Gummersbach, Germany, April 24, 2020, following an unusually warm, dry winter after a summer of record temperatures leaving forests dried out.\n Picture taken with a drone.\n The last two exceptionally hot and dry summers have weakened millions of trees, undermining their defences against the bark beetle, which can be fatal to ancient woodlands.\n And after an exceptionally dry April, with summer still two months away, a forest fire has already had to be put out near the town of Gummersbach in western Germany this week.\n “We’re already noticing these days that it’s not raining enough in many areas. ``` After training the model I used pipeline from the transforms library for the fill_mask task ``` from transformers import pipeline fill_mask = pipeline( "fill-mask", model="./output", tokenizer="./output" fill_mask("Merkel bemoans lack of rain as <mask> fears for its forests") ) ``` These are the results: ``` [{'sequence': '<s> Merkel bemoans lack of rain as. fears for its forests</s>', 'score': 0.040456026792526245, 'token': 18}, {'sequence': '<s> Merkel bemoans lack of rain as, fears for its forests</s>', 'score': 0.03502459451556206, 'token': 16}, {'sequence': '<s> Merkel bemoans lack of rain as the fears for its forests</s>', 'score': 0.03497963398694992, 'token': 269}, {'sequence': '<s> Merkel bemoans lack of rain as\n fears for its forests</s>', 'score': 0.03180328756570816, 'token': 203}, {'sequence': '<s> Merkel bemoans lack of rain as to fears for its forests</s>', 'score': 0.020796578377485275, 'token': 288}] ``` As you can see there is no meaningful word(s) returned only punctuations and one other word (to) which doesn't make sense. What am i doing wrong here? Do I have to remove all punctuations? <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62276011/training-roberta-using-transformers-on-masked-language-task-giving-weird-results
06-10-2020 07:11:02
06-10-2020 07:11:02
Still facing this problem. Has anyone else encountered something similar? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I also face this problem. How to solve it?
transformers
4,891
closed
🐛 [TFTrainer] `dataloader_drop_last` unused
# 🐛 Bug The argument `dataloader_drop_last` appears not to be used in `TFTrainer`. This is a problem when we need a static batch size. https://github.com/huggingface/transformers/blob/e8db8b845a971b0cf63a0896b9deb5b316028a8b/src/transformers/trainer_tf.py#L68-L73 ## Expected behavior The argument `dataloader_drop_last` is used when batching the dataset.
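For reference, a small sketch (not the actual `TFTrainer` code) of how `drop_remainder` in `tf.data.Dataset.batch` gives the static batch size that `dataloader_drop_last` is meant to provide:

```python
import tensorflow as tf

drop_last = True   # stands in for TrainingArguments.dataloader_drop_last
batch_size = 8
ds = tf.data.Dataset.range(100)
# drop_remainder=True discards the final partial batch, so every batch has a static size.
batched = ds.batch(batch_size, drop_remainder=drop_last)
for batch in batched.take(2):
    print(batch.shape)  # (8,) each time
```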
06-10-2020 05:24:15
06-10-2020 05:24:15
Thanks for bringing this up, @Colanim! I worked on #4757, and didn't realize the same could automatically extend to TFTrainer as well. Please take a look at #4925 to see if it'd solve this for you.
transformers
4,890
closed
encode_plus( ) function for the GPT-2 Tokenizer
Hello, From the GPT-2 Tokenizer section of the Hugging Face Transformer documentation, the documentation says that: ``` GPT-2 BPE tokenizer. Peculiarities: Byte-level Byte-Pair-Encoding Requires a space to start the input string => the encoding methods should be called with the add_prefix_space flag set to True. Otherwise, this tokenizer encode and decode method will not conserve the absence of a space at the beginning of a string: ``` If I use the `encode_plus()` function (not `encode( )`) to encode my sentences (doing something like `encode_plus("Hi there")['input_ids']`, instead of directly using the `encode()` function), do I still need to place a space at the start of every input string? Thank you,
06-10-2020 01:01:50
06-10-2020 01:01:50
Sure, and you can use the `add_prefix_space` flag to do that.
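A quick sketch of what that looks like in practice; the exact behaviour of the flag may vary slightly across library versions, so treat this as illustrative rather than definitive.

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# With add_prefix_space=True there is no need to manually prepend a space.
with_flag = tokenizer.encode_plus("Hi there", add_prefix_space=True)["input_ids"]
with_space = tokenizer.encode(" Hi there")
print(with_flag == with_space)  # expected: True
```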
transformers
4,889
closed
[RFC] Tokenizer.prepare_seq2seq_batch
This introduces a method, `Tokenizer.prepare_seq2seq_batch`, that calls batch_encode_plus twice and prepares inputs for seq2seq models. The seq2seq finetuning example and some seq2seq unit tests call batch_encode_plus twice. This seems like it should be the work of the tokenizer, and MarianTokenizer, BartTokenizer, and T5Tokenizer can expose/overwrite this method. Wondering what others think before I add tests/fix callers.
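A rough sketch of the idea; the function signature and keyword arguments below are illustrative, not the final API proposed in this PR. The tokenizer encodes source and target texts with two `batch_encode_plus` calls and returns a single batch dict, which model-specific tokenizers could then override to add their own special tokens.

```python
def prepare_seq2seq_batch(tokenizer, src_texts, tgt_texts, max_length=1024):
    # Encode the encoder inputs.
    batch = tokenizer.batch_encode_plus(
        src_texts, max_length=max_length, padding="longest", truncation=True, return_tensors="pt"
    )
    # Encode the target side and attach it to the same dict.
    targets = tokenizer.batch_encode_plus(
        tgt_texts, max_length=max_length, padding="longest", truncation=True, return_tensors="pt"
    )
    batch["decoder_input_ids"] = targets["input_ids"]
    return batch
```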
06-10-2020 00:58:14
06-10-2020 00:58:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=h1) Report > Merging [#4889](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e603cb7892b49a2cbbc10ba859759f92c3fb7a6&el=desc) will **decrease** coverage by `0.03%`. > The diff coverage is `16.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4889/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4889 +/- ## ========================================== - Coverage 77.00% 76.96% -0.04% ========================================== Files 128 128 Lines 21602 21614 +12 ========================================== + Hits 16634 16636 +2 - Misses 4968 4978 +10 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `88.68% <16.66%> (-1.02%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=footer). Last update [6e603cb...560abea](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I don't think this method is really necessary. Also, it doesn't allow to prepare decider_inputs in batches since kwargs is only given to then encoder_inputs<|||||>I agree with @patrickvonplaten . Also I'm currently removing `trim_batch` from `tokenization_utils` since only the BART summarization example uses it. I think it's better to keep all these small helpers in examples scripts unless you find other members of the team interested in using them as well or unless you can propose a larger modification of the general API to incorporate them seamlessly in the general abstractions we use (tokenizers, etc.).<|||||>I think @joeddav used `trim_batch`, but fine to keep it in examples/
transformers
4,888
closed
Previous commit introduces bug in `convert_pytorch_checkpoint_to_tf2.py`
This issue concerns the conversion tool `convert_pytorch_checkpoint_to_tf2.py`. Commit d4c2cb402d6674211726fd5f4803d1090664e438 removed the `*PRETRAINED_MODEL_ARCHIVE_MAP` from the imports, so that each key in `MODEL_CLASSES` is now associated with a set of 4 values. For instance: https://github.com/huggingface/transformers/blob/e8db8b845a971b0cf63a0896b9deb5b316028a8b/src/transformers/convert_pytorch_checkpoint_to_tf2.py#L109 However, in `convert_all_pt_checkpoints_to_tf`, `MODEL_CLASSES` is still expected to unpack 5 values (among which the model maps), which raises an error: https://github.com/huggingface/transformers/blob/e8db8b845a971b0cf63a0896b9deb5b316028a8b/src/transformers/convert_pytorch_checkpoint_to_tf2.py#L259 A typical command is: ```bash python src/transformers/convert_pytorch_checkpoint_to_tf2.py \ --tf_dump_path serialization_dir/weights_release/1st_weight_release/prunebert-base-uncased-6-finepruned-w-distil-squad/ \ --model_type bert-large-uncased-whole-word-masking-finetuned-squad \ --pytorch_checkpoint_path /serialization_dir/weights_release/1st_weight_release/prunebert-base-uncased-6-finepruned-w-distil-squad/ \ --compare_with_pt_model ``` I didn't really follow why the `*PRETRAINED_MODEL_ARCHIVE_MAP` maps were removed, so I'm not sure what the best course of action is here. Victor
06-10-2020 00:24:57
06-10-2020 00:24:57
I encountered the same problem. If you were converting a local `pytorch_model.bin` model, you can try this somewhere around line 258 in `convert_pytorch_checkpoint_to_tf2.py` ``` aws_model_maps = {} config_class, model_class, pt_model_class, aws_config_map = MODEL_CLASSES[model_type] # config_class, model_class, pt_model_class, aws_model_maps, aws_config_map = MODEL_CLASSES[model_type] ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,887
closed
warn with FutureWarning when using `output_attentions` in the configu…
…ration
06-09-2020 23:07:24
06-09-2020 23:07:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=h1) Report > Merging [#4887](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/13aa174112f0c2ee794c44188ecf13b241694db0&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `80.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4887/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4887 +/- ## ========================================== + Coverage 76.97% 76.99% +0.01% ========================================== Files 128 128 Lines 21602 21607 +5 ========================================== + Hits 16629 16636 +7 + Misses 4973 4971 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4887/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.80% <80.00%> (-0.58%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4887/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4887/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=footer). Last update [13aa174...513ba3b](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Are we actually planning on completely removing `output_attentions` from the config? I changed my mind a bit in that I think we can keep the hierarchy 1. use function argument 2. if nothing is provided => use config parameter as is done for the generation arguments. Also another small advantage of keeping it in the config would be that attentions can easily be outputted when using torchscript. What is your opinion on that @thomwolf ?<|||||>Ah, I wasn't aware of that, I thought we were deprecating them to be later removed :sweat_smile: In that case, we should add back the documentation regarding `output_attentions` that [was removed](https://github.com/huggingface/transformers/commit/6e603cb7892b49a2cbbc10ba859759f92c3fb7a6#diff-0f9b535706b4f09eb22f7189c6c9039cL46-L49) in #4538.<|||||>I wanted to do a PR about this and also add `use_cache` correctly back to the configs<|||||>Cool, sounds good @patrickvonplaten
transformers
4,886
closed
Deal with multiple choice in common tests
It's a bit heavy but I didn't find another way to reshape the inputs when needed for the multiple choice model. With this, and skipping the input_embeds test when the model is a multiple choice one (current implementation requires `input_ids`), I manage to have the common tests passing for `BertForMultipleChoice`. Let me know if you have other ideas!
06-09-2020 21:43:22
06-09-2020 21:43:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=h1) Report > Merging [#4886](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.09%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4886/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4886 +/- ## ========================================== + Coverage 76.46% 76.56% +0.09% ========================================== Files 128 128 Lines 21502 21502 ========================================== + Hits 16442 16463 +21 + Misses 5060 5039 -21 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=footer). Last update [02e5f79...b78ed3a](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,885
closed
Add AlbertForMultipleChoice
Another [model missing](https://github.com/huggingface/transformers/projects/17). While implementing it I noticed two things: - the example in `BertForMultipleChoice` wasn't working, so fixed it. - some model classes were missing in the `all_model_classes` in the test, fixed it for albert and bert, will look at the other tests in a separate PR dedicated to that.
06-09-2020 20:23:02
06-09-2020 20:23:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=h1) Report > Merging [#4885](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0340b30310cb78555c6f78bed7262101f251940&el=desc) will **increase** coverage by `0.10%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4885/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4885 +/- ## ========================================== + Coverage 76.47% 76.57% +0.10% ========================================== Files 128 128 Lines 21502 21528 +26 ========================================== + Hits 16443 16486 +43 + Misses 5059 5042 -17 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.60% <ø> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.27% <ø> (ø)` | | | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.68% <100.00%> (+1.34%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.58% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (ø)` | | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=footer). Last update [f0340b3...09b0fd5](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This needs rework now that #4921 has been merged, so do not merge just yet.<|||||>Ugh, rebase went wrong, closing...
transformers
4,884
closed
Fix a bug in the initialization and serialization of TFRobertaClassificationHead
For `TFRobertaClassificationHead`, `config` was being passed as the first parameter to the `__init__` of the parent class `tf.keras.layers.Layer`. The latter [expects](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) trainable as the first parameter. This fixes #4709 and #3664, making the TFRoberta models serializable to `savedmodel` format too.
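A minimal sketch of the failure mode and the fix; the class and attribute names here are illustrative, not the exact code in this PR.

```python
import tensorflow as tf

class ClassificationHead(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        # Fixed: only keyword arguments reach Layer.__init__; forwarding `config`
        # positionally would be interpreted as the `trainable` argument.
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(config.hidden_size, activation="tanh")

    def call(self, features):
        return self.dense(features)
```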
06-09-2020 19:55:50
06-09-2020 19:55:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=h1) Report > Merging [#4884](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4884/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4884 +/- ## ========================================== + Coverage 76.46% 76.49% +0.02% ========================================== Files 128 128 Lines 21502 21502 ========================================== + Hits 16442 16448 +6 + Misses 5060 5054 -6 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.74% <100.00%> (ø)` | | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `55.55% <0.00%> (-33.34%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `86.48% <0.00%> (-6.31%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.87% <0.00%> (-0.57%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.46% <0.00%> (-0.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=footer). Last update [02e5f79...9693577](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). 
<|||||>Ahah @LysandreJik you have been too fast :smile: <|||||>Not only this class should have been updated but all that inherit directly from `tf.keras.layers.Layer`. I will do the rest :) Thanks a lot @harkous very nice catch!!!<|||||>Thanks! @jplu I tried to verify whether other classes that inherit directly from `tf.keras.layers.Layer` have the same issue but couldn't find any that directly passes `config`. Feel free to double check though.<|||||>Great, thanks a lot @jplu !<|||||>Oh I didn't know you have already checked that. It is more more than perfect then!! Sorry for my previous post, my bad.
transformers
4,883
closed
check type before logging in trainer to ensure values are scalars
This change was necessary to avoid https://github.com/lanpa/tensorboardX/issues/567 since a non-scalar value was being passed in.
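An illustrative guard, not the exact patch, showing the intent: only scalar values reach the TensorBoard writer, and anything else is reported instead of raising.

```python
import logging
import numbers

logger = logging.getLogger(__name__)

def log_to_tensorboard(tb_writer, logs, global_step):
    for key, value in logs.items():
        if isinstance(value, numbers.Number):
            tb_writer.add_scalar(key, value, global_step)
        else:
            logger.warning("Dropping non-scalar log value for %r: %r", key, value)
```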
06-09-2020 18:56:48
06-09-2020 18:56:48
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=h1) Report > Merging [#4883](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4883/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4883 +/- ## ======================================= Coverage 76.55% 76.56% ======================================= Files 128 128 Lines 21502 21504 +2 ======================================= + Hits 16461 16464 +3 + Misses 5041 5040 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.67% <66.66%> (+0.99%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (-0.32%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=footer). Last update [9f5d5a5...9c9d9c3](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> Wouldn't the more robust fix be to change these values to scalars if they're not? > > In the case were a string is passed (which seems to be your case), I think a warning would be better than a silently not registering anything. What do you think? I agree that logging the non-scalar values would be an improvement, and I'll update this PR to that effect. I haven't characterized what the exact error is, so I'm not sure that we can even cast the troublesome values to scalars. Thank you very much for the good feedback here @LysandreJik.<|||||>@LysandreJik please feel free to suggest a different log level or log message from that in a33e28b
transformers
4,882
closed
fix huggingface/tokenizers#297 in 0.8.0
06-09-2020 18:27:22
06-09-2020 18:27:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=h1) Report > Merging [#4882](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.09%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4882/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4882 +/- ## ========================================== + Coverage 76.46% 76.56% +0.09% ========================================== Files 128 128 Lines 21502 21502 ========================================== + Hits 16442 16462 +20 + Misses 5060 5040 -20 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.69% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=footer). Last update [02e5f79...1ce1fd3](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,881
closed
Fix TensorFlow dataset generator
Should fix #4856 The method `glue_convert_examples_to_features` returns a badly formatted TensorFlow dataset in case the model doesn't use `token_type_ids` as a feature, such as DistilBert. The fix is to detect whether the feature `token_type_ids` should belong to the TensorFlow dataset or not. I'm not really happy with the fix, @julien-c and @LysandreJik, do you have a better way to handle this? Note: do not forget that the same fix should be applied to the other `xxx_examples_to_features` methods for the other dataset processors in `src/data/processors`.
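One possible way to express that check, shown only as a sketch of the idea rather than the code in this PR: ask the tokenizer which inputs its model expects and include `token_type_ids` only when it is listed.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# model_input_names lists the features the corresponding model consumes.
has_token_type_ids = "token_type_ids" in tokenizer.model_input_names
print(has_token_type_ids)  # False for DistilBERT, True for BERT
```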
06-09-2020 18:17:21
06-09-2020 18:17:21
Nice!! I didn't know that parameter ^^ Does it seems better now?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=h1) Report > Merging [#4881](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.07%`. > The diff coverage is `7.69%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4881/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4881 +/- ## ========================================== + Coverage 76.46% 76.54% +0.07% ========================================== Files 128 128 Lines 21502 21511 +9 ========================================== + Hits 16442 16465 +23 + Misses 5060 5046 -14 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.21% <0.00%> (-0.45%)` | :arrow_down: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <20.00%> (-0.36%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=footer). Last update [02e5f79...2060038](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok, if this one is ok I'm gonna update the other methods the same way.<|||||>Should be ok now.<|||||>@julien-c @thomwolf @LysandreJik any issue to merge this?<|||||>@LysandreJik anything else to merge?
transformers
4,880
closed
AutoModelForSequenceClassification not working with prunebert model
I am having issues loading the new prunebert model for sequence classification using AutoModelForSequenceClassification.from_pretrained(). ``` from transformers import AutoModelForSequenceClassification, BertForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli") ``` The above code produces the error: KeyError: 'masked_bert'. The model loads fine using BertForSequenceClassification.from_pretrained, the issue only seems to occur with AutoModelForSequenceClassification. ``` model = BertForSequenceClassification.from_pretrained('huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli') ```
06-09-2020 18:11:52
06-09-2020 18:11:52
This is because the `huggingface/prunebert-xxx` configurations have a model type `masked_bert`. Since these files should be loaded directly in `BertForXXX` classes, it would probably be best to update that field to `bert`, right @julien-c?<|||||>Yes. Can you confirm @VictorSanh?<|||||>Seems to be working fine now, thanks @LysandreJik and @julien-c
transformers
4,879
closed
[Draft] Prevent KeyError in QA pipeline
closes #4873 With the question answering pipeline, sometimes the model selects an answer that is out of bounds. This ensures that it will select the maximum token it can select, and prevents KeyErrors from happening
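A toy illustration of the guard; the numbers are made up and this is not the pipeline code itself. The argmax is clamped so it can never point past the last feature token.

```python
import numpy as np

end_scores = np.array([0.1, 0.2, 0.05, 0.3, 0.25, 0.9])
num_feature_tokens = 4
# Clamp the prediction to the valid token range instead of letting it go out of bounds.
end_index = min(int(end_scores.argmax()), num_feature_tokens - 1)
print(end_index)  # 3 instead of the out-of-range 5
```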
06-09-2020 17:07:37
06-09-2020 17:07:37
transformers
4,878
closed
BartTokenizerFast
This PR adds `BartTokenizerFast` by subclassing `RobertaTokenizerFast` @sshleifer @mfuntowicz
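Conceptually the change is tiny; a stripped-down sketch is below (the real class also registers its own vocab/merges file maps, which are omitted here).

```python
from transformers import RobertaTokenizerFast

class BartTokenizerFast(RobertaTokenizerFast):
    # BART reuses RoBERTa's byte-level BPE, so the fast tokenizer can inherit everything.
    pass
```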
06-09-2020 16:51:33
06-09-2020 16:51:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=h1) Report > Merging [#4878](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/86578bb04c9b34f9d8e35cd4fad42a85910dd9e9&el=desc) will **decrease** coverage by `0.38%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4878/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4878 +/- ## ========================================== - Coverage 77.55% 77.16% -0.39% ========================================== Files 128 128 Lines 21791 21794 +3 ========================================== - Hits 16899 16818 -81 - Misses 4892 4976 +84 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.12% <100.00%> (+0.38%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.73% <0.00%> (+0.46%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=footer). Last update [86578bb...1752816](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I think we will need a test, there are no tests for `RobertaTokenizerFast` in `test_tokenization_roberta.py` file.<|||||>The test is in `test_tokenization_fast`, so I think we're OK on that front. Going to wait for @n1t0 or @mfuntowicz to approve and merge, because they are working on this concurrently.
transformers
4,877
closed
ProphetNet
# 🌟 New model addition ProphetNet ## Model description ProphetNet introduces a novel self-supervised objective named future n-gram prediction and the proposed n stream self-attention mechanism. Instead of the optimization of one-step-ahead prediction in the traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations <!-- Important information --> ## Open source status * [X] the model implementation is available: (give details) https://github.com/microsoft/ProphetNet * [X] the model weights are available: (give details) Weights for both small and large pre-trained dataset version of models are available https://github.com/microsoft/ProphetNet#pre-trained-models * [X] who are the authors: (mention them, if possible by @gh-username) Yu Yan @yuyan2do , Weizhen Qi @weizhen
06-09-2020 16:18:55
06-09-2020 16:18:55
@aretius Thank you for mentioning ProphetNet. ProphetNet for huggingface is scheduled as you suggested. <|||||>@qiweizhen this sounds great, I would love to give it a go. Any planned date for delivering this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,876
closed
[examples] Cleanup summarization docs
Don't think we need a download_cnn_dailymail.py script
06-09-2020 15:19:56
06-09-2020 15:19:56
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=h1) Report > Merging [#4876](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **decrease** coverage by `0.59%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4876/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4876 +/- ## ========================================== - Coverage 76.46% 75.86% -0.60% ========================================== Files 128 128 Lines 21502 21502 ========================================== - Hits 16442 16313 -129 - Misses 5060 5189 +129 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4876/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `28.15% <0.00%> (-63.03%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4876/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=footer). Last update [02e5f79...954557d](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,875
closed
Inconsistent number of vocab from pretrained T5Tokenizer and T5ForConditionalGeneration
# ❓ Questions & Help The pretrained `T5Tokenizer` has a vocab size of 32100 (32000 tokens plus 100 extra_ids), but the shared embedding layer of `T5ForConditionalGeneration` has a size of (32128, 768). I checked the google-research implementation of T5 and found that it also has a vocab size of 32100. Where did the extra 28 embeddings come from, and how can we map them to the tokenizer? ## To reproduce

```
from transformers import (
    T5Tokenizer,
    T5ForConditionalGeneration,
)

tokenizer_pretrained = T5Tokenizer.from_pretrained('t5-base')
model_pretrained = T5ForConditionalGeneration.from_pretrained('t5-base')

len(tokenizer_pretrained.get_vocab()), model_pretrained.state_dict()['shared.weight'].shape
```

Output:

```
(32100, torch.Size([32128, 768]))
```
06-09-2020 14:18:42
06-09-2020 14:18:42
Hey @cstorm125,

I think those `28` leftover embeddings are simply not used. The reason why the embedding matrix is of length 32128, as far as I know, is simply that 32128 is a more GPU-friendly number (`32128 = 128 * 251`) than `32100 = 4 * 8025`. The GPU is usually more efficient when the dimension is a multiple of a power of two (here, a multiple of 128), even though 32128 itself is not a power of two.

Also see: https://www.quora.com/Why-should-I-choose-a-mini-batch-size-of-32-64-128-256-etc-i-e-a-power-of-two-and-not-a-size-of-50-100-500-1000-Is-there-any-benefit-of-choosing-power-of-two-mini-batch-sizes <|||||>Hi all, I ran into this too, and I did find a bug as a result of this mismatch: I tried to resize the embedding to be smaller and got a CUDA assert error. See the bug report: https://github.com/huggingface/transformers/issues/8643 <|||||>I found this mismatch recently and I think it may result in many bugs. I wish someone would fix it.<|||||>> Hey @cstorm125,
>
> I think those `28` leftover embeddings are simply not used. The reason why the embedding matrix is of length 32128, as far as I know, is simply that 32128 is a more GPU-friendly number (`32128 = 128 * 251`) than `32100 = 4 * 8025`.

This is wrong. It shouldn't be this way. In case the model predicts a wrong index, it will cause serious issues when you calculate the loss. It's hard to believe no one cares about this. <|||||>Hey @s4sarath,

During training all input_ids and labels are defined by the tokenizer. If the tokenizer has a vocab_size of 32000 there is no way that it will tokenize to an id >= 32000, neither for `input_ids` nor for `labels`. Because no label ever has an id >= 32000, the model learns to never predict those ids. I don't really see a problem with this, to be honest.<|||||>Hi Patrick, thanks for the reply. If the embedding matrix is 32128 x d and the predicted id is, say, 32099, and we are using the SentencePiece tokenizer (not the huggingface one), it will fail to decode that. And the special tokens (100 tokens) are added extra and are not actually part of the official sentencepiece model. That's why I said it shouldn't be that way. Thanks anyway, I really appreciate your reply. :-)<|||||>Upvoting this. Another subtle bug this causes is when doing prompt tuning. The common way to do it is to call `add_tokens` to add some special prompt tokens, and also create a special embedding class that consists of two embedding matrices, the original one plus one for the prompt tokens, where the forward call simply indexes into the two matrices concatenated together. Then all parameters but the prompt token embedding matrix are frozen. The expected behavior is that the IDs of the added tokens correspond to the prompt token embeddings when concatenated with the original. However, this mismatch causes the tokenizer to assign IDs starting from 32100, which are still a part of the original embedding matrix, which doesn't get gradients.<|||||>Temporary solution: `model.resize_token_embeddings(len(tokenizer))` <|||||>I just found that it sometimes generates input ids > 32100 in the generate function. That especially happens if I evaluate a fine-tuned model at a very early step during training. Thanks, @Darshan2104! `model.resize_token_embeddings(len(tokenizer))` temporarily resolves my issue.<|||||>I am also facing the `IndexError: index out of range in self` issue due to this difference between the vocab size in the T5 tokenizer and the model for conditional generation. Should I resize the model token embeddings?<|||||>> Temporary solution: `model.resize_token_embeddings(len(tokenizer))`

I tried this but it is not helping.<|||||>@kanak8278, could you double-check that you are using the right tokenizer for the model? For the model, could you show me what happens when you run this code?

```python
{n: p.shape for n, p in model.named_parameters() if "embedding" in n}
```

For the tokenizer, could you do `len(tokenizer)` and report what it says? And then could you run this on your input ids? `torch.tensor(input_ids).max()`<|||||>This is a bit troubling, especially because I'm only interested in using a model for inference. I'm generating some sequences using multinomial sampling from the `pythia-70M` model. When I attempt to obtain the scores corresponding to a generated sequence, I get a CUDA assertion (which, when running on the CPU, reveals itself as an indexing error). Upon checking the size of the model and the tokenizer, I find they are different, and although I understand @patrickvonplaten's justification, I am not sure how to proceed in terms of replacing these tokens; the fact is that they are being selected during random sampling (even though they shouldn't be, since the model never learned them). The other troubling problem of having a model head larger than the vocab size is that, by definition, those tokens will still receive some probability mass.<|||||>@PastelBelem8 The model was never incentivized to predict those tokens, so the weights for the tokens with ids > len(tokenizer) will have extraordinarily low scores. I did a quick test and the scores for those extra tokens were each on the order of 1e-30, which is basically 0. Could you share your sampling approach?<|||||>Never mind, it was an error on my end! I apologize for the confusion! I thought I had tried everything and was desperate.
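To summarize the thread, a minimal sketch of how one might detect and remove the size mismatch is shown below; it assumes the current `transformers` API (`len(tokenizer)`, `get_input_embeddings`, `resize_token_embeddings`) and is only one possible way to handle it, with the GPU-friendly-size caveat discussed above.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

vocab_size = len(tokenizer)                                     # 32100 (32000 + 100 extra_ids)
embedding_size = model.get_input_embeddings().weight.shape[0]  # 32128

if embedding_size != vocab_size:
    print(f"tokenizer: {vocab_size}, embeddings: {embedding_size}")

    # Option 1: shrink the embeddings to match the tokenizer
    # (drops the 28 unused rows; gives up the multiple-of-128 size)
    model.resize_token_embeddings(vocab_size)

    # Option 2 (when adding new tokens, e.g. for prompt tuning): grow the
    # tokenizer first, then resize, so the new token ids map to trainable rows.
    # tokenizer.add_tokens(["<new_token>"])
    # model.resize_token_embeddings(len(tokenizer))
```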
transformers
4,874
closed
Split LMBert model in two
As discussed in #4711, the `BertForMaskedLM` model should be split in two to avoid having two different labels argument, one model for causal LM, one for masked LM. This PR follows up on that and does the split. It introduces a new `BertLMHeadModel` (also added to the `__init__` and the docs) with a test. As discussed, there is no deprecation warning if someone tries to use the `lm_labels` in `BertForMaskedLM` (since it was experimental), but an error message telling the user to use `BertLMHeadModel`. I did not add `BertLMHeadModel` in the automodel logic since we probably want users to use causal models for this? Let me know if I should add it even if it's not the best model for that task. I also removed `lm_labels` in the `EncoderDecoderModel` since it was only there to support that argument in `BertForMaskedLM` (which then removes the corresponding test).
06-09-2020 14:01:11
06-09-2020 14:01:11
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=h1) Report > Merging [#4874](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc&el=desc) will **decrease** coverage by `0.68%`. > The diff coverage is `88.88%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4874/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4874 +/- ## ========================================== - Coverage 77.11% 76.43% -0.69% ========================================== Files 128 128 Lines 21651 21671 +20 ========================================== - Hits 16697 16564 -133 - Misses 4954 5107 +153 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ø> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.17% <88.88%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.56% <0.00%> (-2.58%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.31% <0.00%> (-2.31%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.26% <0.00%> (-1.18%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.80% <0.00%> (-0.30%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=footer). Last update [3ae2e86...56a698f](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>It's possible to do it in a non-breaking way with a deprecation warning if a not-None `labels_lm` is passed to `BertForMaskedLM`. I was following the discussion of #4711 that implied it was okay to have a breaking change for this.<|||||>I'm fine with this PR. IMO, `BertForMaskedLM` was never really used before for causal language modeling except when using Bert in an encoder-decoder setting and the encoder-decoder code is not really released yet. Also since we keep the same names for the submodules `self.bert` and `self.cls`, there won't be any errors or inconsistencies when loading pre-trained weights into the Bert2Bert encoder-decoder. In my opinion, this change is necessary to have a clean separation between masked lm and causal lm (Reformer and Longformer will eventually run into the same issue). The heavily used `BertForMaskedLM` for the normal masked encoder bert model does not change at all except for `lm_labels`, so that's good in terms of backward compatibility. One thing which is problematic though is that the `MODEL_WITH_LM_HEAD_MAPPING` contains a mixture of causal models and masked encoder models at the moment: https://github.com/huggingface/transformers/blob/02e5f79662d72cccdca81a47e3001a5f6d36e5b1/src/transformers/modeling_auto.py#L187 Now since Bert has both a causal model and a masked encoder model we need two mappings. I would suggest here to create 2 new mappings `MODEL_FOR_MASKED_LM_MAPPING` and `MODEL_FOR_CAUSAL_LM_MAPPING` and two new AutoModels: `AutoModelForMaksedLM` , `AutoModelForCausalLM` and for now keep `AutoModelWithLMHead` as it is and add a depreciated warning to it. We can add `BertLMHeadModel` to `MODEL_FOR_CAUSAL_LM_MAPPING` and change to `AutoModelForCausalLM` in the encoder-decoder model. Also @thomwolf and @julien-c here<|||||>I agree with @patrickvonplaten on the need to split `AutoModelWithLMHead` in two. Note that if the name `AutoModelForCausalLM` is picked, we should then rename (with a deprecation first of course) all `ModeltypeLMHeadModel` to `ModeltypeForCausalLM` for consistency (and clarity since just saying it has an LM head doesn't tell us if it's intended to be masked or causal).<|||||>I agree that having two additional `AutoXXX` classes for the distinction between masked/causal would be nice. We should, however, keep the `AutoModelWithLMHead` class available for backwards compatibility. I don't agree with renaming all causal model with language modeling heads `XXXForCausalLM`. It would be more consistent, but is an aesthetic change with a very big breaking change. Even adding aliases to keep backwards compatibility would create a large overhead for the user, in my opinion, as all those classes would exist twice when importing from the library.<|||||>In that case I would advocate to keep `AutoModelWithLMHead` for causal language models and only add an `AutoModelForMaskedLM`. Consistency is cosmetic, I agree, but it also helps not confusing beginners.<|||||>1) For now, I think the best solution would be to keep `AutoModelForMaskedLM` as it is and add two new `AutoXXX` classes. The EncoderDecoderModel would be the first model to use `AutoModelForCausalLM` in its code. `AutoModelWithLMHead` is heavily used for all kinds of masked bert encoder models, so if we create an `AutoModelForMaskedLM` and move `BertForMaskedLM` there, we would have a lot of breaking change. I think we could add a depreciation warning to `AutoModelWithLMHead` though. 
2) I'm a bit indifferent to renaming all other model classes. While I'm also a big fan of consistency I agree with @LysandreJik in that I think it's a big user-facing API change that is not really urgent atm.<|||||>In the short term, I would advocate only exposing the classical "masked-lm" flavour of BERT through AutoModelWithLMHead (as is done in this PR), and not even documenting/adding BertLMHeadModel to the `__init__`, as it's only used as a building block to other models. In the longer term, I'd be ok with creating `AutoModelFor{Masked,Causal}LM` (name TBD for the second one) and not even creating a deprecation for `AutoModelWithLMHead`, forcing users to explicitly choose one or the other. This would need to be a major release though.<|||||>@julien-c as long as we do a major release for the AutoModel renaming, I'm all for this!<|||||>> In the short term, I would advocate only exposing the classical "masked-lm" flavour of BERT through AutoModelWithLMHead (as is done in this PR), and not even documenting/adding BertLMHeadModel to the `__init__`, as it's only used as a building block to other models. > > In the longer term, I'd be ok with creating `AutoModelFor{Masked,Causal}LM` (name TBD for the second one) and not even creating a deprecation for `AutoModelWithLMHead`, forcing users to explicitly choose one or the other. This would need to be a major release though. For the encoder decoder models, I think we need `BertLMHeadModel` in the `init` and we would also need a `AutoModelForCausalLM`. Here: https://github.com/huggingface/transformers/blob/29c36e9f3678702e5ffd3fe2f1c9f6c1d6672578/src/transformers/modeling_encoder_decoder.py#L160 we need to instantiate a `BertWithCausalLM`<|||||>I'm fine either way, I think you guys got all the important issues (backward compatibility versus cleanly building the future). I like what @patrickvonplaten and @julien-c are proposing.<|||||>Fixed conflicts and followed @julien-c advice. @LysandreJik or @patrickvonplaten, could you do one final review just to make sure everything is fine to merge?<|||||>This currently breaks the encoder-decoder framework `from_encoder_decoder_pretrained()` method. Will do a PR tomorrow to fix it.
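As a rough usage illustration of the split discussed in this thread, a hedged sketch is below; it assumes the post-split API (`BertLMHeadModel` with `is_decoder=True` for causal LM, `BertForMaskedLM` with `labels` for masked LM) rather than reproducing the exact code of the PR.

```python
from transformers import BertConfig, BertForMaskedLM, BertLMHeadModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")

# Masked LM: here we lazily reuse input_ids as labels just to get a loss;
# in practice labels hold the original tokens at masked positions and -100 elsewhere.
mlm = BertForMaskedLM.from_pretrained("bert-base-cased")
mlm_out = mlm(**inputs, labels=inputs["input_ids"])

# Causal LM: the config must mark the model as a decoder
config = BertConfig.from_pretrained("bert-base-cased", is_decoder=True)
clm = BertLMHeadModel.from_pretrained("bert-base-cased", config=config)
clm_out = clm(**inputs, labels=inputs["input_ids"])

print(float(mlm_out[0]), float(clm_out[0]))  # the two losses
```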
transformers
4,873
closed
KeyError in Camembert in QuestionAnsweringPipeline
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): `Camembert ("illuin/camembert-large-fquad")` Language I am using the model on (English, Chinese ...): `French` The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) Maybe related to this issue : https://github.com/huggingface/transformers/issues/4674 ## To reproduce Context file : [context_mono.txt](https://github.com/huggingface/transformers/files/4752416/context_mono.txt) ``` import torch from transformers import pipeline def analayse(): if torch.cuda.is_available() == True: print('GPU is available') device = 0 else: print('GPU is not available') device = -1 nlp_camembert_gpu_f = pipeline("question-answering", model='illuin/camembert-large-fquad', tokenizer='illuin/camembert-large-fquad', device=device) context = '' with open('context_mono.txt') as file: context_lines = [line for line in file] for line in context_lines: context += line answer_C = nlp_camembert_gpu_f(question='Le loyer est-il révisé annuellement ou triennalemment ?', context=context) def main_file(): analayse() if __name__ == '__main__': main_file() ``` ## Error trace ``` Traceback (most recent call last): File "qa_bug.py", line 26, in <module> main_file() File "qa_bug.py", line 23, in main_file analayse() File "qa_bug.py", line 20, in analayse answer_C = nlp_camembert_gpu_f(question='Le loyer est-il révisé annuellement ou triennalemment ?', context=context) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py", line 1229, in __call__ for s, e, score in zip(starts, ends, scores) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py", line 1229, in <listcomp> for s, e, score in zip(starts, ends, scores) KeyError: 377 ``` ## Expected behavior Getting an answer. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Kernel: 5.3.0-1019-aws x86_64 Distro: Ubuntu 18.04.4 LTS - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): (False) - Using GPU in script?: Yes (same problem on CPU) - Using distributed or parallel set-up in script?: No
06-09-2020 13:31:41
06-09-2020 13:31:41
Thank you for your fast answer. Your patch seems to fix some of the KeyErrors, but not all of them. Here is an example of a context and a question that still raises it: [context2.txt](https://github.com/huggingface/transformers/files/4758844/context2.txt) Question: Quel est l'étage se situe les locaux ?<|||||>Indeed, this PR was not the correct fix so I closed it. Will open a new one soon.<|||||>Just for your information, your patch also returns an empty response string, but with the right location in the context. Example: [context_empty_response.txt](https://github.com/huggingface/transformers/files/4764895/context_empty_response.txt) Question:
```
Quel est la taille en mètres carrés des locaux ?
```
Thanks for your help.<|||||>@LysandreJik Do you have an update on this? Can I help you in any way?<|||||>After some research, it appears that my problem came from the fact that I was using a model trained with `max_seq_length` set to 512, but was using the pipeline with this variable set to the default of 384.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
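Following up on that resolution, a hedged sketch of aligning the pipeline with the model's training configuration is shown below; it assumes the question-answering pipeline accepts `max_seq_len` and `doc_stride` at call time, which may differ across `transformers` versions.

```python
from transformers import pipeline

nlp = pipeline(
    "question-answering",
    model="illuin/camembert-large-fquad",
    tokenizer="illuin/camembert-large-fquad",
)

context = open("context_mono.txt", encoding="utf-8").read()

# The model was fine-tuned with max_seq_length=512, so pass the same value
# instead of relying on the pipeline default (384).
answer = nlp(
    question="Le loyer est-il révisé annuellement ou triennalement ?",
    context=context,
    max_seq_len=512,
    doc_stride=128,
)
print(answer)
```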
transformers
4,872
closed
Create README.md
06-09-2020 13:30:25
06-09-2020 13:30:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=h1) Report > Merging [#4872](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4872/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4872 +/- ## ========================================== + Coverage 76.55% 76.56% +0.01% ========================================== Files 128 128 Lines 21502 21502 ========================================== + Hits 16461 16464 +3 + Misses 5041 5038 -3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (-0.32%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=footer). Last update [9f5d5a5...aad9cb1](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,871
closed
Create README.md for gpt-2-pubmed-medium
06-09-2020 13:26:13
06-09-2020 13:26:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=h1) Report > Merging [#4871](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4871/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4871 +/- ## ======================================= Coverage 76.55% 76.56% ======================================= Files 128 128 Lines 21502 21502 ======================================= + Hits 16461 16462 +1 + Misses 5041 5040 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4871/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <0.00%> (-0.48%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4871/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=footer). Last update [9f5d5a5...40bc44a](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>cc @LysandreJik, you're going to like this:)
transformers
4,870
closed
readme change
06-09-2020 12:35:15
06-09-2020 12:35:15
transformers
4,869
closed
parse arguments from dict
This PR adds a `parse_dict` method to `HfArgumentParser` to allow parsing arguments from a dict. @julien-c As you suggested in #4791, I've added a simple unit test to check that the dataclass returned by `parse_dict` is the same as a manually initialised one.
06-09-2020 12:14:44
06-09-2020 12:14:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=h1) Report > Merging [#4869](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4869/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4869 +/- ## ========================================== + Coverage 76.55% 76.57% +0.01% ========================================== Files 128 128 Lines 21502 21510 +8 ========================================== + Hits 16461 16471 +10 + Misses 5041 5039 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/hf\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/4869/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `69.23% <100.00%> (+2.96%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4869/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <0.00%> (-0.48%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4869/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4869/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=footer). Last update [9f5d5a5...4461f41](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @LysandreJik , what do you think about this ? If it's not really necessary, I will close the PR. Thanks!
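For reference, a hedged usage sketch of the method this PR adds might look like the following; the dataclass and field names are made up for illustration.

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class RunArgs:
    model_name: str = field(default="bert-base-cased")
    learning_rate: float = field(default=5e-5)
    do_train: bool = field(default=False)

parser = HfArgumentParser(RunArgs)

# Instead of parsing sys.argv, feed a plain dict (e.g. loaded from a config file)
(args,) = parser.parse_dict(
    {"model_name": "roberta-base", "learning_rate": 3e-5, "do_train": True}
)
print(args)
```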
transformers
4,868
closed
tokenizer.encode_plus stopped returning `attention_mask` and pad_to_max_length
# 🐛 Bug tokenizer.encode_plus stopped returning `attention_mask` and pad_to_max_length ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: my own modified scripts: (give details below) The tasks I am working on is: my own task or dataset: (give details below) ## To reproduce import torch import pandas as pd # If there's a GPU available... if torch.cuda.is_available(): # Tell PyTorch to use the GPU. device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) # If not... else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") # Load the dataset into a pandas dataframe. df = pd.read_csv("/home/shikhar_singla/Downloads/cola_public/raw/in_domain_train.tsv", delimiter='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence']) # Report the number of sentences. print('Number of training sentences: {:,}\n'.format(df.shape[0])) # Display 10 random rows from the data. df.sample(10) df.loc[df.label == 0].sample(5)[['sentence', 'label']] # Get the lists of sentences and their labels. sentences = df.sentence.values labels = df.label.values from transformers import BertTokenizer # Load the BERT tokenizer. print('Loading BERT tokenizer...') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) # Print the original sentence. print(' Original: ', sentences[0]) # Print the sentence split into tokens. print('Tokenized: ', tokenizer.tokenize(sentences[0])) # Print the sentence mapped to token ids. print('Token IDs: ', tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentences[0]))) max_len = 0 # For every sentence... for sent in sentences: # Tokenize the text and add `[CLS]` and `[SEP]` tokens. input_ids = tokenizer.encode(sent, add_special_tokens=True) # Update the maximum sentence length. max_len = max(max_len, len(input_ids)) print('Max sentence length: ', max_len) # Tokenize all of the sentences and map the tokens to thier word IDs. input_ids = [] attention_masks = [] # For every sentence... for sent in sentences: # `encode_plus` will: # (1) Tokenize the sentence. # (2) Prepend the `[CLS]` token to the start. # (3) Append the `[SEP]` token to the end. # (4) Map tokens to their IDs. # (5) Pad or truncate the sentence to `max_length` # (6) Create attention masks for [PAD] tokens. encoded_dict = tokenizer.encode_plus( sent, # Sentence to encode. add_special_tokens = True, # Add '[CLS]' and '[SEP]' max_length = 64, # Pad & truncate all sentences. pad_to_max_length = True, return_attention_mask = True, # Construct attn. masks. return_tensors = 'pt', # Return pytorch tensors. ) # Add the encoded sentence to the list. input_ids.append(encoded_dict['input_ids']) # And its attention mask (simply differentiates padding from non-padding). attention_masks.append(encoded_dict['attention_mask']) # Convert the lists into tensors. input_ids = torch.cat(input_ids, dim=0) attention_masks = torch.cat(attention_masks, dim=0) labels = torch.tensor(labels) # Print sentence 0, now as a list of IDs. print('Original: ', sentences[0]) print('Token IDs:', input_ids[0]) There are 1 GPU(s) available. We will use the GPU: GeForce RTX 2080 Ti Number of training sentences: 8,551 To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html Loading BERT tokenizer... 
Original: Our friends won't buy this analysis, let alone the next one we propose. Tokenized: ['our', 'friends', 'won', "'", 't', 'buy', 'this', 'analysis', ',', 'let', 'alone', 'the', 'next', 'one', 'we', 'propose', '.'] Token IDs: [2256, 2814, 2180, 1005, 1056, 4965, 2023, 4106, 1010, 2292, 2894, 1996, 2279, 2028, 2057, 16599, 1012] Max sentence length: 47 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-1-e10b3c7561a8> in <module> 72 # (5) Pad or truncate the sentence to `max_length` 73 # (6) Create attention masks for [PAD] tokens. ---> 74 encoded_dict = tokenizer.encode_plus( 75 sent, # Sentence to encode. 76 add_special_tokens = True, # Add '[CLS]' and '[SEP]' ~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in encode_plus(self, text, text_pair, add_special_tokens, max_length, stride, truncation_strategy, return_tensors, **kwargs) 784 raise ValueError("Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.") 785 --> 786 first_ids = get_input_ids(text) 787 second_ids = get_input_ids(text_pair) if text_pair is not None else None 788 ~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in get_input_ids(text) 776 def get_input_ids(text): 777 if isinstance(text, six.string_types): --> 778 return self.convert_tokens_to_ids(self.tokenize(text, **kwargs)) 779 elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], six.string_types): 780 return self.convert_tokens_to_ids(text) ~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in tokenize(self, text, **kwargs) 647 648 added_tokens = list(self.added_tokens_encoder.keys()) + self.all_special_tokens --> 649 tokenized_text = split_on_tokens(added_tokens, text) 650 return tokenized_text 651 ~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in split_on_tokens(tok_list, text) 642 text_list = tokenized_text 643 --> 644 return sum((self._tokenize(token, **kwargs) if token not \ 645 in self.added_tokens_encoder and token not in self.all_special_tokens \ 646 else [token] for token in tokenized_text), []) ~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in <genexpr>(.0) 642 text_list = tokenized_text 643 --> 644 return sum((self._tokenize(token, **kwargs) if token not \ 645 in self.added_tokens_encoder and token not in self.all_special_tokens \ 646 else [token] for token in tokenized_text), []) TypeError: _tokenize() got an unexpected keyword argument 'pad_to_max_length' ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.1.1 - Platform: Ubuntu 20.04 - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
06-09-2020 12:02:10
06-09-2020 12:02:10
Hello! You're using `transformers` version 2.1.1, which didn't have all of these features, as you can see in the [documentation of version 2.1.1](https://huggingface.co/transformers/v2.1.1/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus). I would recommend upgrading your `transformers` version to the latest one to have access to all features!
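As a concrete follow-up to that answer, a hedged sketch for a recent `transformers` release is below; note that newer versions replace `pad_to_max_length=True` with `padding="max_length"` plus `truncation=True`.

```python
# pip install -U transformers
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)

encoded = tokenizer.encode_plus(
    "Our friends won't buy this analysis, let alone the next one we propose.",
    add_special_tokens=True,
    max_length=64,
    padding="max_length",   # replaces pad_to_max_length=True in newer versions
    truncation=True,
    return_attention_mask=True,
    return_tensors="pt",
)
print(encoded["input_ids"].shape, encoded["attention_mask"].shape)
```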
transformers
4,867
closed
run_pplm.py bug fix
`is_leaf` may become `False` after `.to(device=device)` function call.
06-09-2020 12:01:19
06-09-2020 12:01:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=h1) Report > Merging [#4867](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4867/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4867 +/- ## ======================================= Coverage 76.55% 76.56% ======================================= Files 128 128 Lines 21502 21502 ======================================= + Hits 16461 16462 +1 + Misses 5041 5040 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <0.00%> (-0.48%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=footer). Last update [9f5d5a5...26de790](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@songyouwei run_pplm.py Can you follow the Readme steps to success run it? And I report this error? Could you teach me? ![image](https://user-images.githubusercontent.com/49581245/84686252-334a6b80-af6e-11ea-86c5-b973424ac8e7.png)  
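To make the one-line fix description concrete, here is a small PyTorch demonstration of the `is_leaf` behaviour referenced above; it is a generic illustration, independent of run_pplm.py itself.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

p = torch.randn(3, requires_grad=True)
print(p.is_leaf)       # True: created directly by the user

moved = p.to(device=device)
print(moved.is_leaf)   # False when actually moved to CUDA: .to() is a differentiable op,
                       # so the result is a new, non-leaf tensor

# One way to keep a leaf tensor with gradients on the target device:
q = torch.randn(3, device=device, requires_grad=True)
print(q.is_leaf)       # True
```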
transformers
4,866
closed
Funnel Transformers
# 🌟 New model addition Funnel-Transformer ## Model description Funnel-Transformer is a new self-attention model that gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, Funnel-Transformer usually has a higher capacity given the same FLOPs. In addition, with a decoder, Funnel-Transformer is able to recover the token-level deep representation for each token from the reduced hidden sequence, which enables standard pretraining. ## Open source status Released. * [x] the model implementation is available: https://github.com/laiguokun/Funnel-Transformer * [x] the model weights are available: https://github.com/laiguokun/Funnel-Transformer * [x] who are the authors: Zihang Dai*, Guokun Lai*, Yiming Yang, Quoc V. Le
06-09-2020 12:01:03
06-09-2020 12:01:03
Duplicate of #4844?<|||||>my bad. yes
transformers
4,865
closed
Create README.md
06-09-2020 11:34:05
06-09-2020 11:34:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=h1) Report > Merging [#4865](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4865/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4865 +/- ## ========================================== - Coverage 76.55% 76.54% -0.01% ========================================== Files 128 128 Lines 21502 21502 ========================================== - Hits 16461 16459 -2 - Misses 5041 5043 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4865/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.58% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4865/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (-0.32%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4865/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=footer). Last update [9f5d5a5...c1b9024](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,864
closed
Adding 🤗nlp in the examples
This PR examines how to best make use of all the features of 🤗nlp in the examples. The first example studied is GLUE. The main goal is to have very explicit data processing (target: no data processing happening inside `transformers`) as well as to add some efficiency features like dynamic batching. The second goal is to make this a lot more efficient, fast, and reproducible.
06-09-2020 10:40:09
06-09-2020 10:40:09
Closing in favor of #5240
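As a rough illustration of the explicit data processing this PR aims for, a hedged sketch using the 🤗nlp library (since renamed `datasets`) is below; dataset, column, and argument names are illustrative, not taken from the PR's actual code.

```python
import nlp  # the library this PR experiments with (later renamed `datasets`)
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
dataset = nlp.load_dataset("glue", "mrpc", split="train")

def encode(example):
    return tokenizer.encode_plus(
        example["sentence1"],
        example["sentence2"],
        max_length=128,
        pad_to_max_length=True,
    )

# All preprocessing is explicit and cached by the library; nothing is hidden inside transformers
dataset = dataset.map(encode)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])
print(dataset[0]["input_ids"].shape)
```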
transformers
4,863
closed
how to train mask model e.g Bert using WordPieceToken
# ❓ Questions & Help When I trained a new tokenizer with WordPiece, it generated a single vocab.txt file. It couldn't be loaded in train-language-model.py, since the source code there uses a byte-level BPE tokenizer. Is there an out-of-the-box module that could save me a lot of time instead of doing this myself?
06-09-2020 10:05:22
06-09-2020 10:05:22
Hi! Can you load your tokenizer using

```py
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained(directory_containing_vocab_txt)
```

?<|||||>Hi LysandreJik, great! It works. May I ask another question: do you have any experience with the loss value? Generally, what value would be appropriate? My loss is around 1.12; is that OK?<|||||>This really depends on your training set, what model you use, what checkpoint you use, etc.
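Building on the answer above, a hedged sketch of plugging such a WordPiece vocab into masked-LM training could look like the following; the config values and paths are placeholders, and a real run would mask tokens (e.g. with `DataCollatorForLanguageModeling`) rather than reuse the unmasked ids as labels.

```python
from transformers import BertConfig, BertForMaskedLM, BertTokenizer

# Directory containing the vocab.txt produced by the WordPiece trainer
tokenizer = BertTokenizer.from_pretrained("path/to/vocab_dir")

config = BertConfig(vocab_size=tokenizer.vocab_size)  # adjust hidden sizes/layers as needed
model = BertForMaskedLM(config)

# Toy step: in practice, mask tokens and set labels to -100 at unmasked positions
inputs = tokenizer("some training text", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs[0]
loss.backward()
```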
transformers
4,862
closed
how to extract several layers of BERT or GPT as a new model?
How can I, for example, extract 8 layers from the 12 BertLayers of _bert-base-uncased_ to form a new model? I want to use the _embedding_ and _pooler_ layers of the original model, but only a portion of the _encoder_ layers.
06-09-2020 09:21:07
06-09-2020 09:21:07
Interesting use-case! The easiest way would be to simply load to models, one with the `bert-base-cased` checkpoint, the other randomly initialized, and to assign trained layers to the new model. Something like this: ```py from transformers import BertModel, BertConfig import torch bert_base_cased = BertModel.from_pretrained("bert-base-cased") # Instantiate model using the trained weights model = BertModel(BertConfig.from_pretrained("bert-base-cased")) # Randomly initialize model, with the same size as the trained model layers_to_replace = [1, 2, 3, 8] for layer in layers_to_replace: model.base_model.encoder.layer[layer] = bert_base_cased.base_model.encoder.layer[layer] # Let's compare the key values of the attention layers to make sure they're the same i = 0 for original_layer, new_layer in zip(model.base_model.encoder.layer, bert_base_cased.base_model.encoder.layer): original_attention_key = original_layer.attention.self.key.weight new_attention_key = new_layer.attention.self.key.weight difference = (torch.max(torch.abs(original_attention_key - new_attention_key)).item()) print(f"Layers {i} are {'not ' if difference else ''}the same.") i += 1 ``` This outputs: ``` Layers 0 are not the same. Layers 1 are the same. Layers 2 are the same. Layers 3 are the same. Layers 4 are not the same. Layers 5 are not the same. Layers 6 are not the same. Layers 7 are not the same. Layers 8 are the same. Layers 9 are not the same. Layers 10 are not the same. Layers 11 are not the same. ```<|||||>Thanks, but I find this does not work for GPT2LMHeadModel. How could I extract the hidden layers of a GPT2LMHeadModel please?<|||||>You can do the same, but the layers are under `model.base_model.h`: ```py [...] for layer in layers_to_replace: model.base_model.h[layer] = [...] [...] ```<|||||>Thanks @LysandreJik ! I am wondering how you would do this in the keras versions. From tinkering around, I think you access the layers with `model.layers[0].encoder.layer`, since the length of this is 12, so I'm guessing it's for the 12 layers in the Bert model. So you would do something like ``` layers_to_replace = [1, 2, 3, 8] for layer in layers_to_replace: newModel.layers[0].encoder.layer[layer] = trainedModel.layers[0].encoder.layer[layer] ``` Does that seem right to you?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@LysandreJik This solution randomly initialize Embedding layer's weight rather than load from bert pretrained Embedding,which leads to a huge performance decline and confuses me for a week. 
The correct solution is: ```python from transformers import BertModel, BertConfig import torch bert_version = "bert-base-cased" bert_base_cased = BertModel.from_pretrained(bert_version) # Instantiate model using the trained weights config = BertConfig.from_pretrained(bert_version) model = BertModel(config=config) # Randomly initialize model, with the same size as the trained model # add these two lines model.embeddings = bert_base_cased.embeddings model.pooler = bert_base_cased.pooler layers_to_replace = [1, 2, 3, 8] for layer in layers_to_replace: model.base_model.encoder.layer[layer] = bert_base_cased.base_model.encoder.layer[layer] ``` also,if you just want the first 4 layers, the easier and safer way is: ```python from transformers import BertModel, BertConfig import torch bert_version = "bert-base-cased" bert_base_cased = BertModel.from_pretrained(bert_version) # Instantiate model using the trained weights config = BertConfig.from_pretrained(bert_version) config.num_hidden_layers = 4 model = BertModel.from_pretrained(bert_version, config=config) # auto skip unused layers for param_name in model.state_dict(): sub_param, full_param = model.state_dict()[param_name], bert_base_cased.state_dict()[param_name] # type: torch.Tensor, torch.Tensor assert (sub_param.cpu().numpy() == full_param.cpu().numpy()).all(), param_name ```<|||||>@dalek-who hey I tried to run your code before my other code to construct the model (and this is all in Sagemaker), however got this error: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Here's the code I ran after your code: ``` class BERTClass(torch.nn.Module): def __init__(self): super(BERTClass, self).__init__() self.bert_model = model self.dropout = torch.nn.Dropout(0.5) self.linear = torch.nn.Linear(768, 9) def forward(self, input_ids, attn_mask, token_type_ids): output = self.bert_model( input_ids, attention_mask=attn_mask, token_type_ids=token_type_ids ) output_dropout = self.dropout(output.pooler_output) output = self.linear(output_dropout) return output bert_model = BERTClass() bert_model.to(device) ``` Anyone has any idea why?<|||||>@Bambry When do you get this error? On construct the model, or on forward? `CUDA error: device-side assert triggered` often occurs when a layer receives illegal inputs, for example a `BCELoss` receives a illegal label `3`. Maybe you should check your tensor and parameter's shape, value or dtype.
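For the GPT-2 side of the question, a hedged sketch along the same lines (using the `n_layer` config field and the `model.base_model.h` block list mentioned earlier in the thread) might be:

```python
from transformers import GPT2Config, GPT2LMHeadModel

full = GPT2LMHeadModel.from_pretrained("gpt2")

# Keep only the first 6 of the 12 transformer blocks;
# from_pretrained simply skips the weights of the missing blocks.
config = GPT2Config.from_pretrained("gpt2", n_layer=6)
small = GPT2LMHeadModel.from_pretrained("gpt2", config=config)

# Alternatively, copy arbitrary blocks by hand, as in the BERT example above
for i, layer in enumerate([0, 1, 2, 7]):
    small.base_model.h[i] = full.base_model.h[layer]
```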
transformers
4,861
closed
can anyone tell me how to do the pretraining of Reformer model on my text data?
# ❓ Questions & Help
06-09-2020 08:46:24
06-09-2020 08:46:24
```python
from transformers import ReformerModelWithLMHead, ReformerConfig

config = ReformerConfig()  # define the config as you like
model = ReformerModelWithLMHead(config)

outputs = model(input_ids, labels=input_ids)  # labels are shifted automatically inside the model
loss = outputs[0]  # => train on this loss
```

All this can also be done using the Trainer. See this notebook for example: https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb
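For completeness, here is a hedged, self-contained variant of the snippet above with a toy configuration; the values `axial_pos_shape=(8, 8)` and `is_decoder=True` are assumptions chosen so a 64-token toy batch satisfies Reformer's axial-position constraints, not values from the original answer. For the Trainer-based route, the linked notebook is the reference.

```python
import torch
from transformers import ReformerConfig, ReformerModelWithLMHead

# Toy config: during training the sequence length must equal the product of axial_pos_shape (8 * 8 = 64)
config = ReformerConfig(axial_pos_shape=(8, 8), is_decoder=True)
model = ReformerModelWithLMHead(config)

input_ids = torch.randint(0, config.vocab_size, (2, 64))
outputs = model(input_ids, labels=input_ids)  # labels are shifted internally
loss = outputs[0]
loss.backward()
print(float(loss))
```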
transformers
4,860
closed
ROUGE_L score of summarization/t5 is very lower than that of paper.
# 🐛 Bug ## Information I am trying to use summarization/t5 from the examples. The ROUGE_1 and ROUGE_2 scores are almost equal to those of Google's paper, but ROUGE_L alone is very low!

```
ROUGE_1: paper=41.12 | my result=40.48 (almost equal)
ROUGE_2: paper=19.56 | my result=18.59 (almost equal)
ROUGE_L: paper=38.35 | my result=28.22 (very low ?)
```

Model I am using (Bert, XLNet ...): T5 Language I am using the model on (English, Chinese ...): English ## Environment info - Platform: CentOS 7 (64bit) - Python version: 3.7
06-09-2020 07:21:06
06-09-2020 07:21:06
So, I investigated the google's code, then I found: https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/evaluation/metrics.py#L76 I think that they uses not "rougeL" but "rougeLsum". And also, they says: "# Add newlines between sentences so that rougeLsum is computed correctly." https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/evaluation/metrics.py#L82 So, I tried the following hacks: ``` $ g log -p commit 11bd4a086438b100c47e5e2b7e8696fcd67e94d1 Author: Takahiro Ito <[email protected]> Date: Tue Jun 9 14:35:19 2020 +0900 スコア計算の不具合を修正 diff --git a/examples/summarization/t5/evaluate_cnn.py b/examples/summarization/t5/evaluate_cnn.py index d2d6ee9..e1db944 100644 --- a/examples/summarization/t5/evaluate_cnn.py +++ b/examples/summarization/t5/evaluate_cnn.py @@ -44,17 +44,27 @@ def generate_summaries(lns, output_file_path, model_size, batch_size, device): def calculate_rouge(output_lns, reference_lns, score_path): score_file = Path(score_path).open("w") - scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True) + scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL", "rougeLsum"], use_stemmer=True) aggregator = scoring.BootstrapAggregator() + # copy from + # https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/evaluation/metrics.py#L80 + def _prepare_summary(summary): + # Make sure the summary is not bytes-type + # Add newlines between sentences so that rougeLsum is computed correctly. + summary = summary.replace(" . ", " .\n") + return summary + for reference_ln, output_ln in zip(reference_lns, output_lns): + reference_ln = _prepare_summary(reference_ln) + output_ln = _prepare_summary(output_ln) scores = scorer.score(reference_ln, output_ln) aggregator.add_scores(scores) result = aggregator.aggregate() score_file.write( - "ROUGE_1: \n{} \n\n ROUGE_2: \n{} \n\n ROUGE_L: \n{} \n\n".format( - result["rouge1"], result["rouge2"], result["rougeL"] + "ROUGE_1: \n{} \n\n ROUGE_2: \n{} \n\n ROUGE_L: \n{} \n\n ROUGE_Lsum: \n{} \n\n".format( + result["rouge1"], result["rouge2"], result["rougeL"], result["rougeLsum"] ) ) ``` , and I got a score (37.94), near paper score. Note that: the above my code shows both "rougeL" and "rougeLsum". Question: Why don't your code use "rougeLsum" ? https://github.com/huggingface/transformers/blob/master/examples/summarization/t5/evaluate_cnn.py#L47 I'm sorry, I'm not good at English. I hope some kind people fix this and create PR, thanks. Best,<|||||>P.S. the above hack is based on 41a1d27cdefd6417c298518198f99e3b8431a5c0: ``` $ gglv * commit 11bd4a086438b100c47e5e2b7e8696fcd67e94d1 (HEAD, master) | Author: Takahiro Ito <[email protected]> | Date: Tue Jun 9 14:35:19 2020 +0900 | | スコア計算の不具合を修正 | * commit 41a1d27cdefd6417c298518198f99e3b8431a5c0 (origin/master, origin/HEAD) | Author: Sylvain Gugger <[email protected]> | Date: Mon Jun 8 21:22:37 2020 -0400 ``` <|||||>Sorry, I accidentally closed issue ... <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
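Distilled from the patch above, a hedged standalone snippet for computing ROUGE-Lsum with the `rouge_score` package could look like this; the toy reference/prediction strings are made up, and the key point is that sentences must be newline-separated for rougeLsum, as in the T5 codebase.

```python
from rouge_score import rouge_scorer, scoring

def prepare_summary(summary):
    # rougeLsum expects one sentence per line
    return summary.replace(" . ", " .\n")

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)
aggregator = scoring.BootstrapAggregator()

references = ["the cat sat on the mat . it was a sunny day ."]
predictions = ["a cat sat on a mat . the day was sunny ."]

for ref, pred in zip(references, predictions):
    aggregator.add_scores(scorer.score(prepare_summary(ref), prepare_summary(pred)))

result = aggregator.aggregate()
print(result["rougeLsum"].mid.fmeasure)
```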
transformers
4,859
closed
Memory issues in Transformers
# ❓ Questions & Help
06-09-2020 04:48:49
06-09-2020 04:48:49
transformers
4,858
closed
Add support for DeBERTa
# 🌟 New model addition ## Model description DeBERTa (Decoding-enhanced BERT with disentangled attention) is a new model architecture: > In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pre-training and performance of downstream tasks. The paper can be found [here](https://arxiv.org/abs/2006.03654). ## Open source status * [x] the model implementation is available: [GitHub](https://github.com/microsoft/DeBERTa) * [x] the model weights are available: [GitHub release](https://github.com/microsoft/DeBERTa/releases/tag/v0.1) * [ ] who are the authors: @BigBird01
06-09-2020 01:23:46
06-09-2020 01:23:46
Hello, our code has just been released at [DeBERTa](https://github.com/microsoft/DeBERTa). Please give it a try; your feedback will help us improve it, and we welcome the community to work together with us on improving it. We would also be glad to integrate DeBERTa into transformers. <|||||>PR Add deberta model #5929 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Unstale - very close to merge!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,857
closed
sentencepiece==0.1.92 causing segmentation fault
# 🐛 Bug ## Information `transformers==2.9.1` `torch==1.4.0` Starting today, I noticed that the newly released `sentencepiece==0.1.92` causes a segmentation fault when calling torch functions. Downgrading to `sentencepiece==0.1.91` solves it.
06-09-2020 00:19:26
06-09-2020 00:19:26
@boy2000-007man Hi, folks. Just curious how you found this bug? It cost me almost the whole day... Anyway, thank you so much!<|||||>OMG!!! Awesome advice!!!! <|||||>I spent a whole night addressing the dependency problems and almost lost my mind. This answer saved my life. Much appreciated!<|||||>Thanks for this! Also curious how you worked this out - I've spent a whole day trying to figure this out!<|||||>Thanks so much, you saved my day.<|||||>I was dreading the thought of having to dive into this issue with faulthandler and meticulously cross-referencing dependencies with a working version... but this post just saved my night. Thanks @boy2000-007man This seems like a new pytorch v1.4.0 incompatibility issue with the latest huggingface releases. I'm assuming this may have been missed due to the focus on v1.5.0 support, but it seems like many people cannot make the jump to cuda 10.2/pytorch 1.5.0 currently, so this seems like a pretty big headache that should be addressed.<|||||>Closing this as solved by #5418<|||||>You are excellent!<|||||>Same problem when using sentencepiece==0.1.94<|||||>Having the same problem with sentencepiece==0.1.94<|||||>Cf #8199 we will remove the hard dependency on sentencepiece (replaced by the `tokenizers` library) in a coming release, probably end of next week.<|||||>Thank you a lot! You saved my day!
transformers
4,856
closed
Tensorflow Glue example script for finetuning not usable with DistilBert
# 🐛 Bug ## Information Model I am using: DistilBert ``` python run_glue.py \ --model_name_or_path distilbert-base-cased \ --task_name MRPC\ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/distilbert/ ``` Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Running the example as above, the script gives me the following error: ``` Traceback (most recent call last): File "run_glue.py", line 229, in <module> main() File "run_glue.py", line 199, in main compute_metrics=compute_metrics, File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/transformers/trainer_tf.py", line 48, in __init__ self._setup_training() File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/transformers/trainer_tf.py", line 58, in _setup_training self._prepare_dataset() File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/transformers/trainer_tf.py", line 95, in _prepare_dataset self.num_train_examples = self.train_dataset.reduce(tf.constant(0), lambda x, _: x + 1).numpy() File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 1934, in reduce output_types=structure.get_flat_tensor_types(state_structure))) File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_dataset_ops.py", line 4661, in reduce_dataset _ops.raise_from_not_ok_status(e, name) File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 6606, in raise_from_not_ok_status six.raise_from(core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: `generator` yielded an element that could not be converted to the expected type. The expected type was int32, but the yielded element was None. Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 805, in generator_py_func ret, dtype=dtype.as_numpy_dtype)) File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/ops/script_ops.py", line 196, in _convert result = np.asarray(value, dtype=dtype, order="C") File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/numpy/core/_asarray.py", line 85, in asarray return array(a, dtype, copy=False, order=order) TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/ops/script_ops.py", line 236, in __call__ ret = func(*args) File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 810, in generator_py_func "element was %s." 
% (dtype.name, ret)), sys.exc_info()[2]) File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/six.py", line 702, in reraise raise value.with_traceback(tb) File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 805, in generator_py_func ret, dtype=dtype.as_numpy_dtype)) File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/ops/script_ops.py", line 196, in _convert result = np.asarray(value, dtype=dtype, order="C") File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/numpy/core/_asarray.py", line 85, in asarray return array(a, dtype, copy=False, order=order) TypeError: `generator` yielded an element that could not be converted to the expected type. The expected type was int32, but the yielded element was None. [[{{node PyFunc}}]] [Op:ReduceDataset] ``` The tasks I am working on is: * [x] an official GLUE/SQUaD task: MRPC * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. clone repo 2. run command from above using `examples/text-classification/run_tf_glue.py` ## Expected behavior Fine tuning works on Distilbert too, not only on Bert. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-5.3.0-1017-aws-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.1.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
06-08-2020 22:37:01
06-08-2020 22:37:01
Hello! Indeed, it is a bug in the way the TensorFlow dataset is generated. A fix is on its way :)
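Until that fix landed, one hedged workaround (my own sketch, not the official script or the eventual fix) was to build the `tf.data` pipeline directly from the tokenizer output; DistilBERT produces no `token_type_ids`, so no `None` fields end up in the generator:

```python
import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased")
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-cased")

# Toy MRPC-style sentence pairs and labels, for illustration only.
pairs = [("He said hello.", "He greeted them."), ("It rained all day.", "The sky was clear.")]
labels = [1, 0]

enc = tokenizer.batch_encode_plus(pairs, max_length=128, pad_to_max_length=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
```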
transformers
4,855
closed
Add XLMRobertaForQuestionAnswering
One of the missing [model task](https://github.com/huggingface/transformers/projects/17).
06-08-2020 22:02:49
06-08-2020 22:02:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=h1) Report > Merging [#4855](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a139d1a1602ee72ca98d5e0412efbd68f746d2c8&el=desc) will **increase** coverage by `2.61%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4855/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4855 +/- ## ========================================== + Coverage 73.93% 76.54% +2.61% ========================================== Files 128 128 Lines 21498 21501 +3 ========================================== + Hits 15894 16458 +564 + Misses 5604 5043 -561 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.60% <ø> (ø)` | | | [src/transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.58% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (+0.15%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.26% <0.00%> (+1.42%)` | :arrow_up: | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.19% <0.00%> (+72.36%)` | :arrow_up: | | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <0.00%> (+78.94%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=footer). Last update [a139d1a...b2b4f9c](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,854
closed
Hans data
This is the first step toward solving #4742: to be able to use the Trainer API, we first need to remove the TensorDataset and have datasets with dict items. This PR addresses that and updates the training and evaluation script accordingly. It takes the multiple choice as a reference implementation, using the same file structure (hence the removal of "hans_processor.py") and implements in "utils_hans.py": - a `HansDataset` and a `TFHansDataset` that implement the logic of the old method `load_and_cache_examples` - a `HansProcessor` (copied from before) - a `hans_convert_examples_to_features` with the same logic as before but using the tokenizer method for padding instead of re-implementing it. Side question: it doesn't look like the `TFMultipleChoiceDataset` I use as a reference for this implementation uses caching, maybe it should be added in the future?
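For readers unfamiliar with the target format, here is a rough illustrative sketch (not the PR's actual `HansDataset` code) of a dataset whose items are dicts of tensors rather than a `TensorDataset` of positional tuples:

```python
import torch
from torch.utils.data import Dataset

class DictItemDataset(Dataset):
    """Toy stand-in: each item is a dict of tensors keyed by model argument name,
    which is the kind of item the Trainer pipeline can consume directly."""

    def __init__(self, encodings, labels):
        self.encodings = encodings  # e.g. {"input_ids": [...], "attention_mask": [...]}
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {key: torch.tensor(values[i]) for key, values in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item
```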
06-08-2020 21:24:49
06-08-2020 21:24:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=h1) Report > Merging [#4854](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4854/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4854 +/- ## ======================================= Coverage 77.26% 77.26% ======================================= Files 128 128 Lines 21851 21851 ======================================= Hits 16884 16884 Misses 4967 4967 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=footer). Last update [ca5e1cd...a58291b](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,853
closed
Remove unused arguments in Multiple Choice example
In the dataset preparation, the arguments `pad_token_segment_id`, `pad_on_left`, `pad_token` and `mask_padding_with_zero` are inferred from the tokenizer to be sent to `convert_examples_to_features` which then does not use them (since `tokenizer.encode_plus` does all of this using the tokenizer state). This PR cleans that up (and removes the TODO).
06-08-2020 18:56:42
06-08-2020 18:56:42
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=h1) Report > Merging [#4853](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/37be3786cf1de9d21233f543c231866e68954998&el=desc) will **increase** coverage by `0.14%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4853/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4853 +/- ## ========================================== + Coverage 76.40% 76.54% +0.14% ========================================== Files 128 128 Lines 21533 21533 ========================================== + Hits 16452 16483 +31 + Misses 5081 5050 -31 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (+10.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=footer). Last update [37be378...9e1b14a](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Also, should the `DataProcessor` in this file simply use the one in `transformers`?<|||||>LGTM
transformers
4,852
closed
issue in pretraining language model with checkpoint
# 🐛 Bug ## Information I am pre-training ALBERT from scratch and it was going fine (8 V100s). But when I resume training from a checkpoint, it uses only a single GPU, and only about 1 GB of its 32 GB of GPU RAM. ## Environment info `transformers` version: 2.10.0 launching script with: ``` python transformers/examples/language-modeling/run_language_modeling.py --train_data_file text.txt --output_dir albert_model --model_type albert --mlm --config_name test --tokenizer_name test --do_train --line_by_line --learning_rate 5e-5 --num_train_epochs 3 --save_total_limit 50 --save_steps 5000 --per_gpu_train_batch_size 150 --seed 42 --overwrite_output_dir --max_steps 200000 --fp16 --model_name_or_path albert_model/checkpoint-200000 ``` It seems that it resumes properly from the saved global step, but it is not using the GPUs properly, which is really weird: ``` /language_model/lm/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:218: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning) Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods. Defaults for this optimization level are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic Processing user overrides (additional kwargs that are not None)... After processing overrides, optimization options are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",) 06/08/2020 11:25:09 - INFO - transformers.trainer - ***** Running training ***** 06/08/2020 11:25:09 - INFO - transformers.trainer - Num examples = 28236463 06/08/2020 11:25:09 - INFO - transformers.trainer - Num Epochs = 43 06/08/2020 11:25:09 - INFO - transformers.trainer - Instantaneous batch size per device = 150 06/08/2020 11:25:09 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 1200 06/08/2020 11:25:09 - INFO - transformers.trainer - Gradient Accumulation steps = 1 06/08/2020 11:25:09 - INFO - transformers.trainer - Total optimization steps = 1000000 06/08/2020 11:25:09 - INFO - transformers.trainer - Continuing training from checkpoint, will skip to saved global_step 06/08/2020 11:25:09 - INFO - transformers.trainer - Continuing training from epoch 8 06/08/2020 11:25:09 - INFO - transformers.trainer - Continuing training from global step 200000 06/08/2020 11:25:09 - INFO - transformers.trainer - Will skip the first 11752 steps in the first epoch Epoch: 0%| | 0/35 [00:00<?, ?it/s] Iteration: 42%|███████████████████████████████████████████▋ | 9781/23531 [1:20:59<1:53:51, 2.01it/s] ``` Can anyone suggest something here?
06-08-2020 17:35:34
06-08-2020 17:35:34
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,851
closed
Run a single wandb instance per TPU run
06-08-2020 16:22:04
06-08-2020 16:22:04
Why is it specific to tpu only? Would the same logic apply in all cases?<|||||>Yes I'm guessing we only want the global master to log to wandb in DDP, like we do for Tensorboard. Not sure why we hadn't done it like that before, @borisdayma – thoughts?<|||||>I did not consider DP/DDP at the time. Actually Tensorboard logging does not consider world master either (only for logging config parameters but not metrics). I understand we should wrap the entire `wandb` and Tensorboard logics within a simple `if self.is_world_master`. May I suggest the following: * refactor logging through PR #4756 * add an equivalent `TFTrainer.is_world_master` * wrap relevant Tensorboard & wandb sections of `log_metrics` by checking `is_world_master` * call `setup_wandb` only for world master (checked either within `Trainer` & `TFTrainer` or within `setup_wandb`) Let me know if you want me to add those changes.<|||||>I think those changes would be welcome. Do you agree @julien-c ?<|||||>Yes I agree. Should we already merge this PR though?<|||||>Sure, it won't do any harm.
transformers
4,850
closed
[Benchmark] add tpu and torchscript for benchmark
This PR adds: - Torchscript memory and time benchmarking - TPU memory and time benchmarking
06-08-2020 15:22:36
06-08-2020 15:22:36
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=h1) Report > Merging [#4850](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42860e92a4a99a8be338644462cfc3f62d1379a3&el=desc) will **increase** coverage by `0.03%`. > The diff coverage is `78.62%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4850/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4850 +/- ## ========================================== + Coverage 76.97% 77.01% +0.03% ========================================== Files 128 128 Lines 21533 21615 +82 ========================================== + Hits 16575 16646 +71 - Misses 4958 4969 +11 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <46.66%> (-0.24%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.09% <50.00%> (-1.19%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.41% <66.66%> (-0.71%)` | :arrow_down: | | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `68.85% <69.76%> (+0.16%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `73.09% <96.49%> (+5.85%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `85.36% <100.00%> (-0.35%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=footer). Last update [42860e9...0267668](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). 
<|||||>**Baseline GPU results**: ======= INFERENCE - SPEED - RESULT ======= ======= MODEL CHECKPOINT: distilbert-base-uncased ======= distilbert-base-uncased/8/8: 0.007s distilbert-base-uncased/8/32: 0.009s distilbert-base-uncased/8/128: 0.022s distilbert-base-uncased/8/512: 0.1s ======= MODEL CHECKPOINT: bert-base-cased ======= bert-base-cased/8/8: 0.015s bert-base-cased/8/32: 0.025s bert-base-cased/8/128: 0.072s bert-base-cased/8/512: 0.332s ======= INFERENCE - MEMORY - RESULT ======= ======= MODEL CHECKPOINT: distilbert-base-uncased ======= distilbert-base-uncased/8/8: 274 MB distilbert-base-uncased/8/32: 298 MB distilbert-base-uncased/8/128: 324 MB distilbert-base-uncased/8/512: 552 MB ======= MODEL CHECKPOINT: bert-base-cased ======= bert-base-cased/8/8: 458 MB bert-base-cased/8/32: 462 MB bert-base-cased/8/128: 488 MB bert-base-cased/8/512: 728 MB <|||||>**Torchscript GPU results:** ======= INFERENCE - SPEED - RESULT ======= ======= MODEL CHECKPOINT: distilbert-base-uncased ======= distilbert-base-uncased/8/8: 0.005s distilbert-base-uncased/8/32: 0.009s distilbert-base-uncased/8/128: 0.02s distilbert-base-uncased/8/512: 0.096s ======= MODEL CHECKPOINT: bert-base-cased ======= bert-base-cased/8/8: 0.012s bert-base-cased/8/32: 0.025s bert-base-cased/8/128: 0.073s bert-base-cased/8/512: 0.328s ======= INFERENCE - MEMORY - RESULT ======= ======= MODEL CHECKPOINT: distilbert-base-uncased ======= distilbert-base-uncased/8/8: 274 MB distilbert-base-uncased/8/32: 296 MB distilbert-base-uncased/8/128: 312 MB distilbert-base-uncased/8/512: 552 MB ======= MODEL CHECKPOINT: bert-base-cased ======= bert-base-cased/8/8: 458 MB bert-base-cased/8/32: 460 MB bert-base-cased/8/128: 488 MB bert-base-cased/8/512: 716 MB check colab here: https://colab.research.google.com/drive/10KSu_6X6unsKXPOiwiGP6QDC1fLtADFJ?usp=sharing The differences seem very small to me. What do you think @LysandreJik ?<|||||>**TPU memory and time usage** ======= INFERENCE - SPEED - RESULT ======= ======= MODEL CHECKPOINT: distilbert-base-uncased ======= distilbert-base-uncased/8/8: 0.004s distilbert-base-uncased/8/32: 0.005s distilbert-base-uncased/8/128: 0.004s distilbert-base-uncased/8/512: 0.005s ======= MODEL CHECKPOINT: bert-base-cased ======= bert-base-cased/8/8: 0.01s bert-base-cased/8/32: 0.008s bert-base-cased/8/128: 0.009s bert-base-cased/8/512: 0.009s TPU was used for inference. Note that the time after compilation stabilized (after ~10 inferences model.forward(..) calls) was measured. ======= INFERENCE - MEMORY - RESULT ======= ======= MODEL CHECKPOINT: distilbert-base-uncased ======= distilbert-base-uncased/8/8: 1027 MB distilbert-base-uncased/8/32: 1118 MB distilbert-base-uncased/8/128: 1118 MB distilbert-base-uncased/8/512: 1118 MB distilbert-base-uncased/32/512: 1028 MB distilbert-base-uncased/64/512: 1066 MB ======= MODEL CHECKPOINT: bert-base-cased ======= bert-base-cased/8/8: 1330 MB bert-base-cased/8/32: 1332 MB bert-base-cased/8/128: 1332 MB bert-base-cased/8/512: 1332 MB bert-base-cased/32/512: 1314 MB bert-base-cased/64/512: 1334 MB In comparison to the GPU times - this seems reasonable to me. Kind of weird that for longer sequences it takes teh same amount of time or less... At the moment I'm measuring CPU usage for TPU - not at all sure how to measure memory usage correctly for TPU...any ideas @LysandreJik ? 
UPDATE: Pretty sure that memory usage is wrong for TPU Google colab is here: https://colab.research.google.com/drive/1vp9y7R2bLYTrK8hWOIo8VFHGm6M7ft0B?usp=sharing<|||||>Requesting @julien-c's review for the move of `is_tpu_available` to the utils.<|||||>UPDATE: I'm fine with PyTorch for CPU, GPU with and without torchscript results: https://docs.google.com/spreadsheets/d/1vgAIG7P3AOdBp5X91rVVu8AqnZ_hAvFzKj_fTNolAlU/edit?usp=sharing TPU running times also seem to be fine. TPU memory is not yet implemented - will probably wait here until there is a PyTorch XLA API: https://github.com/pytorch/xla/issues/2180<|||||>Good to merge for me, waiting for @julien-c to check it out<|||||>Good for me<|||||>Okey changed it to `is_torch_tpu_available()`. Think that's fine. Pinging @julien-c @LysandreJik to notice the change.<|||||>Indeed, nice change!
transformers
4,849
closed
Clean documentation
This PR addresses several problems in the documentation: - not all existing models were present, I added them - made sure to always follow the same order of sections/classes as bert for consistency, added an Overview section when not present, moved tips at the end of that overview section if they were elsewhere - fixed a few problems (links not appearing or badly formatted rst) - one example was copy-pasted without adapting the model names, fixed that too Made a list of models missing as I went by, they are tracked in [this project](https://github.com/huggingface/transformers/projects/17).
06-08-2020 14:55:44
06-08-2020 14:55:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=h1) Report > Merging [#4849](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e817747941c75c8e14f0e93755ec648269f8a14d&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4849/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4849 +/- ## ======================================= Coverage 76.57% 76.57% ======================================= Files 128 128 Lines 21497 21497 ======================================= Hits 16462 16462 Misses 5035 5035 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.00% <ø> (ø)` | | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <ø> (ø)` | | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <ø> (ø)` | | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <ø> (ø)` | | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `27.27% <0.00%> (-64.94%)` | :arrow_down: | | [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `35.71% <0.00%> (-64.29%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.60% <0.00%> (-4.81%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (-2.02%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.04% <0.00%> (-0.16%)` | :arrow_down: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=footer). Last update [e817747...d94e884](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome!
transformers
4,848
closed
TFXLMRobertaForSequenceClassification: call() got an unexpected keyword argument 'labels'
# 🐛 Bug ## Information Model I am using : TFXLMRoberta Language I am using the model on : cross-lingual The problem arises when using: * [ ] the official example scripts: * [√] my own modified scripts: ```python import tensorflow as tf from transformers import TFXLMRobertaForSequenceClassification,XLMRobertaTokenizer,XLMRobertaConfig tokenizer = XLMRobertaTokenizer.from_pretrained('jplu/tf-xlm-roberta-base') model = TFXLMRobertaForSequenceClassification.from_pretrained('jplu/tf-xlm-roberta-base') input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1 labels = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1 outputs = model(input_ids, labels=labels) loss, logits = outputs[:2] ``` and I got error here: ``` File "run_classifier.py", line 180, in train outputs = self.model(input_ids,attention_mask = input_mask, token_type_ids = token_type_ids, labels=labels, training = True) File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__ outputs = self.call(cast_inputs, *args, **kwargs) File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/modeling_tf_roberta.py", line 379, in call outputs = self.roberta(inputs, **kwargs) File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__ outputs = self.call(cast_inputs, *args, **kwargs) TypeError: call() got an unexpected keyword argument 'labels' ``` I read the notes, it says TFXLMRobertaForSequenceClassification class **overrides** TFRobertaForSequenceClassification. And [TFRobertaForSequenceClassification](https://huggingface.co/transformers/model_doc/roberta.html#tfrobertaforsequenceclassification) class's call() method accepts 'labels' argument. For the TFRobertaForSequenceClassification model's example code, I just change the model and tokenizer to XLM, and it got an error. ## Environment info - `transformers` version:2.5.1 - Platform:MacOS Catalina - Python version: 3.5 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.0 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
06-08-2020 14:17:52
06-08-2020 14:17:52
Hi! You're on `transformers` version `v2.5.1`, but the TensorFlow models can only accept labels since this PR https://github.com/huggingface/transformers/pull/4530 was merged, 4 days ago. This currently isn't available in any release, so you will have to install from source to use that feature: ``` pip install git+https://github.com/huggingface/transformers ```<|||||>@LysandreJik thanks for your reminder!
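For older releases where `labels` is not accepted, one hedged workaround (my sketch, not from the thread) is to take the logits and compute the loss outside the model with a standard Keras loss:

```python
import tensorflow as tf
from transformers import TFXLMRobertaForSequenceClassification, XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("jplu/tf-xlm-roberta-base")
model = TFXLMRobertaForSequenceClassification.from_pretrained("jplu/tf-xlm-roberta-base")

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # batch size 1
labels = tf.constant([1])

logits = model(input_ids)[0]  # without labels, the model only returns logits
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss = loss_fn(labels, logits)
```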
transformers
4,847
closed
Add optimal model size and stopping time feature
# 🚀 Feature request The [calculator](https://huggingface.co/calculator/) blog post presented an automated way to find scaling laws with model size and compute budget on language modeling tasks. Adding it to the library would help save on training costs by picking an optimal model size and training time. ## Motivation Estimating how big of a model to use and how long to train for is more of an art than a science. An automated tool to perform that task would allow researchers and practitioners to concentrate on the the high-level parts of their projects as opposed to parameter tweaking. ## Your contribution I can submit a PR with my existing work, probably integrating it within `Trainer` and/or [`knocknock`](https://github.com/huggingface/knockknock).
06-08-2020 12:53:29
06-08-2020 12:53:29
Great stuff, thank you! The energy estimates look 1000 worse than reality though, V100 running for 12 h should not consume 5432 kWh I think, else we'd be all dead. 5.4 kWh looks more reasonable. <img width="424" alt="Screenshot 2020-06-09 at 00 26 45" src="https://user-images.githubusercontent.com/424613/84082595-c9b7e380-a9e8-11ea-8c64-f221029aa60b.png"> <|||||>> Great stuff, thank you! The energy estimates look 1000 worse than reality though, V100 running for 12 h should not consume 5432 kWh I think, else we'd be all dead. 5.4 kWh looks more reasonable. > > <img alt="Screenshot 2020-06-09 at 00 26 45" width="424" src="https://user-images.githubusercontent.com/424613/84082595-c9b7e380-a9e8-11ea-8c64-f221029aa60b.png"> Ah yes - I remembered having a doubt on that, I checked again the library we used to estimate those and there might have been a unit conversion error, I'll fix that ASAP tomorrow! Edit: it's fixed, thank you @lopuhin !<|||||>This is already looking very promising! Good stuff. When clicking the "initialize in transformers" button, the code block should probably not center-align the code, but left align instead. That makes the code a lot more readable.<|||||>> This is already looking very promising! Good stuff. > > When clicking the "initialize in transformers" button, the code block should probably not center-align the code, but left align instead. That makes the code a lot more readable. Yeah that was a bit of an aesthetic choice to not break the flow of the web page, it definitely wouldn't be like this in a tool rather than a demo! <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>unstale, what's the status on this @TevenLeScao? Should we close?<|||||>@julien-c we had originally decided not to go forward with this, but I started working on it amongst the discussions about the scale of GPT-3. I didn't get to finish it before leaving for holidays two weeks ago, but the PR will be ready this week.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi! The "initialize in Huggingface" button is broken -- is there something I can do locally to solve it? I just wanted the lines of training code for a given wall-clock time.<|||||>Hey! The page seems broken, not sure why, I'll relaunch it<|||||>@TevenLeScao Thanks for the immediate reply! The button to launch in Huggingface Transformers still isn't working, but I'm happy to help debug / send any reports if it helps! Alternatively, do you think you could help me understand what the button does? i'm just hoping to generate the configuration string `n_layers=N_LAYERS,n_ctx=N_CTX`, with the variables filled in by the calculator. Thanks for your time!<|||||>I've relaunched, it should work now (just gotta figure why the page doesn't center on my desktop).<|||||>@TevenLeScao Yes, it works -- thanks! Out of curiosity, why did you use Transformer-XL as opposed to something like GPT-2? Does Transformer-XL reach a lower validation loss on Wikitext-103 as opposed to GPT-2 when training for the same number of steps?<|||||>Yeah, it was the state-of-the-art at the time!
transformers
4,846
closed
Memory issue in Transformers
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hello everyone, I am using distilBert from hugging face transformers and loading its tokenizer and model in my class. I have already quantised it to reduce the size of the model but still this class is taking nearly 500 MB of the memory though the model is taking only 100 MB. Then I looked into github repo of HuggingFace Transformers they are not using __all__ and __slots__ to their classes and functions, to reduce the memory size of the classes. My question is how do I reduce the memory size while loading any transformer model and why hugging face is not using __all__ and __slots__ in their codebase to make it more efficient. Thank you in advance. <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
06-08-2020 12:42:29
06-08-2020 12:42:29
Do you have a paper/link about those "add" and "slots" that you mention? I have never heard of it.<|||||>> Do you have a paper/link about those "add" and "slots" that you mention? I have never heard of it. Sorry, it was __all__. Please check the documentation link. https://docs.python.org/3/tutorial/modules.html#importing-from-a-package https://docs.python.org/3/reference/datamodel.html#slots<|||||>> Do you have a paper/link about those "add" and "slots" that you mention? I have never heard of it. Some stackoverflow links: https://stackoverflow.com/questions/44834/can-someone-explain-all-in-python https://stackoverflow.com/questions/472000/usage-of-slots https://stackoverflow.com/questions/14118564/how-does-slots-avoid-a-dictionary-lookup/14119024#14119024<|||||>I thought you meant some kind of deep learning optimization. I am curious to see the impact of slots. I guess it could be useful to have a closer look at how much it decreases memory usage. _However_ it seems highly unlikely that you will get the consumption down by a lot. When I read through that top answer, the memory that you save is in the _bytes_, not even kilobytes, let alone hundreds of megabytes. If you want, you can rewrite parts of transformers and benchmark whether you'll find a difference, but I doubt it. Other things that may help: - use evaluation mode and no_grad - trace your model - use something like ONNX to improve inference<|||||>> I thought you meant some kind of deep learning optimization. I am curious to see the impact of slots. I guess it could be useful to have a closer look at how much it decreases memory usage. _However_ it seems highly unlikely that you will get the consumption down by a lot. When I read through that top answer, the memory that you save is in the _bytes_, not even kilobytes, let alone hundreds of megabytes. > > If you want, you can rewrite parts of transformers and benchmark whether you'll find a difference, but I doubt it. > > Other things that may help: > > * use evaluation mode and no_grad > > * trace your model > > * use something like ONNX to improve inference Ok. Thank you.<|||||>> this class is taking nearly 500 MB of the memory though the model is taking only 100 MB @AishwaryaVerma which memory do you mean when you're speaking about 100 MB? Does your model take 100 MB disk space but the loaded one takes 500 MB RAM?<|||||>Sorry for replying yo > > > > this class is taking nearly 500 MB of the memory though the model is taking only 100 MB > > @AishwaryaVerma which memory do you mean when you're speaking about 100 MB? Does your model take 100 MB disk space but the loaded one takes 500 MB RAM? Sorry for replying you late. But I was talking about RAM.
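To make the inference-time suggestions above concrete, here is a minimal hedged sketch (illustrative model name, not the asker's exact setup) combining dynamic quantization, eval mode, and `no_grad`:

```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

# Dynamic quantization of the linear layers, as the asker already did.
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
model.eval()  # disable dropout for inference

inputs = tokenizer.encode("a short test sentence", return_tensors="pt")
with torch.no_grad():  # do not keep activations for backprop
    last_hidden_state = model(inputs)[0]
```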
transformers
4,845
closed
[Generate] beam search should generate without replacement
When doing beam search decoding and sampling instead of argmax (edge case, probably very rarely used), we need to sample **without** replacement. This is implemented by default in torch, but not in TF, see https://pytorch.org/docs/master/generated/torch.multinomial.html#torch.multinomial (torch). An easy solution is to use the Gumbel max trick instead: https://github.com/tensorflow/tensorflow/issues/9260 This will fix the sometimes flaky TFBeamSearchGenerate Tests as well: #4447
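For reference, a minimal sketch of the Gumbel-max (top-k) trick for sampling k distinct items in TensorFlow; shapes and names are illustrative and this is not necessarily the exact code used in `generate`:

```python
import tensorflow as tf

def gumbel_topk_sample(logits, k):
    """Sample k distinct ids per row by perturbing logits with Gumbel noise
    and taking the top-k, i.e. sampling without replacement."""
    uniform = tf.random.uniform(tf.shape(logits), minval=1e-9, maxval=1.0)
    gumbel_noise = -tf.math.log(-tf.math.log(uniform))
    _, token_ids = tf.math.top_k(logits + gumbel_noise, k=k)
    return token_ids

logits = tf.math.log(tf.constant([[0.1, 0.2, 0.3, 0.4]]))  # one toy distribution
print(gumbel_topk_sample(logits, k=2))
```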
06-08-2020 11:46:27
06-08-2020 11:46:27
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=h1) Report > Merging [#4845](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f9414f7553d3f1872b372990ef03205c0d1141df&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4845/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4845 +/- ## ========================================== - Coverage 76.06% 76.05% -0.01% ========================================== Files 128 128 Lines 21498 21502 +4 ========================================== + Hits 16352 16354 +2 - Misses 5146 5148 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <100.00%> (-0.24%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=footer). Last update [f9414f7...d2201fe](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,844
closed
Add support for Funnel-Transformer
# 🌟 New model addition ## Model description The recently introduced Funnel-Transformer architecture and models would be a great feature for Transformers: >Funnel-Transformer is a new self-attention model that gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, Funnel-Transformer usually has a higher capacity given the same FLOPs. In addition, with a decoder, Funnel-Transformer is able to recover the token-level deep representation for each token from the reduced hidden sequence, which enables standard pretraining. The paper can be found [here](https://arxiv.org/abs/2006.03236). ## Open source status * [x] the model implementation is available: [official GitHub repo](https://github.com/laiguokun/Funnel-Transformer) * [x] the model weights are available: [Google Cloud Bucket](https://github.com/laiguokun/Funnel-Transformer/blob/master/download_all_ckpts.sh) * [x] who are the authors: @zihangdai and @laiguokun
06-08-2020 08:25:15
06-08-2020 08:25:15
Will start to look into this.<|||||>@sgugger Any updates on this? Thanks! <|||||>The first models are uploaded and the base models are available in PyTorch (`FunnelModel` has encoder + decoder and `FunnelBaseModel` just the encoder, for sequence classification and multiple choice) in [this branch](https://github.com/huggingface/transformers/tree/funnel_transformer). Should have all checkpoints on the HuggingFace S3 and all PyTorch models on the same branch by the end of this week. Note that there might be some changes in the names as this goes under review once it's ready.
transformers
4,843
closed
remove words from vocabulary
Is there any way to remove words from the vocabulary of a pretrained model? And is there a way to see the model's vocabulary?
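For the second part of the question, a short sketch of inspecting a tokenizer's vocabulary (assuming a recent enough version of the library; removing entries is not directly supported, as the issue linked in the answer below discusses):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
vocab = tokenizer.get_vocab()           # dict mapping token string -> token id
print(len(vocab))                       # vocabulary size
print(list(vocab.items())[:5])          # a few (token, id) pairs
```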
06-08-2020 06:28:29
06-08-2020 06:28:29
Please refer to this issue: [https://github.com/huggingface/transformers/issues/4827](https://github.com/huggingface/transformers/issues/4827)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,842
closed
[Benchmark] Add optimization notebook
# 🖥 Benchmarking `transformers` ## Benchmark This notebook is about benchmarking model training with and without the dynamic padding optimization. https://github.com/ELS-RD/transformers-notebook **Would it be possible to add it to the [community notebook](https://github.com/huggingface/transformers/tree/master/notebooks) list?** (a link to the Google Colab version is provided) @julien-c @patrickvonplaten ## Set-up GPU: Nvidia P100 provided by Google Colab ## Results Using dynamic padding on MNLI provides a **4.7 times training time reduction**, with the max pad length set to 512. The effect is strong because very few examples in this dataset exceed 400 tokens. In practice it will depend on the dataset, but it always brings an improvement and, after more than 20 experiments listed in this [article](https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9q-21bf7129db9e?source=friends_link&sk=10a45a0ace94b3255643d81b6475f409), it does not seem to hurt performance.
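As an illustration of the technique being benchmarked, here is a minimal hedged sketch of per-batch (dynamic) padding with a PyTorch collate function; the names are illustrative and this is not the notebook's exact code:

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def dynamic_padding_collate(batch):
    """Pad each batch only to the length of its longest sequence
    instead of a fixed maximum such as 512 (assumes pad token id 0)."""
    input_ids = [torch.tensor(example["input_ids"]) for example in batch]
    labels = torch.tensor([example["label"] for example in batch])
    padded = pad_sequence(input_ids, batch_first=True, padding_value=0)
    attention_mask = (padded != 0).long()
    return {"input_ids": padded, "attention_mask": attention_mask, "labels": labels}

# loader = DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=dynamic_padding_collate)
```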
06-08-2020 05:30:39
06-08-2020 05:30:39
That'd be great! Do you want to open a PR? :-) I would use this line as the github line: https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb and this one as the colab notebook line: https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing <|||||>Haha, I thought you were managing the page (special order of article or whatever)... :-) So the PR is done and waiting for your validation.
transformers
4,841
closed
Multi-output regression support for Transformer models
# 🚀 Feature request ## Motivation I am trying to build a shipping-address-to-geocode predictor using RoBERTa. Here the shipping address would be the text input and the output would be a geocode (latitude and longitude). I tried using `RobertaForSequenceClassification`, but the documentation mentions that when the final layer consists of more than one class, a cross entropy loss is used automatically. However, I want to perform regression using the RMSE loss. It would be great if we could add multi-output regression support to the existing sequence classification pipeline.
06-08-2020 05:30:02
06-08-2020 05:30:02
Hello! The cross entropy loss is only used if you provide the `labels` for the model to compute the loss. If you don't provide the labels, the model doesn't output any loss, only the logits! You can then use these logits with your labels and the loss of your choosing.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
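Following that advice, here is a hedged minimal sketch of two-output regression on top of the sequence classification head (MSE shown; RMSE is its square root), with illustrative inputs and targets:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

inputs = tokenizer.encode("221B Baker Street, London", return_tensors="pt")
targets = torch.tensor([[51.5238, -0.1586]])  # illustrative latitude/longitude

logits = model(inputs)[0]  # no labels passed, so no built-in loss is computed
loss = torch.nn.functional.mse_loss(logits, targets)
loss.backward()
```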
transformers
4,840
closed
BUG while calculate LM loss in AlbertForMaskedLM
# 🐛 Bug Error Code Here: https://github.com/huggingface/transformers/blob/e33fdc93b4ecb571dd7a8002a74789ec8bfffc09/src/transformers/modeling_albert.py#L822 Here the loss is calculated over all tokens in the sequence, whereas MLM should only calculate the loss on the [MASK] tokens.
06-08-2020 04:00:59
06-08-2020 04:00:59
That's because the labels of all tokens *except* the mask tokens should be set to -100, as it's written in the [documentation](https://huggingface.co/transformers/model_doc/albert.html#albertformaskedlm). Setting these tokens to -100 will result in the cross entropy ignoring them.
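A short illustrative sketch of preparing such labels (simplified; real MLM training also masks positions at random). Note the loss argument is named `masked_lm_labels` in the library version referenced in this issue, while newer releases call it `labels`:

```python
from transformers import AlbertForMaskedLM, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

inputs = tokenizer.encode("the capital of france is paris", return_tensors="pt")
labels = inputs.clone()

# Mask one position; every label that is NOT a masked position is set to -100,
# so the cross entropy ignores it and the loss is computed on [MASK] tokens only.
inputs[0, 6] = tokenizer.mask_token_id
labels[inputs != tokenizer.mask_token_id] = -100

loss, prediction_scores = model(inputs, masked_lm_labels=labels)[:2]
```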
transformers
4,839
closed
[Longformer] Remove redundant code
This PR fixes the class LongformerSelfAttention as follows: 1. Since the method **_mask_invalid_locations()** was already run in **_sliding_chunks_matmul_qk()**, it should be removed from **forward()** to avoid code duplication. 2. In the method **_mask_invalid_locations()**, since the size of the variable **beginning_mask** is (1, w, 1, w+1) and w is always less than **seqlen**, the index **beginning_mask[:, :seqlen]** is unnecessary.
06-08-2020 02:22:22
06-08-2020 02:22:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=h1) Report > Merging [#4839](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e33fdc93b4ecb571dd7a8002a74789ec8bfffc09&el=desc) will **decrease** coverage by `0.23%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4839/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4839 +/- ## ========================================== - Coverage 76.17% 75.94% -0.24% ========================================== Files 128 128 Lines 21497 21495 -2 ========================================== - Hits 16375 16324 -51 - Misses 5122 5171 +49 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.99% <100.00%> (-0.04%)` | :arrow_down: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.83% <0.00%> (-13.93%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.36% <0.00%> (+0.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=footer). Last update [e33fdc9...6eb5f7d](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM! All the `RUN_SLOW=1` tests pass. Also pinging @ibeltagy to make sure.<|||||>LGTM<|||||>looks good to me. Thanks, @ZhuBaohe. > w is always less than seqlen Just wanted to mention that this is true only because of the [padding](https://github.com/huggingface/transformers/blob/6eb5f7d3441c2eb768da8f70dc8a602c02468267/src/transformers/modeling_longformer.py#L647).
transformers
4,838
closed
[Bert Model] ValueError: not enough values to unpack (expected 3, got 2)
# 🐛 Bug: ValueError: not enough values to unpack (expected 3, got 2) ## Information I am using Bert initialized with 'bert-base-uncased', as per the [documentation](https://huggingface.co/transformers/model_doc/bert.html), the forward step is suppose to yield 4 outputs: - last_hidden_state - pooler_output - hidden_states - attentions But when I try to intialize BERT and call forward method, it yields only 2 results. Based on the shape, I feel they are the hidden_states and pooler_output. ``` self.bert_model = BertModel.from_pretrained('bert-base-uncased') _, _, hidden_states = self.bert_model(input_ids, attn_masks, token_type_ids) ``` **Error** ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-69-6d2cb1238cab> in <module> 45 for i, data in enumerate(trainloader): 46 input_ids, attn_mask, token_type_ids = data['tokens'], data['attention_mask'], data['token_type_ids'] ---> 47 start_logits, end_logits = model.forward(input_ids, attn_mask, token_type_ids) 48 print(start_logits.shape) 49 print(end_logits.shape) <ipython-input-69-6d2cb1238cab> in forward(self, input_ids, attn_masks, token_type_ids) 23 24 # Feeding the input to BERT model to obtain hidden_states of all the layers ---> 25 _, _, hidden_states = self.bert_model(input_ids, attn_masks, token_type_ids) 26 27 # Shape of hidden_states is (1, 50, 768) ValueError: not enough values to unpack (expected 3, got 2) ``` Model I am using (Bert, XLNet ...): Bert Language I am using the model on English The problem arises when using: * [ ] the official example scripts: NA * [x] my own modified scripts: Below are scripts details. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: NA * [x] my own task or dataset: Fine tuning for my own task. ## To reproduce Steps to reproduce the behavior: 1. Copy paste the full code below in a notebook. 2. Run as is. Complete code: ``` tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # Dataset definition class TweetDataset(Dataset): def __init__(self, data, maxlen, tokenizer): self.df = data self.tokenizer = tokenizer self.maxlen = maxlen def __len__(self): return len(self.df) def __getitem__(self, index): """ Returns the token_ids_tensors, attn_mask for the item and text denoting the sentiment. :param index: :return: """ # Selecting the sentence and label at the specified index in the data frame orig_sentence = self.df.iloc[index]['text'] sentiment = self.df.iloc[index]['sentiment'] selected_text = self.df.iloc[index]['selected_text'] # Preprocessing the text to be suitable for BERT # Encode the sentence. Does the following: # 1. Inserting the CLS and SEP token in the beginning and end of the sentence # 2. Generates attention mask # 3. 
Generate token_type_ids used to differentiate first part of the sentence from the second encoded_dict = self.tokenizer.encode_plus( sentiment, orig_sentence, max_length=self.maxlen, truncation_strategy='only_second', add_special_tokens=True, pad_to_max_length=True, return_tensors='pt', return_token_type_ids=True, return_attention_mask=True ) tokens = encoded_dict['input_ids'][0] token_type_ids = encoded_dict['token_type_ids'][0] attn_mask = encoded_dict['attention_mask'][0] # Determine the beginning and end of the sentence def phrase_start_finder(sentence, phrase): if phrase not in sentence: raise ValueError('s2 not substring of s1') start = sentence.find(phrase) return len(sentence[:start].strip().split(' ')) def phrase_end_finder(sentence, phrase): if phrase not in sentence: raise ValueError('s2 not substring of s1') return phrase_start_finder(sentence, phrase) + len(phrase.strip().split(' ')) - 1 start = phrase_start_finder(orig_sentence, selected_text) end = phrase_end_finder(orig_sentence, selected_text) return { 'tokens': tokens, 'attention_mask': attn_mask, 'token_type_ids': token_type_ids, 'start': float(start), 'end': float(end), 'sentence': orig_sentence, 'selected_text': selected_text, 'sentiment': sentiment } # Defining the loader dataset = TweetDataset(train_data, 50, tokenizer) trainloader = DataLoader( dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4 ) # Defining the model class TweetModel(nn.Module): def __init__(self, freeze_bert=True): super(TweetModel, self).__init__() # Instantiating BERT model object self.bert_model = BertModel.from_pretrained('bert-base-uncased') # TODO(Viman): Before training on GPUs and finalization, remove this # Freeze bert layers # In first experiment, not training the previous layers if freeze_bert: for p in self.bert_model.parameters(): p.requires_grad = False # Final layer. Needs two outputs which are supposed to be logits: startIndex and endIndex self.dropout = nn.Dropout(0.2) # 768 because output is a vector of size 768 (Dimensionality of the encoder layer) self.fc = nn.Linear(768, 2) # Intialize the fc layer nn.init.normal_(self.fc.weight, std=0.02) nn.init.normal_(self.fc.bias, 0) def forward(self, input_ids, attn_masks, token_type_ids): # Feeding the input to BERT model to obtain hidden_states of all the layers _, _, hidden_states = self.bert_model(input_ids, attn_masks, token_type_ids) # Shape of hidden_states is (1, 50, 768) # TODO(Viman): Try mean as opposed to max # hidden_states, _ = torch.max(hidden_states, dim=1) # last_hidden_state = hidden_states[-1] print(hidden_states.shape) X = self.dropout(hidden_states) logits = self.fc(X) start_logits, end_logits = logits.split(1, dim=-1) start_logits = start_logits.squeeze(-1) end_logits = end_logits.squeeze(-1) return start_logits, end_logits model = TweetModel() # Testing the model forward implementation for i, data in enumerate(trainloader): input_ids, attn_mask, token_type_ids = data['tokens'], data['attention_mask'], data['token_type_ids'] start_logits, end_logits = model.forward(input_ids, attn_mask, token_type_ids) print(start_logits.shape) print(end_logits.shape) if i == 1: break ``` ## Expected behavior The self.bert_model(input_ids, attn_masks, token_type_ids) line should return the a tuple containing 4 elements however it seems to return 2 only. 
## Environment info - `transformers` version: 2.9.0 - Platform: Linux-4.19.112+-x86_64-with-debian-buster-sid - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.1.0 (False) - Using GPU in script?: Not yet - Using distributed or parallel set-up in script?: No - `transformers` version: 2.11.0 - Platform: Mac/Kaggle notebook (Tried in both) - Python version: 3.7 - PyTorch version (GPU?): No - Tensorflow version (GPU?): NA - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
06-07-2020 21:07:33
06-07-2020 21:07:33
From the first sight it seems to me you did not specify in your `config` file you want to output the hidden states. You may use these two lines of code: ``` config = BertConfig.from_pretrained( 'bert-base-uncased', output_hidden_states=True) self.bert_model = BertModel.from_pretrained('bert-base-uncased', config=config) ``` P.S. good luck with the Tweet Sentiment competition! :)<|||||>Oh, didn't know that `output_hidden_states=True` is needed to return the hidden states. Going to try it tonight. Might be good to modify the Transformers documentation for `forward` to reflect that too. Lot of lazy individuals like me may skip reading the config docs, use defaults and proceed to model docs. Thanks for the wishes! Very little time but hoping to make a submission. Are you participating?<|||||>To tell the truth it is written straightforward in docs: `hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True)` :)<|||||>I knew I was lazy, but would not finish reading a line all the way, I would surely blame it to aging. This is not a bug at all then but thanks a lot for being patient and still answering! Closing it now.<|||||>Hi viiids, how did you manage to overcome this problem? I am having the same one and not being able to solve it so far. Many thanks!<|||||>Hi AKtsvigun, I have tried the solution suggested by you but the issue still persist. Can anybody share how did they manage to solve this. thanks.<|||||>Hi, you can now access to hidden states via the dot (in case you did not forget to set `output_hidden_states=True` either in config or when calling `forward` method): `hidden_states = model(...).hidden_states` > Hi AKtsvigun, I have tried the solution suggested by you but the issue still persist. Can anybody share how did they manage to solve this. thanks. <|||||>I set output_hidden_states=True in the forward method, however, the same error keep showing. I restarted the kernel and doubled checked the rest of the code. not sure if it’s related to some other parameter in the training. here is my forward pass: > def forward(self, > input_ids: torch.tensor, # Indices of input sequence tokens in the vocabulary. > attention_mask: torch.tensor, # Mask to avoid performing attention on padding token indices. > # Mask values selected in [0, 1]: 1 for tokens 0 for non-tokens [PAD] > token_type_ids: torch.tensor,# Indices to indicate first and second portions of the inputs. > # 0 sentence A token and 1 sentence B token > # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP] > intent_labels: torch.tensor = None,# The labels of the Intent classifier > > slot_labels: torch.tensor = None # The labels for the slot tagging [NER] > > ): > > # Feeding the input to BERT model to obtain hidden_states of all the layers > last_hidden_states, pooler_output = self.bert_model(input_ids=input_ids, > attention_mask=attention_mask, > token_type_ids=token_type_ids, > output_hidden_states=True, > return_dict=False) > # 7. 
Define huggingface model > dropout = 0.2 > num_intent_labels = len(intent_vocab) > num_slot_labels = len(slot_vocab) > > model = ParserModel(model_name_or_path='bert-base-uncased', > dropout=dropout, > num_intent_labels=num_intent_labels, > num_slot_labels=num_slot_labels, > > ) **And here is is the training code where the issue occurs:** > outputs = model(input_ids=input_ids, > attention_mask=attention_mask, > token_type_ids=token_type_ids, > slot_labels=slot_labels, > intent_labels=intent_labels) > > > --------------------------------------------------------------------------- > ValueError Traceback (most recent call last) > <ipython-input-54-3d510ec5d296> in <module>() > 31 token_type_ids=token_type_ids, > 32 slot_labels=slot_labels, > ---> 33 intent_labels=intent_labels) > 34 slot_loss, intent_loss = outputs[2],outputs[3] > 35 slot_loss.backward(retain_graph=True) #need to retain_graph when working with multiple losses > > 3 frames > /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) > 923 elif input_ids is not None: > 924 input_shape = input_ids.size() > --> 925 batch_size, seq_length = input_shape > 926 elif inputs_embeds is not None: > 927 input_shape = inputs_embeds.size()[:-1] > > ValueError: not enough values to unpack (expected 2, got 1) **Are you suspecting other places of the code to be the issue?**<|||||>@ENGSamShamsan > last_hidden_states, pooler_output = self.bert_model(..) The first element of the output is `loss`, the next is `logits` and only then come the hidden states. You need to make it `loss, logits, last_hidden_states, pooler_output = self.bert_model(...)`<|||||>@Aktsvigun Thank you so much for the quick respond. I applied the four variables previously and once more after you your last post but still the same error persist. this error took more time than I expect lol<|||||>I have solved the same issue with u but in a different situation. You should double-check the batch size of your input data. `tokens_ids_tensor` and `attn_mask` should be a 2d tensor but not 1d. 
While batch size is 1, they should look like: ``` tensor([[ 101, 1030, 1054, 2595, 2015, 21486, 2620, 1030, 3841, 7377, 8197, 3217, 1030, 1054, 2595, 2015, 21486, 2620, 1024, 1030, 3841, 7377, 8197, 3217, 1001, 15333, 6342, 2483, 9103, 102]], device='cuda:0') tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0') ``` but not ``` tensor([ 101, 1030, 1054, 2595, 2015, 21486, 2620, 1030, 3841, 7377, 8197, 3217, 1030, 1054, 2595, 2015, 21486, 2620, 1024, 1030, 3841, 7377, 8197, 3217, 1001, 15333, 6342, 2483, 9103, 102], device='cuda:0') tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cuda:0') ``` Further, for *n* batch size, they should look like: ``` seq is tensor([[ 101, 4911, 1024, ..., 0, 0, 0], [ 101, 2054, 2057, ..., 2860, 28400, 102], [ 101, 7409, 2000, ..., 1037, 19062, 102], ..., [ 101, 1001, 2446, ..., 1024, 1013, 102], [ 101, 1001, 1037, ..., 2522, 1013, 102], [ 101, 1001, 4918, ..., 1013, 1017, 102]], device='cuda:0') attn_masks is tensor([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], ..., [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1]], device='cuda:0') ```<|||||>> I have solved the same issue with u but in a different situation. You should double-check the batch size of your input data. > > `tokens_ids_tensor` and `attn_mask` should be a 2d tensor but not 1d. > While batch size is 1, they should look like: > > ``` > tensor([[ 101, 1030, 1054, 2595, 2015, 21486, 2620, 1030, 3841, 7377, > 8197, 3217, 1030, 1054, 2595, 2015, 21486, 2620, 1024, 1030, > 3841, 7377, 8197, 3217, 1001, 15333, 6342, 2483, 9103, 102]], > device='cuda:0') > tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 1, 1, 1, 1, 1, 1]], device='cuda:0') > ``` > > but not > > ``` > tensor([ 101, 1030, 1054, 2595, 2015, 21486, 2620, 1030, 3841, 7377, > 8197, 3217, 1030, 1054, 2595, 2015, 21486, 2620, 1024, 1030, > 3841, 7377, 8197, 3217, 1001, 15333, 6342, 2483, 9103, 102], > device='cuda:0') > tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 1, 1, 1, 1, 1, 1], device='cuda:0') > ``` > > Further, for _n_ batch size, they should look like: > > ``` > seq is tensor([[ 101, 4911, 1024, ..., 0, 0, 0], > [ 101, 2054, 2057, ..., 2860, 28400, 102], > [ 101, 7409, 2000, ..., 1037, 19062, 102], > ..., > [ 101, 1001, 2446, ..., 1024, 1013, 102], > [ 101, 1001, 1037, ..., 2522, 1013, 102], > [ 101, 1001, 4918, ..., 1013, 1017, 102]], device='cuda:0') > attn_masks is tensor([[1, 1, 1, ..., 0, 0, 0], > [1, 1, 1, ..., 1, 1, 1], > [1, 1, 1, ..., 1, 1, 1], > ..., > [1, 1, 1, ..., 1, 1, 1], > [1, 1, 1, ..., 1, 1, 1], > [1, 1, 1, ..., 1, 1, 1]], device='cuda:0') > ``` .unsqueeze(0) will do the job<|||||>Hi everyone, I am having the same issue in the forward function but with the distill bert uncased model. Can anybody help me with that ``` 12 def forward(self, ids, mask): ---> 13 _, output_1= self.l1(ids, attention_mask = mask) 14 output_2 = self.l2(output_1) 15 output = `self.l3(output_2)` ValueError: not enough values to unpack (expected 2, got 1)`` ``` Also distill bert does not encode token_type_ids so I set them as False. May be this is the issue.<|||||>@milind29 seems like your model returns only the output values: at least the mistake says `self.l1(ids, attention_mask = mask)` outputs precisely one variable, and you try to expose it into two variables.
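A minimal sketch of the fix discussed in this thread, assuming transformers 2.x-style tuple outputs (newer versions return a model output object whose `.hidden_states` attribute plays the same role):

```python
from transformers import BertConfig, BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
config = BertConfig.from_pretrained("bert-base-uncased", output_hidden_states=True)
model = BertModel.from_pretrained("bert-base-uncased", config=config)

# return_tensors="pt" already yields 2-d tensors of shape (1, seq_len),
# so no extra unsqueeze(0) is needed for a single example.
inputs = tokenizer.encode_plus("a short example sentence", return_tensors="pt")

# With output_hidden_states=True the forward pass returns three values, not two.
last_hidden_state, pooler_output, hidden_states = model(**inputs)
print(last_hidden_state.shape, len(hidden_states))  # (1, seq_len, 768) and 13 layer outputs
```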
transformers
4,837
closed
[examples] consolidate summarization examples
Consolidates the summarization examples for the T5 and BertAbs models into one, addressing [#3826](https://github.com/huggingface/transformers/issues/3826). @sshleifer
06-07-2020 18:54:34
06-07-2020 18:54:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=h1) Report > Merging [#4837](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e33fdc93b4ecb571dd7a8002a74789ec8bfffc09&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4837/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4837 +/- ## ======================================= Coverage 76.17% 76.18% ======================================= Files 128 128 Lines 21497 21497 ======================================= + Hits 16375 16377 +2 + Misses 5122 5120 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.27% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=footer). Last update [e33fdc9...ed4de25](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>That's really cool, thanks for working on this @aretius!<|||||>@sshleifer Thanks for approving the PR! Also, I am really interested to contribute more. It would be really great for me to be offered more chance to contribute to the repo :)
transformers
4,836
closed
Remove unneeded call convert_ids_to_tokens.
On the base `PreTrainedTokenizer` object, calling `convert_tokens_to_string` returns an error because the method tries to call `convert_ids_to_tokens.` however the input should already be a list of tokens not ids. I noticed this when creating a tokenizer that inherits from the Pretrained Tokenizer, and trying to test it with: ```python assert transformers_tokenizer.decode([1, 15, 22, 15, 2]) == ">MVM<" ``` which eventually results in: ```python self = <gcgc.third_party.GCGCTransformersTokenizer object at 0x16539ead0>, ids = ['>', 'M', 'V', 'M', '<'], skip_special_tokens = False def convert_ids_to_tokens( self, ids: Union[int, List[int]], skip_special_tokens: bool = False ) -> Union[int, List[int]]: """ Converts a single index or a sequence of indices (integers) in a token " (resp.) a sequence of tokens (str), using the vocabulary and added tokens. Args: skip_special_tokens: Don't decode special tokens (self.all_special_tokens). Default: False """ if isinstance(ids, int): if ids in self.added_tokens_decoder: return self.added_tokens_decoder[ids] else: return self._convert_id_to_token(ids) tokens = [] for index in ids: > index = int(index) E ValueError: invalid literal for int() with base 10: '>' ``` But I don't think this should get called in the first place? I think I'm actually go override this now that I understand the process better, but this still seemed incorrect.
06-07-2020 18:21:02
06-07-2020 18:21:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=h1) Report > Merging [#4836](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac921f0385616be40adbbd5302d7f58d5c976ca8&el=desc) will **decrease** coverage by `1.12%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4836/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4836 +/- ## ========================================== - Coverage 78.45% 77.32% -1.13% ========================================== Files 146 146 Lines 26047 26047 ========================================== - Hits 20434 20142 -292 - Misses 5613 5905 +292 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.36% <100.00%> (+0.40%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=footer). Last update [ac921f0...3284308](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
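A small illustration, not part of the PR, of the decode pipeline the fix restores: ids go through `convert_ids_to_tokens` exactly once, and `convert_tokens_to_string` only joins tokens.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

ids = tokenizer.encode("hello world", add_special_tokens=False)
tokens = tokenizer.convert_ids_to_tokens(ids)      # ids -> tokens, e.g. ['hello', 'world']
text = tokenizer.convert_tokens_to_string(tokens)  # tokens -> string; no id lookup happens here
print(ids, tokens, text)
```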
transformers
4,835
closed
Any reason why BART does not have a ForTokenClassification variant?
Are there any theoretical constraints to creating a ForTokenClassification variant for BART? In my current project I am using a sequence classification head + a token classification head, so I would like to implement the token classification part manually. However, since it is not implemented in the repository, I wonder if there are any particular reasons why one should not do this.
06-07-2020 17:10:36
06-07-2020 17:10:36
No theoretical reasons, go for it!
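Since the library has no BartForTokenClassification at this point, a rough, hypothetical sketch of what such a head could look like; the checkpoint name, label count and dropout below are placeholders, not anything from the repository:

```python
import torch.nn as nn
from transformers import BartModel

class BartTokenClassificationSketch(nn.Module):
    """Hypothetical token-classification head on top of BartModel (not a library class)."""

    def __init__(self, model_name="facebook/bart-large", num_labels=9, dropout=0.1):
        super().__init__()
        self.bart = BartModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(self.bart.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask=None):
        # First element of the output tuple is the last hidden state, shape (batch, seq_len, d_model).
        hidden = self.bart(input_ids, attention_mask=attention_mask)[0]
        return self.classifier(self.dropout(hidden))  # per-token logits
```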
transformers
4,834
closed
Why init specific layers rather than whole model in BART
In BartForSequenceClassification I can see that, rather than calling `self.init_weights()` (which most other models use), only the classification head is specifically initialized. https://github.com/huggingface/transformers/blob/c58e6c129a153ca1a5021e5d7e642d00bf011e20/src/transformers/modeling_bart.py#L1046-L1047 Is there any advantage of doing this for the head(s) only rather than for the whole model? I can think of a speed improvement, but apart from that I'm not sure.
06-07-2020 17:09:12
06-07-2020 17:09:12
The line ```python self.model = BartModel(config) ``` already calls `init_weights`, so no need to run it twice.<|||||>Ah, you are of course correct. I noticed this because the other models (XXXForXXX) seem to just re-call init_weights, e.g. https://github.com/huggingface/transformers/blob/e33fdc93b4ecb571dd7a8002a74789ec8bfffc09/src/transformers/modeling_bert.py#L1008-L1015 But I guess that in practice it does not matter, it'll just be a bit slower but the result will be the same. Thanks!
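A short sketch of the same idea from the user side, assuming the pretrained backbone keeps the weights loaded by `from_pretrained` and only a freshly added head needs explicit initialization (checkpoint and head size are placeholders):

```python
import torch.nn as nn
from transformers import BartModel

# The backbone's weights come from the checkpoint and should not be re-initialized;
# only the new head gets an explicit init, mirroring the library pattern discussed above.
backbone = BartModel.from_pretrained("facebook/bart-large")
head = nn.Linear(backbone.config.d_model, 3)  # 3 labels is an arbitrary placeholder
nn.init.normal_(head.weight, std=0.02)
nn.init.zeros_(head.bias)
```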
transformers
4,833
closed
GlossBert adding
Here is a link: https://github.com/HSLCY/GlossBERT - it should do better for word vector representations, and there is already a pretrained model that may need to be converted to a different format.
06-07-2020 13:11:02
06-07-2020 13:11:02
Actually, it can already be loaded and seems to work in all the cases I have tested so far: model_a = BertModel.from_pretrained("/folder/") tokenizer_a = BertTokenizer.from_pretrained("/folder/")<|||||>It can be downloaded from the link.<|||||>As you say, it is already available for download in their repository. If you want them to add their model to the transformers model hub, you can open an issue on their GitHub repo and ask them to add the model [here](https://huggingface.co/models?search=glossbert).
transformers
4,832
closed
Why exclude LayerNorm.bias from weight decay when finetuning?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L306 In the original BERT implementation and in earlier versions of this repo, both LayerNorm.weight and LayerNorm.bias are decayed. <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
06-07-2020 11:24:50
06-07-2020 11:24:50
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I have the same question<|||||>Check this [discussion](https://forums.fast.ai/t/is-weight-decay-applied-to-the-bias-term/73212/6)
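For context, a runnable sketch of the grouping the question points at, mirroring the linked trainer code; the rationale usually given (see the discussion linked above) is that decaying biases and LayerNorm parameters adds little regularization benefit, even though the original BERT code did decay them:

```python
from transformers import AdamW, BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

# Parameters whose names contain these fragments are excluded from weight decay;
# "bias" also matches LayerNorm.bias, which is the case asked about here.
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = AdamW(optimizer_grouped_parameters, lr=5e-5)
```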
transformers
4,831
closed
TF Checkpoints
Align how the checkpoints are managed with the way it is done in the PyTorch trainer.
06-07-2020 10:44:48
06-07-2020 10:44:48
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=h1) Report > Merging [#4831](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.63%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4831/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4831 +/- ## ========================================== + Coverage 74.52% 76.16% +1.63% ========================================== Files 128 128 Lines 21497 21495 -2 ========================================== + Hits 16021 16372 +351 + Misses 5476 5123 -353 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `19.04% <0.00%> (+0.17%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (+0.18%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (+6.29%)` | :arrow_up: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (+75.48%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=footer). Last update [c58e6c1...78f2040](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,830
closed
Add diagnostic dataset of glue tasks for prediction
Since the diagnostic dataset has no training set, it is common in current research to run prediction on it with a model fine-tuned on the MNLI task. In our experiments, with the logic added in this request, we reach 47.1% on the diagnostic dataset after fine-tuning for 10k steps on MNLI.
06-07-2020 07:23:07
06-07-2020 07:23:07
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
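A rough sketch of the workflow this PR relies on: scoring diagnostic premise/hypothesis pairs with a checkpoint fine-tuned on MNLI. The checkpoint directory and example sentences are placeholders:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Placeholder path to a model fine-tuned on MNLI (e.g. with run_glue.py).
model = BertForSequenceClassification.from_pretrained("./mnli_finetuned")
tokenizer = BertTokenizer.from_pretrained("./mnli_finetuned")
model.eval()

inputs = tokenizer.encode_plus(
    "The cat sat on the mat.", "A cat is sitting on a mat.", return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs)[0]
# The argmax indexes into the MNLI label set the checkpoint was trained with.
print(logits.argmax(dim=-1).item())
```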
transformers
4,829
closed
[examples] Add trainer support for question-answering
This PR adds trainer support for the question-answering task, regarding issue #4784. **TODOs** - [ ] Add automatic data loading. Right now it requires the user to specify the data directory. Decided not to use `tfds` because I think it will soon be replaced by `nlp` here - [ ] Add evaluation - [ ] Test all models. @julien-c @patrickvonplaten
06-07-2020 04:57:50
06-07-2020 04:57:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=h1) Report > Merging [#4829](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a93991158f15993eba9ab421d82766b892f948&el=desc) will **increase** coverage by `1.00%`. > The diff coverage is `49.41%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4829/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4829 +/- ## ========================================== + Coverage 76.84% 77.84% +1.00% ========================================== Files 141 142 +1 Lines 24685 24768 +83 ========================================== + Hits 18969 19281 +312 + Misses 5716 5487 -229 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/datasets/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL3NxdWFkLnB5) | `47.56% <47.56%> (ø)` | | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <100.00%> (ø)` | | | [src/transformers/data/datasets/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <0.00%> (+73.10%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=footer). Last update [d2a9399...5497ae6](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @patil-suraj, I think @julien-c can answer questions regarding the Trainer better :-) <|||||>Just in case you wanted to use Weights & Biases, you should just have to do a `pip install wandb` and it should automatically track everything.<|||||>>My main question is, have you reproduced the training results that are documented in https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md ? I didn't train `bert-base` (just trained for 1 epoch to see if the implementation was working) but instead I used it to train `electra-base` and it gave better results than mentioned in the paper In the paper the authors mentioned that electra-base achieves 84.5 EM and 90.8 F1. I was able to achieve 85.05 EM and 91.60 F1. Sadly didn't use wandb, you can find the colab [here](https://colab.research.google.com/drive/11yo-LaFsgggwmDSy2P8zD3tzf5cCb-DU?usp=sharing) It uses the same code, just copy pasted in colab. But if required I can try to reproduce the documented results.<|||||>> I didn't train `bert-base` (just trained for 1 epoch to see if the implementation was working) I can do it tomorrow morning, I currently have a V100 on hand:)<|||||>Just a note that I tried `python run_squad_trainer.py --model_name_or_path bert-base-uncased --model_type bert --data_dir squad --output_dir /tmp/debug_squad/ --overwrite_output_dir --do_train --do_eval --evaluate_during_training --logging_steps 100`. For some reason I don't get any evaluation metric during training (I was expecting `loss` or `eval_loss`).<|||||>> Just in case you wanted to use Weights & Biases, you should just have to do a `pip install wandb` and it should automatically track everything. @borisdayma yes, there are no start and end positions in eval dataset which is why eval loss is not calculated. I will add that. Were you able to see training loss ? Thanks !<|||||>> yes, there are no start and end positions in eval dataset which is why eval loss is not calculated. I will add that. Were you able to see training loss ? Hmm, I'm pretty sure the dev-v1.1.json file has the same labels as the training one (start positions). Otherwise we wouldn't have any eval results at all in the readme. No? pinging @LysandreJik on this:)<|||||>> @borisdayma yes, there are no start and end positions in eval dataset which is why eval loss is not calculated. I will add that. Were you able to see training loss ? Yes, training loss was logged.<|||||>@julien-c In the two `TensorDatasets` created (one for training and one for evaluation), only the training has the correct `start_position` and `end_position`. I believe this is because while the training dataset only has one possible answer per question, the dev and validation datasets both have multiple answers per question (usually different-lengths spans).<|||||>@LysandreJik So I guess we should update the eval dataset to pick one start_position (or the most frequent one) – how do people do it usually with SQuAD eval, do you know @thomwolf? Maybe this can be done in a second PR though. Everyone ok with merging this (renaming `run_squad_trainer.py` to `run_squad.py`)?<|||||>@patil-suraj Can you resolve the conflicts and switch to the new `default_data_collator` now that it should work for your dict inputs? I can take over if you don't have time, but this is the only thing standing in the way of merging this PR.<|||||>@sgugger Yes, I'll switch to the new data collator. 
<|||||>Hi @sgugger, you can take this over, I'm running short on time ;(<|||||>Thanks @sgugger :)<|||||>@sgugger can you please rename `run_squad_trainer.py` to `run_squad.py`? see also #5547
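On the multiple-answers point above: the SQuAD dev file stores a list of acceptable answers per question, and one simple convention when a single start/end position is needed is to keep the first one. A hedged sketch, with the file path as an assumption:

```python
import json

with open("dev-v1.1.json") as f:  # assumed local copy of the SQuAD 1.1 dev set
    squad = json.load(f)

qa = squad["data"][0]["paragraphs"][0]["qas"][0]
first = qa["answers"][0]                    # keep only the first listed answer
start_char = first["answer_start"]
end_char = start_char + len(first["text"])  # character span; token positions still need mapping
print(qa["question"], "->", first["text"], (start_char, end_char))
```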
transformers
4,828
closed
`run_glue.py` fails with models `bert-base-cased`, `distil-bert-cased`, others
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): `bert-base-cased` Language I am using the model on (English, Chinese ...): `English` The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. download GLUE data 2. run `python run_glue.py --model_name_or_path bert-base-cased` Error message: ``` 06/07/2020 00:01:06 - INFO - transformers.trainer - ***** Running training ***** 06/07/2020 00:01:06 - INFO - transformers.trainer - Num examples = 3668 06/07/2020 00:01:06 - INFO - transformers.trainer - Num Epochs = 3 06/07/2020 00:01:06 - INFO - transformers.trainer - Instantaneous batch size per device = 4 06/07/2020 00:01:06 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 4 06/07/2020 00:01:06 - INFO - transformers.trainer - Gradient Accumulation steps = 1 06/07/2020 00:01:06 - INFO - transformers.trainer - Total optimization steps = 2751 Iteration: 1%|█▊ | 9/917 [00:24<40:42, 2.69s/it] Epoch: 0%| | 0/3 [00:24<?, ?it/s] Traceback (most recent call last): File "./run_glue.py", line 247, in <module> main() File "./run_glue.py", line 174, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 471, in train tr_loss += self._training_step(model, inputs, optimizer) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 571, in _training_step outputs = model(**inputs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 1143, in forward inputs_embeds=inputs_embeds, File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 727, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 174, in forward inputs_embeds = self.word_embeddings(input_ids) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 1484, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range: Tried to a ccess index 29597 out of table with 28995 rows. at /opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418 ``` ## Expected behavior Model successfully trains. 
The script works well on my machine for many other models, including `bert-base-uncased` ~~and `distilbert-base-cased`~~. ## Environment info ``` - `transformers` version: 2.11.0 - Platform: Linux-3.10.0-693.el7.x86_64-x86_64-with-centos-7.4.1708-Core - Python version: 3.7.7 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.0.0 (True) - Using GPU in script?: [yes] - Using distributed or parallel set-up in script?: [yes, parallel, but I received the same error when I ran the script just using the CPU] ```
06-07-2020 04:06:07
06-07-2020 04:06:07
update: this also happens with `distil-bert-cased` for `RTE` and `WNLI` tasks: ``` ds) # (bs, seq_length, dim) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_distilbert.py", line 91, in forward word_embeddings = self.word_embeddings(input_ids) # (bs, max_seq_length, dim) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 1484, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range: Tried to access index 29236 out of table with 28995 rows. at /opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418 ```<|||||>update: a different error occurs with `roberta-base` on `STS-B`: ``` 06/06/2020 23:49:26 - INFO - transformers.trainer - ***** Running training ***** 06/06/2020 23:49:26 - INFO - transformers.trainer - Num examples = 5749 06/06/2020 23:49:26 - INFO - transformers.trainer - Num Epochs = 3 06/06/2020 23:49:26 - INFO - transformers.trainer - Instantaneous batch size per device = 4 06/06/2020 23:49:26 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16 06/06/2020 23:49:26 - INFO - transformers.trainer - Gradient Accumulation steps = 1 06/06/2020 23:49:26 - INFO - transformers.trainer - Total optimization steps = 1080 Iteration: 0%| | 0/360 [00:09<?, ?it/s] Epoch: 0%| | 0/3 [00:09<?, ?it/s] wandb: Waiting for W&B process to finish, PID 7670 Traceback (most recent call last): File "./run_glue.py", line 246, in <module> main() File "./run_glue.py", line 173, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 471, in train tr_loss += self._training_step(model, inputs, optimizer) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 571, in _training_step outputs = model(**inputs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply wandb: Program failed with code 1. Press ctrl-c to abort syncing. return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bart.py", line 1103, in forward loss = F.cross_entropy(logits.view(-1, self.config.num_labels), labels.view(-1)) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 2021, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 1838, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward ```<|||||>Hello! I tried to reproduce, but couldn't get any results to crash. Do you mind showing me the exact command you use to launch the script?<|||||>Thanks for the response @LysandreJik -- here's an example of one command: ``` CUDA_VISIBLE_DEVICES="" python run_glue.py --model_name_or_path bert-base-cased --tokenizer_name bert-base-cased --task_name MRPC --do_train --do_eval --save_steps -1 --data_dir=./glue_data/MRPC/ --max_seq_length 256 --per_device_eval_batch_size=16 --per_device_train_batch_size=16 --learning_rate 2e-5 --num_train_epochs 3 --output_dir=./glue_models/bert-base-cased/MRPC/ ``` Here's the full output: ``` 06/11/2020 13:19:37 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /u/.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1 Made tokenizer: <transformers.tokenization_bert.BertTokenizer object at 0x7fef6cd09550> 06/11/2020 13:19:38 - INFO - transformers.modeling_utils - loading weights file https://cdn.huggingface.co/bert-base-cased-pytorch_model.bin from cache at /u/.cache/torch/transformers/d8f11f061e407be64c4d5d7867ee61d1465263e24085cfa26abf183fdc830569.3fadbea36527ae472139fe84cddaa65454d7429f12d543d80bfc3ad70de55ac2 06/11/2020 13:19:41 - INFO - transformers.modeling_utils - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] 06/11/2020 13:19:41 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] 06/11/2020 13:19:41 - INFO - filelock - Lock 140666173336656 acquired on ./glue_data/MRPC/cached_train_BertTokenizer_256_mrpc.lock 06/11/2020 13:19:41 - INFO - transformers.data.datasets.glue - Loading features from cached file ./glue_data/MRPC/cached_train_BertTokenizer_256_mrpc [took 0.110 s] 06/11/2020 13:19:41 - INFO - filelock - Lock 140666173336656 released on ./glue_data/MRPC/cached_train_BertTokenizer_256_mrpc.lock 06/11/2020 13:19:41 - INFO - filelock - Lock 140666173337552 acquired on 
./glue_data/MRPC/cached_dev_BertTokenizer_256_mrpc.lock 06/11/2020 13:19:41 - INFO - transformers.data.datasets.glue - Loading features from cached file ./glue_data/MRPC/cached_dev_BertTokenizer_256_mrpc [took 0.013 s] 06/11/2020 13:19:41 - INFO - filelock - Lock 140666173337552 released on ./glue_data/MRPC/cached_dev_BertTokenizer_256_mrpc.lock 06/11/2020 13:19:41 - INFO - transformers.trainer - Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true" wandb: Tracking run with wandb version 0.8.36 wandb: Wandb version 0.9.1 is available! To upgrade, please run: wandb: $ pip install wandb --upgrade wandb: Run data is saved locally in wandb/run-20200611_171941-90an9vn0 wandb: Syncing run solar-sun-89 wandb: ⭐️ View project at https://app.wandb.ai/jxmorris12/huggingface wandb: 🚀 View run at https://app.wandb.ai/jxmorris12/huggingface/runs/90an9vn0 wandb: Run `wandb off` to turn off syncing. 06/11/2020 13:19:43 - INFO - transformers.trainer - ***** Running training ***** 06/11/2020 13:19:43 - INFO - transformers.trainer - Num examples = 3668 06/11/2020 13:19:43 - INFO - transformers.trainer - Num Epochs = 3 06/11/2020 13:19:43 - INFO - transformers.trainer - Instantaneous batch size per device = 16 06/11/2020 13:19:43 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16 06/11/2020 13:19:43 - INFO - transformers.trainer - Gradient Accumulation steps = 1 06/11/2020 13:19:43 - INFO - transformers.trainer - Total optimization steps = 690 Iteration: 1%|█▍ | 2/230 [00:16<31:42, 8.35s/it] Epoch: 0%| | 0/3 [00:16<?, ?it/s] Traceback (most recent call last): File "run_glue.py", line 247, in <module> main() File "run_glue.py", line 174, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 471, in train tr_loss += self._training_step(model, inputs, optimizer) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 571, in _training_step outputs = model(**inputs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 1143, in forward inputs_embeds=inputs_embeds, File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 727, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 174, in forward inputs_embeds = self.word_embeddings(input_ids) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 1484, in embedding return torch.embedding(weight, input, padding_idx, 
scale_grad_by_freq, sparse) RuntimeError: index out of range: Tried to acc wandb: Waiting for W&B process to finish, PID 46776 ess index 29597 out of table with 28995 rows. at /opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418 wandb: Program failed with code 1. Press ctrl-c to abort syncing. wandb: Process crashed early, not syncing files ``` But -- at least on my machine -- I think a host of combinations of model/task combinations (`--model_name_or_path` and `--data_dir`) fail, as I mentioned above.<|||||>I had the same problem, but on my own data set. Have you solved your problem?<|||||>Nope @SizhaoXu.<|||||>@LysandreJik -- have you had a chance to check this out again? Thanks.<|||||>> Nope @SizhaoXu. For question: RuntimeError: index out of range: Tried to access index 29597 out of table with 28995 rows. The reason for this question is that the maximum sequence length of the model is 512. <|||||>> > Nope @SizhaoXu. > > For question: RuntimeError: index out of range: Tried to access index 29597 out of table with 28995 rows. > The reason for this question is that the maximum sequence length of the model is 512. This solves my problem. I hope that will help you<|||||>@SizhaoXu how did you fix it then? by truncating the inputs?<|||||>> @SizhaoXu how did you fix it then? by truncating the inputs? yes! you can try it. The maximum sequence length I set is 512 and I keep the first 200 words, the last 200 words and the middle 112 words<|||||>Thanks @SizhaoXu but my problems go beyond that specific one.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> update: a different error occurs with `roberta-base` on `STS-B`: > > ``` > 06/06/2020 23:49:26 - INFO - transformers.trainer - ***** Running training ***** > 06/06/2020 23:49:26 - INFO - transformers.trainer - Num examples = 5749 > 06/06/2020 23:49:26 - INFO - transformers.trainer - Num Epochs = 3 > 06/06/2020 23:49:26 - INFO - transformers.trainer - Instantaneous batch size per device = 4 > 06/06/2020 23:49:26 - INFO - transformers.trainer - Total train batch size (w. 
parallel, distributed & accumulation) = 16 > 06/06/2020 23:49:26 - INFO - transformers.trainer - Gradient Accumulation steps = 1 > 06/06/2020 23:49:26 - INFO - transformers.trainer - Total optimization steps = 1080 > Iteration: 0%| | 0/360 [00:09<?, ?it/s] > Epoch: 0%| | 0/3 [00:09<?, ?it/s] > > wandb: Waiting for W&B process to finish, PID 7670 > Traceback (most recent call last): > File "./run_glue.py", line 246, in <module> > main() > File "./run_glue.py", line 173, in main > model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None > File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 471, in train > tr_loss += self._training_step(model, inputs, optimizer) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 571, in _training_step > outputs = model(**inputs) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ > result = self.forward(*input, **kwargs) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward > outputs = self.parallel_apply(replicas, inputs, kwargs) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply > wandb: Program failed with code 1. Press ctrl-c to abort syncing. > return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply > output.reraise() > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise > raise self.exc_type(msg) > RuntimeError: Caught RuntimeError in replica 0 on device 0. > Original Traceback (most recent call last): > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker > output = module(*input, **kwargs) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ > result = self.forward(*input, **kwargs) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bart.py", line 1103, in forward > loss = F.cross_entropy(logits.view(-1, self.config.num_labels), labels.view(-1)) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 2021, in cross_entropy > return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) > File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 1838, in nll_loss > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) > RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward > ``` Hello ! I encountered the same problem when using the STSB dataset for fine-tuning BERT. How did you solve this problem?
transformers
4,827
closed
How to remove tokens?
I only know how to add tokens, but how do I remove some special tokens?
06-07-2020 04:00:29
06-07-2020 04:00:29
From what I can observe, there are two types of tokens in your tokenizer: base tokens, which can be derived with `tokenizer.encoder` and the added ones: `tokenizer.added_tokens_encoder`. Depending on which token you want to remove, you use `del tokenizer.encoder` or `del tokenizer.added_tokens_encoder`. ¡NB! Do not forget to resize the embedding layer of your model with `model.resize_token_embeddings(len(tokenizer))`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi, I can't seem to remove tokens from the main vocabulary with tokenizer.encoder. I get `AttributeError: 'BertTokenizerFast' object has no attribute 'encoder'`. Also. if we remove some tokens from the middle of the whole vocabulary file, can the model set the right embeddings for new token ids? Will the specific token ids and embeddings be removed from our vocab file and model? What I currently do: ``` del tokenizer.vocab[unwanted_words] model.resize_token_embeddings(len(tokenizer)) ``` We're decreasing vocabulary size here, but will my model understand which tokens were removed? <|||||>@mitramir55 I can't imagine how the model would know which tokens were removed from the vocabulary. I have the same question. Perhaps we would have to remove weight elements one by one from the model's lookup embeddings. Any other ideas?<|||||>@mitramir55 Does del deletes the token from the tokenizer? It didn't seem to work for me<|||||>Hi @snoop2head and @avi-jit, No, I did not delete any word from the vocabulary. If you think about it, it's not even logical to delete a word - an id in the input or output of a trained model. All I did was adding the words I wanted to be in the model's vocabulary while training , and then setting the probability of some words I didn't want to minus infinity while using the model -predicting the next word. This way the model won't choose from them and will go to the next most probable option. ``` ### Adding words before training model_path = 'HooshvareLab/distilbert-fa-zwnj-base' model = AutoModelForMaskedLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True) tokenizer.add_tokens(['this', 'that', 'those']) model.resize_token_embeddings(len(tokenizer)) # then the training... 
``` Now let's say we want our trained transformer to suggest a word for an incomplete sentence without considering some specific "banned" words: ``` ### setting the probability of some words being generated to -inf all_banned_tokens = ['«', ':', '،', '/', '*', ']', '[', '؟', '…', 'ی', tokenizer.unk_token] all_banned_tokens = [i.strip() for i in all_banned_tokens] banned_ids = [] banned_ids = [i[0] for i in tokenizer.batch_encode_plus(all_banned_tokens, add_special_tokens=False).input_ids] def get_transformer_suggestions(sequence, model, tokenizer, top_k=5, banned_ids = banned_ids): """ gets a sequence of words and outputs top_k suggested words""" suggestion = [] ids_main = tokenizer.encode(sequence, return_tensors="pt", add_special_tokens=True) ids_ = ids_main.detach().clone() position = torch.where(ids_main == tokenizer.mask_token_id) positions_list = position[1].numpy().tolist() model_logits = model(ids_)['logits'][0][positions_list[0]] model_logits[banned_ids] = -math.inf top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist() for j in range(len(top_k_tokens)): suggestion.append(tokenizer.decode(top_k_tokens[j])) return suggestion candidates = get_transformer_suggestions(input_sentence = f'this is an amazing {tokenizer.mask_token}', model= model, tokenizer=tokenizer, top_k=5, anned_ids=banned_ids) ``` I hope this was helpful. Tell me if there is anything else I can explain to make it clear. <|||||>@mitramir55 There are occasions where you want to delete tokens from the tokenizer and resize the embedding layer accordingly. Just like I stated in issue #15032 , there are tokens such as `[unused363]`. I am figuring out way how to remove the surplus of 500 tokens from the tokenizer. Thank you for your kind explanation though!<|||||>Hi @snoop2head , I'm not sure what you want to do exactly, but I think [this post](https://github.com/huggingface/transformers/issues/1083#issuecomment-524303077) and [this one](https://github.com/huggingface/transformers/issues/4777) can be helpful. Basically, what you need to know is that you cannot change the embedding layer of a model, because this is part of a trained transformer with specific weights and layers. If you want to change the embedding, then you need to train the model. This is because each tokenizer has a `vocab.json` and `merge.txt` file that has been created during the process of training (with byte-level BPE) and if you want to change the tokenizer, you need to modify those. However, with a little search I found [this post ](https://discuss.huggingface.co/t/barttokenizer-with-vocab-json-and-merge-txt-which-were-created-by-bytelevelbpetokenizer-encode-s-into-3-tokens/3393/2 )where the author has changed the files (I think with another model's file). Maybe you can get some help from this.
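A condensed, lightly corrected version of the "ban tokens at prediction time" idea described in this thread is sketched below; the model name and the banned tokens are placeholders, and it only illustrates suppressing unwanted predictions rather than deleting vocabulary entries.

```python
# Condensed sketch of the masking approach from this thread
# (model name and banned tokens are placeholders).
import math
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

banned_ids = tokenizer.convert_tokens_to_ids(["awesome", tokenizer.unk_token])

input_ids = tokenizer.encode(f"This is an {tokenizer.mask_token} library.", return_tensors="pt")
mask_pos = torch.where(input_ids == tokenizer.mask_token_id)[1].item()

with torch.no_grad():
    logits = model(input_ids)[0][0, mask_pos]
logits[banned_ids] = -math.inf                      # banned tokens can never be picked
top_ids = torch.topk(logits, k=5).indices.tolist()
print([tokenizer.decode([i]) for i in top_ids])
```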
transformers
4,826
closed
Fix use of mems in Transformer-XL text generation
In Transformer-XL, when ```mems``` is being used to save computation with the ```generate``` function, the inputs are not properly truncated, so that ```mems``` does not actually speed things up, and also seems to create inaccuracies in the output. I have attempted to fix this by changing Transformer-XL's ```prepare_inputs_for_generation``` function to make it more like that function as used in GPT-2. See the issue at https://github.com/huggingface/transformers/issues/4752 for more details.
06-07-2020 01:44:39
06-07-2020 01:44:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=h1) Report > Merging [#4826](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4826/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4826 +/- ## ======================================= Coverage 74.52% 74.53% ======================================= Files 128 128 Lines 21497 21499 +2 ======================================= + Hits 16021 16024 +3 + Misses 5476 5475 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4826/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.27% <100.00%> (+0.31%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4826/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4826/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.36% <0.00%> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4826/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.27% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=footer). Last update [c58e6c1...07c4126](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hey @tommccoy1, Thanks a lot for your PR! From some initial tests with this change, the results seem good! The bug you pointed out, might also be affecting `xlnet` actually...I will have to take a deeper look into both models to fully understand what's going on with `mems`. But also given the discussion on the original Transfo-XL repo, here: https://github.com/kimiyoung/transformer-xl/issues/49 suggests that you are 100% correct with your PR here. It seems like you are passing all the tests. They will surely be one `RUN_SLOW` test that will fail, but this might also be due to prior incorrect assumptions regarding the `.generate()` function. I will check this PR next week :-) <|||||>Also related: #505<|||||>Hey, I'm taking a look at this atm - as expected by @patrickvonplaten, the slow test fails, but that's probably an issue with the slow test. 
I'll update it and merge the PR.<|||||>Hey, sorry for the delay - we've realized XLNet had a similar issue, and I'm opening up another PR to fix the slow tests to be consistent with this.<|||||>No worries re: the delay - thank you for looking over it & merging it!
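The gist of the change, very roughly sketched below, is to feed only the newest token back into the model once `mems` cache the earlier timesteps, mirroring how GPT-2 handles its `past` cache. This is an illustration of the idea only, not the exact merged diff.

```python
# Rough sketch of the idea behind this fix (not the exact merged code).
def prepare_inputs_for_generation(self, input_ids, past=None, **model_kwargs):
    inputs = {"input_ids": input_ids}
    if past:
        # Earlier positions are already summarised in `mems`, so only the
        # last token needs to be run through the model.
        inputs["input_ids"] = input_ids[:, -1].unsqueeze(-1)
        inputs["mems"] = past
    return inputs
```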
transformers
4,825
closed
Onnx converted model has its output shape modified when compared to original (finetuned) model
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): `mrm8488/distilroberta-base-finetuned-sentiment` from the hub Language I am using the model on (English, Chinese ...): `English` The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) I use the `04-onnx-export.ipynb` Notebook and have only change the model name and the tokenizer: ![image](https://user-images.githubusercontent.com/1029874/83956354-6a799800-a85d-11ea-9b0b-b2febdd8e91a.png) The issue appeared on all finetuned model I tried, being classification or multichoice questions. The tasks I am working on is: * [ ] my own task or dataset: (give details below) * [X] an official GLUE/SQUaD task: classification ## To reproduce Steps to reproduce the behavior: Import AutoTokenizer, AutoModelForSequenceClassification and change tokenizer and model name, the section we are interested into: ```python # ... !rm -rf onnx/ from transformers.convert_graph_to_onnx import convert # Handles all the above steps for you convert(framework="pt", model="mrm8488/distilroberta-base-finetuned-sentiment", output="onnx/bert-base-cased.onnx", opset=11) # ... from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("mrm8488/distilroberta-base-finetuned-sentiment") cpu_model = create_model_for_provider("onnx/bert-base-cased.onnx", "CPUExecutionProvider") # Inputs are provided through numpy array model_inputs = tokenizer.encode_plus("My name is Bert", return_tensors="pt") inputs_onnx = {k: v.cpu().detach().numpy() for k, v in model_inputs.items()} # Run the model (None = get all the outputs) sequence, pooled = cpu_model.run(None, inputs_onnx) # Print information about outputs print(f"Sequence output: {sequence.shape}, Pooled output: {pooled.shape}") pytorch_model = AutoModelForSequenceClassification.from_pretrained("mrm8488/distilroberta-base-finetuned-sentiment") a, = pytorch_model(**model_inputs) print(f"finetune non onnx pytorch model output: {a.shape}") # ... ``` ## Expected behavior I was expecting that the onnx output shape would be the same than the non converted model output shape, but that's not the case: ```text Sequence output: (1, 6, 768), Pooled output: (1, 768) finetune non onnx pytorch model output: torch.Size([1, 6]) ``` It is like the last layer of the model related to the classification task is not taken in onnx. Does it make sense? @mfuntowicz ## Environment info Google Colab with a GPU
06-06-2020 23:31:49
06-06-2020 23:31:49
Facing the same problem with a BERT model fine-tuned on sequence classification and would love to get an answer :) <|||||>This seems to be related to [this issue](https://github.com/huggingface/transformers/issues/4788). As @hrsmanian points it, it seems that in[ convert_graph_to_onnx.py](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py), the model is currently converted by default to a 'feature-extraction' version where the classification layer is discarded. Changing the pipeline type (line 108 of the py file) to 'ner' in @hrsmanian's case seems to have worked. In the case of binary classification, I tried changing the pipeline type to 'sentiment-analysis' (my model is a binary BertForSequenceClassification) but get a ValueError (ValueError: not enough values to unpack (expected 2, got 1)) when trying to run the session. I used simpletransformers (which is based on this repo) to do binary classification with BERT, followed the instructions for conversion and inference from the [blog post](https://medium.com/microsoftazure/accelerate-your-nlp-pipelines-using-hugging-face-transformers-and-onnx-runtime-2443578f4333). Let me know if you see what the problem is @mfuntowicz :) <|||||>Actually, I managed to make it work. The problem was that the session.run output shape changed and so writing: `output, pooled = session.run(None, tokens)` was not working anymore. When only writing `output = session.run(None, tokens)`, it works and I get the classification scores. ![image](https://user-images.githubusercontent.com/29440170/84197042-879ea880-aaa1-11ea-878d-8e344c04d92c.png) Hope that helps :) <|||||>@manueltonneau You're right, we're currently enforcing the `feature-extraction` because not all our pipelines are compatible with ONNX graph representation. I'll have a look asap to identify which pipelines are compatible and which are not, so what we can add the possibility to export other kind of pipeline through the script. <|||||>Tks @manueltonneau , works for me too! Btw you may prefer `output, = ...` to avoid the list :-) @mfuntowicz would it be possible to have a pipeline for the `multichoice` task (and a related onnx converter too if this is onnx compatible)? Not sure why it doesn't exist yet btw as all models I have used support the task.<|||||>It might be possible for pipelines such as **token classification** and **sequence classification** to be exportable out of the box. These pipelines generally just add a projection layer on top of the model followed by a argmax. All of these operators are natively supported by ONNX. For more complex pipeline such as **qa** or **generation**, ONNX might not support all the operators used in the post-processing steps (i.e. _sampling_, _answer span extraction_) and thus would lead to the impossibility to export the model to ONNX. <|||||>This is a very good news! So in theory a multichoice pipeline should work as it s just a projection like classification but with a different shape, am I right? Would it be possible for your team to support this task on the pipeline?<|||||>I have another question, looking at the `convert` function code, the dumb input used to guess the architecture of the model in torch script is: ```python tokens = nlp.tokenizer.encode_plus("This is a sample output", return_tensors=framework) ``` My understanding is that onnx uses torch script and torch script can only guess a fix input length. 
[Doc here](https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths) @mfuntowicz Does that mean that onnx model truncates all inputs to less than 10 tokens? @manueltonneau On your model, does onnx predictions the same than pytorch ones? (for the same input) My model is based on the `multichoice` task and it doesn't work (it compiles but the predictions are wrong). I don't know if it s because some input truncation or just because of the task. <|||||>@pommedeterresautee You're right here about how PyTorch & ONNX interact together. ONNX leverage the tracing provided by PyTorch to construct the ONNX IR. However on the input, it should not truncate anything because `convert_graph_to_onnx.py` exports the inputs with the[ sequence axis being dynamic](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py#L81) ```python # Generate input names & axes input_vars = list(tokens.keys()) input_dynamic_axes = {k: build_shape_dict(v, True, seq_len) for k, v in tokens.items()} ``` You can set a breakpoint on [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py#L97) and see the actual axes being dynamic (input and output). If you find any incoherent behaviour we can dig further to understand why dynamic axes are not correctly exported in your case 👍 <|||||>First, I have tried with a long sequence on classification task and it works (results are the same). Anyway, tks @mfuntowicz for the clear explanation Not a big surprise, the converter doesn't work when the task is `multichoice` and the pipeline used in the converter is "sentiment-analysis" (because the multichoice pipeline doesn't exist). * **What can I do to get the support of a multichoice task pipeline and check if onnx works in this setup?** Code to reproduce ```python import torch from transformers import AutoTokenizer, AutoModelForMultipleChoice from transformers.convert_graph_to_onnx import convert from onnxruntime import InferenceSession, SessionOptions, get_all_providers def create_model_for_provider(model_path: str, provider: str) -> InferenceSession: assert provider in get_all_providers(), f"provider {provider} not found, {get_all_providers()}" # Few properties than might have an impact on performances (provided by MS) options = SessionOptions() options.intra_op_num_threads = 1 # Load the model as a graph and prepare the CPU backend return InferenceSession(model_path, options, providers=[provider]) tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base', use_fast=False) model = AutoModelForMultipleChoice.from_pretrained(pretrained_model_name_or_path="output/xlm-r") device = torch.device(device='cuda') model.to(device) model.eval() convert(framework="pt", model="output/xlm-r", tokenizer='xlm-roberta-base', output="output/onnx/xlm-r.onnx", opset=11) model_onnx = create_model_for_provider("output/onnx/xlm-r.onnx", "CUDAExecutionProvider") inputs = tokenizer.encode_plus("hello les amis, comment allez vous ? 
Moi pas mal", "je vais très bien") torch_inputs = {k: torch.tensor([[v, v]], dtype=torch.long).to(device) for k, v in inputs.items()} output_pytorch = model(**torch_inputs) inputs_onnx = {k: v.cpu().detach().numpy() for k, v in torch_inputs.items()} sequence, = model_onnx.run(None, inputs_onnx) ``` It crashes with: ```python Traceback (most recent call last): File "/home/geantvert/.local/share/virtualenvs/***/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-11-f614fb04d5d2>", line 7, in <module> sequence, = model_onnx.run(None, inputs_onnx) File "/home/geantvert/.local/share/virtualenvs/***/lib/python3.8/site-packages/onnxruntime/capi/session.py", line 111, in run return self._sess.run(output_names, input_feed, run_options) onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: input_ids Got: 3 Expected: 2 Please fix either the inputs or the model. ``` <|||||>>@manueltonneau On your model, does onnx predictions the same than pytorch ones? (for the same input) Sorry for the late reply @pommedeterresautee. I did three tests and the predictions are almost exactly the same for all three. <|||||>Now pipeline option can be provided via arguments to [`convert_graph_to_onnx.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py#L32) using `--pipeline` argument: Valid Options are: ``` SUPPORTED_PIPELINES = [ "feature-extraction", "ner", "sentiment-analysis", "fill-mask", "question-answering", "text-generation", "translation_en_to_fr", "translation_en_to_de", "translation_en_to_ro", ] ```<|||||>Hi, is there any support for sequence classfication on sentence pairs?<|||||>sentence pairs are managed by the tokenizer, at the end it's just a sequence of tokens... so classic sequence classification pipeline works out of the box<|||||>> sentence pairs are managed by the tokenizer, at the end it's just a sequence of tokens... so classic sequence classification pipeline works out of the box Works well. Thanks<|||||>I am facing issue while solving for multi-class classification problem.<|||||>My problem also got solved when I used the pipeline 'sentiment-analysis'
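For readers landing here, a small sketch of running a classification export (i.e. one produced with the later `--pipeline sentiment-analysis` option mentioned above) is shown below. The `.onnx` path is a placeholder, and the single graph output is the logits tensor, so there is no `pooled` output to unpack.

```python
# Sketch of running an ONNX export that kept the classification head.
# The .onnx path is a placeholder.
import numpy as np
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/distilroberta-base-finetuned-sentiment")
session = InferenceSession("onnx/classifier.onnx", providers=["CPUExecutionProvider"])

tokens = tokenizer.encode_plus("I really enjoyed this movie", return_tensors="pt")
onnx_inputs = {name: tensor.cpu().detach().numpy() for name, tensor in tokens.items()}

logits, = session.run(None, onnx_inputs)            # a single output: the class logits
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(probs.argmax(axis=-1), probs)
```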
transformers
4,824
closed
Top-k sampling and top-p sampling for generating phrases on batches with GPT-2?
How can I generate in batches with GPT-2 if I want to make use of these awesome sampling techniques: [top-k sampling and top-p sampling](https://huggingface.co/blog/how-to-generate)? There is already an implementation for generating phrases in batches in issue [#3021](https://github.com/huggingface/transformers/issues/3021). Any advice? Thanks! @patrickvonplaten
06-06-2020 23:07:49
06-06-2020 23:07:49
It's on our ToDo-List :-) Currently batch generation with GPT2 is not possible, so you will have to rely on the code in https://github.com/huggingface/transformers/issues/3021<|||||>Can top-k and top-p sampling be implemented in batches?<|||||>Sure, the provided `top-k-top-p sampling` function supports that :-)
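Until batch support lands in `generate()`, the filtering step itself is easy to apply to a whole batch of next-token logits. The sketch below is a self-contained illustration of the technique (it is not the library's implementation), with the vocabulary size and thresholds chosen arbitrarily.

```python
# Self-contained sketch: top-k / top-p (nucleus) filtering over a batch of
# next-token logits, followed by one sampled token per row.
import torch
import torch.nn.functional as F

def filter_logits(logits, top_k=50, top_p=0.95):
    # logits: (batch_size, vocab_size)
    if top_k > 0:
        kth_best = torch.topk(logits, top_k, dim=-1).values[..., -1, None]
        logits = logits.masked_fill(logits < kth_best, float("-inf"))
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True, dim=-1)
        cumprobs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
        remove = cumprobs > top_p
        remove[..., 1:] = remove[..., :-1].clone()   # always keep the best token
        remove[..., 0] = False
        remove = remove.scatter(-1, sorted_idx, remove)
        logits = logits.masked_fill(remove, float("-inf"))
    return logits

logits = torch.randn(4, 100)                         # fake batch of next-token logits
probs = F.softmax(filter_logits(logits), dim=-1)
next_tokens = torch.multinomial(probs, num_samples=1)  # one sample per batch row
```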
transformers
4,823
closed
Discriminative fine-tuning for new (added) words
Good afternoon, I have a question about changing the learning rate for only some parameters of the model. Let's say we have a BERT model and we have added a few new tokens. Consequently, we need to resize the embedding layer and initialize the embeddings for the new words randomly. Meanwhile, a learning rate of 3e-5 is used to train the model (otherwise we "overjump" the global minimum). If we use this LR for the embeddings of the new words as well, they will hardly change and thus will stay close to random, so the reasonable approach is to change the learning rate only for their embeddings (as is done, for instance, in ULMFiT). The question is: is there a simple way to do this in HuggingFace? Or are there perhaps some examples of doing it? Thanks in advance!
06-06-2020 23:00:42
06-06-2020 23:00:42
Finally found the solution. The optimizer should look like this: ``` optim.SGD([ {'params': model.base.parameters()}, {'params': model.classifier.parameters(), 'lr': 1e-3} ], lr=1e-2, momentum=0.9)) ```<|||||>However, this approach does not work when I need to specify another LR not for the whole layer, but only for a few weights from it. Hence, the issue is still open. @LysandreJik @patrickvonplaten will be very grateful if you could help. Should a bit specify the question, I need to fine-tune the model (as proposed [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)), specifying special learning rate for only the new word embeddings (_ideally_) or at least for the whole embedding matrix.<|||||>Hi @Aktsvigun, Thanks for your detailed question! I'm not super familiar with these kind of specifics for training, but I'm not sure that this is even possible in PyTorch. Also, did you try to fine-tune the model normally as well without setting a specific learning rate to only one parameter of a layer? I would expect normal fine-tuning to also work quite well since the gradient (independently of the lr) for the newly added weight will be quite high and thus change significantly even for the same learning rate for all parameters.<|||||>@patrickvonplaten thank you for the answer! I did try with a simple LR but what urged me to the question is the difference in the results of a language model, fine-tuned without adding new words and the one with changed vocab. I took a small dataset (_~ 25 000 sentences_) and fine-tuned both models with `lr = 3e-5` (pretty standard as I know) and `num_epochs = 10`. The model without new words had an eval loss (on the validation sample) of **2.55**, while the loss for the one with 250 new words (not much really when the vocab size equals 50250) equaled **2.98**. The words are not so popular among the dataset itself (only 13 of them belong to top-250 most frequent words), therefore such a great difference can be caused only by underfitting in my view.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
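One possible workaround for the "different learning rate for only the new embedding rows" question, not proposed in the thread itself, is to keep a single optimizer learning rate and rescale the gradient of the new rows with a tensor hook. The vocabulary sizes and the scale factor below are illustrative assumptions, and the trick is most meaningful with plain SGD, since adaptive optimizers renormalize gradients.

```python
# Hypothetical workaround (not from the thread): rescale the gradient of the
# newly added embedding rows so they effectively train faster.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

old_vocab_size = len(tokenizer)
tokenizer.add_tokens(["newword1", "newword2"])
model.resize_token_embeddings(len(tokenizer))

embedding_weight = model.get_input_embeddings().weight

def rescale_new_rows(grad):
    grad = grad.clone()
    grad[old_vocab_size:] *= 20.0        # ~20x larger effective LR for the new rows
    return grad

embedding_weight.register_hook(rescale_new_rows)
optimizer = torch.optim.SGD(model.parameters(), lr=3e-5, momentum=0.9)
```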
transformers
4,822
closed
EncoderDecoderModel forward passes return different values every time.
# 🐛 Bug in EncoderDecoderModel I am using EncoderDecoderModel and have tested the sample code from its documentation page. ``` import torch from transformers import EncoderDecoderModel, BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = EncoderDecoderModel.from_encoder_decoder_pretrained( 'bert-base-uncased', 'bert-base-uncased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) output = model(input_ids=input_ids, decoder_input_ids=input_ids)[0] ``` but every time I run this code I get different values for the output! I have also tried model.eval(), but that didn't help either.
06-06-2020 22:12:34
06-06-2020 22:12:34
Could you please provide the whole code you use? Your structure works ideally for me, this code outputs the same values: ``` from transformers import EncoderDecoderModel, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = EncoderDecoderModel.from_encoder_decoder_pretrained( 'bert-base-uncased', 'bert-base-uncased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) outputs = [] for _ in range(5): result = model(input_ids=input_ids, decoder_input_ids=input_ids)[0] outputs.append(result) outputs ``` <|||||>I see, on each step you initialize your EncoderDecoder model. AFAIU the difference is caused by a randomly initialized layers for decoder in this architecture. You can check it with this code: ``` params1, params2, models = [], [], [] for _ in range(2): tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = EncoderDecoderModel.from_encoder_decoder_pretrained( 'bert-base-uncased', 'bert-base-uncased') models.append(model) pars = models[0].decoder.bert.encoder.parameters() for _ in range(1000): try: params1.append(next(pars)) except: break pars = models[1].decoder.bert.encoder.parameters() for _ in range(1000): try: params2.append(next(pars)) except: break [torch.all(params1[i] == params2[i]).item() for i in range(len(params1))] ```<|||||>Thanks for answering @Aktsvigun ! Yes, in the encoder decoder framework, when you instantiate an encoder-decodel using two pretrained BERT models the cross attention layer weights are added and randomly initialized. This is an expected behavior. When you set your log level to INFO you will receive a notification about this as well :-)
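Since the randomness comes from the freshly initialized cross-attention weights, a simple way to make runs comparable, sketched below under the assumption that reproducibility is all that is needed, is to fix the seed or to save the instantiated model once and reload it.

```python
# Sketch: fix the seed (or save the instantiated model once) so the randomly
# initialized cross-attention weights are the same across runs.
import torch
from transformers import EncoderDecoderModel, BertTokenizer

torch.manual_seed(0)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
model.eval()

model.save_pretrained("my_encoder_decoder")        # reuse the exact same weights later
model = EncoderDecoderModel.from_pretrained("my_encoder_decoder")
```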
transformers
4,821
closed
Enable multiprocessing in glue datasets
The preprocessing of GLUE datasets is too slow. This change enables multiprocessing to speed up the conversion of examples to features by utilizing multiple CPU cores.
06-06-2020 21:16:43
06-06-2020 21:16:43
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=h1) Report > Merging [#4821](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.54%`. > The diff coverage is `88.88%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4821/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4821 +/- ## ========================================== + Coverage 74.52% 76.07% +1.54% ========================================== Files 128 128 Lines 21497 21505 +8 ========================================== + Hits 16021 16360 +339 + Misses 5476 5145 -331 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.48% <88.88%> (+0.12%)` | :arrow_up: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `40.95% <0.00%> (-8.49%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (+0.18%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (+6.29%)` | :arrow_up: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (+75.48%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=footer). Last update [c58e6c1...ef63cb8](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
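The general idea behind this change can be illustrated with a generic chunk-and-pool sketch like the one below; it is not the actual diff, and `convert_chunk` and the chunk size are made up for illustration.

```python
# Generic illustration of the idea (not the actual diff): convert chunks of
# examples to features on several CPU cores. `convert_chunk` is hypothetical.
from multiprocessing import Pool, cpu_count
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def convert_chunk(texts):
    return [tokenizer.encode_plus(t, max_length=128, truncation=True) for t in texts]

def convert_examples(texts, workers=None, chunk_size=1000):
    workers = workers or cpu_count()
    chunks = [texts[i:i + chunk_size] for i in range(0, len(texts), chunk_size)]
    with Pool(workers) as pool:
        return [feature for part in pool.map(convert_chunk, chunks) for feature in part]

if __name__ == "__main__":
    features = convert_examples(["sentence %d" % i for i in range(10000)])
```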
transformers
4,820
closed
Updates args in tf squad example.
Updates example for execution of `run-tf-squad.py` due to changes in https://github.com/huggingface/transformers/pull/4530, particularly removal of `mode` and `optimizer_name`.
06-06-2020 20:22:07
06-06-2020 20:22:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=h1) Report > Merging [#4820](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.63%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4820/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4820 +/- ## ========================================== + Coverage 74.52% 76.15% +1.63% ========================================== Files 128 128 Lines 21497 21497 ========================================== + Hits 16021 16372 +351 + Misses 5476 5125 -351 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (+0.18%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (+6.29%)` | :arrow_up: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (+75.48%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=footer). Last update [c58e6c1...e7a60ca](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM, thanks! (cc'ing @jplu)
transformers
4,819
closed
Export PretrainedBartModel from __init__
`PretrainedBartModel` is currently not being exported, so one has to do this manually: ```python from transformers.modeling_bart import PretrainedBartModel ``` This behaviour differs from the other models, which do expose their PretrainedModel class in `__init__`.
06-06-2020 18:53:58
06-06-2020 18:53:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=h1) Report > Merging [#4819](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.62%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4819/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4819 +/- ## ========================================== + Coverage 74.52% 76.15% +1.62% ========================================== Files 128 128 Lines 21497 21497 ========================================== + Hits 16021 16371 +350 + Misses 5476 5126 -350 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (+0.18%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (+6.29%)` | :arrow_up: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (+75.48%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=footer). Last update [c58e6c1...98b2dde](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,818
closed
Enable multiprocessing in glue datasets
The preprocessing of GLUE datasets is too slow. This change enables multiprocessing to speed up the conversion of examples to features by utilizing multiple CPU cores.
06-06-2020 18:37:05
06-06-2020 18:37:05
transformers
4,817
closed
Question: Where do I find the Transformer model from the paper "Attention is all you need"?
Hello, Firstly, thanks for supporting all questions here. I read the paper "Attention is all you need" and am wondering which class I should use in the HuggingFace library to get the Transformer architecture described in the paper. Can you please advise? Thanks, Abhishek
06-06-2020 10:34:56
06-06-2020 10:34:56
You don't need this library if you only want the transformer module specifically. PyTorch: https://pytorch.org/docs/master/generated/torch.nn.Transformer.html TensorFlow: https://www.tensorflow.org/tutorials/text/transformer#create_the_transformer<|||||>Thanks @BramVanroy
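For completeness, a tiny sketch of the vanilla module linked in the answer; shapes follow the default (seq_len, batch, d_model) convention of `torch.nn.Transformer`.

```python
# Tiny sketch of the vanilla "Attention Is All You Need" module.
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6)
src = torch.rand(10, 32, 512)   # (source length, batch, d_model)
tgt = torch.rand(20, 32, 512)   # (target length, batch, d_model)
out = model(src, tgt)           # (20, 32, 512)
```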
transformers
4,816
closed
NER pipeline: Inconsistent entity grouping
# 🐛 Bug ## Information "mrm8488/bert-spanish-cased-finetuned-ner" Language I am using the model on (English, Chinese ...): Spanish The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. create a `ner` pipeline 2. pass flag `grouped_entities` 3. entities are not grouped as expected see sample below ```python NER_MODEL = "mrm8488/bert-spanish-cased-finetuned-ner" nlp_ner = pipeline("ner", model=NER_MODEL, grouped_entities=True, tokenizer=(NER_MODEL, {"use_fast": False})) t = """Consuelo Araújo Noguera, ministra de cultura del presidente Andrés Pastrana (1998.2002) fue asesinada por las Farc luego de haber permanecido secuestrada por algunos meses.""" ner(t) >>> [ {'entity_group': 'B-PER', 'score': 0.901019960641861, 'word': 'Consuelo'}, {'entity_group': 'I-PER', 'score': 0.9990904808044434, 'word': 'Araújo Noguera'}, {'entity_group': 'B-PER', 'score': 0.9998136162757874, 'word': 'Andrés'}, {'entity_group': 'I-PER', 'score': 0.9996985991795858, 'word': 'Pastrana'}, {'entity_group': 'B-ORG', 'score': 0.9989739060401917, 'word': 'Far'}] ``` ## Expected behavior ### Inconsistent grouping I expect the first two items of the given sample( `B-PER`, and `I-PER`) to be grouped. As they are contiguous tokens and correspond to a single entity spot. It seems the current code does not take into account `B` and `I` tokens. expected output: ``` {'entity_group': 'I-PER', 'score': 0.9990904808044434, 'word': ' Consuelo Araújo Noguera'}, {'entity_group': 'I-PER', 'score': 0.9998136162757874, 'word': 'Andrés Pastrana'}, {'entity_group': 'B-ORG', 'score': 0.9989739060401917, 'word': 'Farc'}] ``` ### Lost tokens? for the same input, passing `grouped_entities=False` generates the following output: ``` [ {'word': 'Cons', 'score': 0.9994944930076599, 'entity': 'B-PER', 'index': 1}, {'word': '##uelo', 'score': 0.802545428276062, 'entity': 'B-PER', 'index': 2}, {'word': 'Ara', 'score': 0.9993102550506592, 'entity': 'I-PER', 'index': 3}, {'word': '##új', 'score': 0.9993743896484375, 'entity': 'I-PER', 'index': 4}, {'word': '##o', 'score': 0.9992871880531311, 'entity': 'I-PER', 'index': 5}, {'word': 'No', 'score': 0.9993029236793518, 'entity': 'I-PER', 'index': 6}, {'word': '##guera', 'score': 0.9981776475906372, 'entity': 'I-PER', 'index': 7}, {'word': 'Andrés', 'score': 0.9998136162757874, 'entity': 'B-PER', 'index': 15}, {'word': 'Pas', 'score': 0.999740719795227, 'entity': 'I-PER', 'index': 16}, {'word': '##tran', 'score': 0.9997414350509644, 'entity': 'I-PER', 'index': 17}, {'word': '##a', 'score': 0.9996136426925659, 'entity': 'I-PER', 'index': 18}, {'word': 'Far', 'score': 0.9989739060401917, 'entity': 'B-ORG', 'index': 28}, {'word': '##c', 'score': 0.7188423275947571, 'entity': 'I-ORG', 'index': 29}] ``` when using `grouped_entities` the last entity `word` (`##c`) got lost, it is not even considered as a different group ` {'entity_group': 'B-ORG', 'score': 0.9989739060401917, 'word': 'Far'}]` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.11.0 - Platform: OSX - Python version: 3.7 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
06-06-2020 08:57:29
06-06-2020 08:57:29
@dav009 Thanks for posting this issue! 1. **Inconsistent grouping** - correct that `B` and `I` tokens are not yet considered. Will have to include this in a new PR. 2. **Lost tokens** - the skipped tokens are those with an entity type found in the `ignore_labels` argument for `TokenClassificationPipeline`, which is set as `["O"]` by default. If you don't want to skip any token, you can just set `ignore_labels=[]`. I'm happy to work on `1` within the next week or so since I've already been planning to apply this fix. <|||||>@enzoampil 👋 thanks for your prompt answer > Lost tokens - the skipped tokens are those with an entity type found in the ignore_labels argument for TokenClassificationPipeline, which is set as ["O"] by default. If you don't want to skip any token, you can just set ignore_labels=[]. in the given sample, the missing entity is not tagged as `O` : - `##c` is tagged as `I-ORG` in (`grouped_entities =False`) `{'word': '##c', 'score': 0.7188423275947571, 'entity': 'I-ORG', 'index': 29}]` however it did not get included in the grouping results (`grouped_entities =True`)<|||||>@dav009 Understand now! Thanks for clarifying. Yes, it does seem to be related to the I and B issue. Think can handle this in the same PR.<|||||>@dav009 I handled a similar scenario for grouping the Begin and Info tags. The below code helps to **merge the tokens between Begin and Info tags**. Please adapt to your use `def group_entities(self, prediction_results_list: List[Dict] ) -> List[RecordDataResponse]: final_prediction_list = [] # Group the prediction list by the last 3 characters of the tag # and group the results appropriately # B-PER-TAG -> TAG # B-PER -> PER tmp_dict = defaultdict(list) added_index = 0 prev_index = 0 for index, entity in enumerate(prediction_results_list): try: if entity['entity_group'].startswith("B") and \ prediction_results_list[index + 1]['entity_group'].startswith("I"): tmp_dict[index].append(entity) added_index = index elif entity['entity_group'].startswith("I"): if (1 == abs(added_index - index)) or (1 == abs(prev_index - index)): tmp_dict[added_index].append(entity) prev_index = index else: tmp_dict[index].append(entity) except IndexError: tmp_dict[index].append(entity) # Flatten the sub-lists final_grouped_list = list(map(list, map(itertools.chain, tmp_dict.values()))) for entity_group_list in final_grouped_list: # Get the unique number of entities per list _entity_count = len( set( [ prediction_input["entity_group"] for prediction_input in entity_group_list ] ) ) if entity_group_list: if len(entity_group_list) > 1: # Get the tag name tag_name = str(entity_group_list[0]["entity_group"][-3:]) # Join the entities entity_value = " ".join( [ prediction_input["word"] for prediction_input in entity_group_list ] ) # Remove duplicate names _temp_entities = entity_value.split() entity_value = " ".join( sorted(set(_temp_entities), key=_temp_entities.index) ) # Compute the average of confidence scores mean_score = np.mean( [ prediction_input["score"] for prediction_input in entity_group_list ] ) # Frame the entities and ensure name is atleast has more than 1 character if len(entity_value) > 1: final_prediction_list.append( RecordDataResponse( entity_group=tag_name, score=mean_score, word=entity_value, ) ) else: [ final_prediction_list.append( RecordDataResponse( entity_group=entity_group["entity_group"][-3:], score=entity_group["score"], word=entity_group["word"], ) ) for entity_group in entity_group_list if len(re.sub(r"(?i)[^-0-9a-z\\s.,]+", "", entity_group["word"])) > 1 ] # Sort 
the by the list by confidence score and return in descending order return sorted(final_prediction_list, key=lambda x: x.score, reverse=True) ` The code is invoked from the **pipeline**: `prediction_results_list = [ prediction for prediction_input in prediction_input_list for prediction in self.model_prediction_pipeline(prediction_input) if prediction and prediction["word"] not in self.stop_list ] # Return the predictions return ( self.group_entities(prediction_results_list) if prediction_results_list else [] )`<|||||>@dav009 Opened a PR (above) that should resolve this :smile:<|||||>@enzoampil Just curious to know if your PR can handle the merging of multiple entities. `entities_list = [ {"word": "Patient", "score": 0.9977793097496033, "entity_group": "B-PER-TAG"}, {"word": "Name", "score": 0.9968074560165405, "entity_group": "I-PER-TAG"}, {"word": "Cecil", "score": 0.9995920658111572, "entity_group": "B-PER"}, {"word": "D . Thomas", "score": 0.9938908666372299, "entity_group": "I-PER"}, {"word": "Thomas", "score": 0.9993066191673279, "entity_group": "B-PER"} ]` In this case, I would expect the below output after the entities are grouped: `[ {"word": "Patient Name", "score": 0.9977793097496033, "entity_group": "PER-TAG"}, {"word": "Cecil D . Thomas", "score": 0.9995920658111572, "entity_group": "PER"}, {"word": "Thomas", "score": 0.9993066191673279, "entity_group": "PER"} ]`<|||||>@enzoampil gonna check it out. maybe part of another issue but do you get `word` fields containing `##` is that expected?<|||||>@sudharsan2020 Setting `grouped_entities=True` should work for your example under the new PR, since similar entities w/ different prefixes are now grouped (e.g. "I-PER" and "B-PER") :smile:<|||||>@dav009 This is even after grouping correct? I suspect this is possible when word pieces have different core entity types (e.g. `ORG` vs `PER`). Can you give an example?<|||||>Hello, I think @dav009 is refering to this : ```Python from transformers import AutoModelForTokenClassification, AutoTokenizer import torch from transformers import TokenClassificationPipeline model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") nlp = TokenClassificationPipeline( model=model, tokenizer=tokenizer, grouped_entities=True ) sequence = "In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification ." res = nlp(sequence) print(res) ``` I have this as a result : `[{'entity_group': 'I-ORG', 'score': 0.9988919496536255, 'word': 'Blabla Group'}, {'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'}, {'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'}, {'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'}, {'entity_group': 'I-MISC', 'score': 0.5067382454872131, 'word': '##9'}]` Some word fields still have ## in it. I have just installed transformers right now (version 2.11.0) with a pip install command then paste the pipelines.py fixed in my transformers folder. 
<|||||>@Nighthyst can you share the result when `grouped_entities=False`?<|||||>Yes, here is a comparison of the resultats with `grouped_entities=False` or when `grouped_entities=True` : ```Python from transformers import AutoModelForTokenClassification, AutoTokenizer import torch from transformers import TokenClassificationPipeline model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") nlp_not_grouped = TokenClassificationPipeline( model=model, tokenizer=tokenizer, grouped_entities=False ) nlp_grouped = TokenClassificationPipeline( model=model, tokenizer=tokenizer, grouped_entities=True ) seq1 = "In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification ." seq2 = "Directors and certain categories of personnel , who are all included in a regularly updated list"\ ", must disclose any trades they carry out in Faurecia" seq3 = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \ "close to the Manhattan Bridge." sequences = [seq1, seq2, seq3] for i, seq in enumerate(sequences): ngrouped, grouped = nlp_not_grouped(seq), nlp_grouped(seq) print(f"===================== sentence n°{i+1}") print("---Not grouped entities---") print(ngrouped) print("---Grouped entities---") print(grouped) ``` This is the results: ``` ===================== sentence n°1 ---Not grouped entities--- [{'word': 'B', 'score': 0.9997261762619019, 'entity': 'I-ORG', 'index': 5}, {'word': '##la', 'score': 0.997683048248291, 'entity': 'I-ORG', 'index': 6}, {'word': '##bla', 'score': 0.99888014793396, 'entity': 'I-ORG', 'index': 7}, {'word': 'Group', 'score': 0.9992784261703491, 'entity': 'I-ORG', 'index': 8}, {'word': 'ISO', 'score': 0.9711909890174866, 'entity': 'I-MISC', 'index': 14}, {'word': 'T', 'score': 0.6591967344284058, 'entity': 'I-ORG', 'index': 16}, {'word': '##S', 'score': 0.658642053604126, 'entity': 'I-MISC', 'index': 17}, {'word': '##16', 'score': 0.5059574842453003, 'entity': 'I-MISC', 'index': 18}, {'word': '##9', 'score': 0.5067382454872131, 'entity': 'I-MISC', 'index': 21}] ---Grouped entities--- [{'entity_group': 'I-ORG', 'score': 0.9988919496536255, 'word': 'Blabla Group'}, {'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'}, {'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'}, {'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'}, {'entity_group': 'I-MISC', 'score': 0.5067382454872131, 'word': '##9'}] ===================== sentence n°2 ---Not grouped entities--- [{'word': 'F', 'score': 0.6292181611061096, 'entity': 'I-ORG', 'index': 27}, {'word': '##au', 'score': 0.7241453528404236, 'entity': 'I-LOC', 'index': 28}, {'word': '##re', 'score': 0.49484530091285706, 'entity': 'I-LOC', 'index': 29}, {'word': '##cia', 'score': 0.6472106575965881, 'entity': 'I-LOC', 'index': 30}] ---Grouped entities--- [{'entity_group': 'I-ORG', 'score': 0.6292181611061096, 'word': 'F'}, {'entity_group': 'I-LOC', 'score': 0.6220671037832896, 'word': '##aurecia'}] ===================== sentence n°3 ---Not grouped entities--- [{'word': 'Hu', 'score': 0.9995108246803284, 'entity': 'I-ORG', 'index': 1}, {'word': '##gging', 'score': 0.989597499370575, 'entity': 'I-ORG', 'index': 2}, {'word': 'Face', 'score': 0.9979704022407532, 'entity': 'I-ORG', 'index': 3}, {'word': 'Inc', 'score': 0.9993758797645569, 'entity': 'I-ORG', 'index': 4}, {'word': 'New', 'score': 0.9993405938148499, 
'entity': 'I-LOC', 'index': 11}, {'word': 'York', 'score': 0.9991927742958069, 'entity': 'I-LOC', 'index': 12}, {'word': 'City', 'score': 0.9993411302566528, 'entity': 'I-LOC', 'index': 13}, {'word': 'D', 'score': 0.986336350440979, 'entity': 'I-LOC', 'index': 19}, {'word': '##UM', 'score': 0.9396238923072815, 'entity': 'I-LOC', 'index': 20}, {'word': '##BO', 'score': 0.9121386408805847, 'entity': 'I-LOC', 'index': 21}, {'word': 'Manhattan', 'score': 0.9839190244674683, 'entity': 'I-LOC', 'index': 29}, {'word': 'Bridge', 'score': 0.9924242496490479, 'entity': 'I-LOC', 'index': 30}] ---Grouped entities--- [{'entity_group': 'I-ORG', 'score': 0.9966136515140533, 'word': 'Hugging Face Inc'}, {'entity_group': 'I-LOC', 'score': 0.9992914994557699, 'word': 'New York City'}, {'entity_group': 'I-LOC', 'score': 0.9460329612096151, 'word': 'DUMBO'}, {'entity_group': 'I-LOC', 'score': 0.9881716370582581, 'word': 'Manhattan Bridge'}] ``` Everything is fine for seq3 but seq1 and seq2 have the issue. <|||||>@Nighthyst I see, you're bringing up a different issue now. This is the case where the entity type of a word's word piece, is different from other word pieces. A fix I can apply here is to automatically group word pieces together regardless of entity type. I can apply this to a new PR after merging this existing one.<|||||>@Nighthyst @enzoampil indeed that's exactly the other issue I came accross. Thanks for digging a sample for it.<|||||>Ok, I think we should open another issue for this problem : I've noticed other related issues<|||||>@Nighthyst sounds good, thanks! :) <|||||>@enzoampil I was testing with your **ner_grouping** branch locally and these are the results **before** and **after grouping**. Do you think this is the expected behaviour? **Without grouping:** `[{'word': 'Peterson', 'score': 0.999268114566803, 'entity': 'B-PER', 'index': 17}, {'word': ',', 'score': 0.9992983937263489, 'entity': 'I-PER', 'index': 18}, {'word': '##David', 'score': 0.6536518931388855, 'entity': 'I-PER', 'index': 21}, {'word': 'David', 'score': 0.974104642868042, 'entity': 'B-PER', 'index': 37}, {'word': 'Peterson', 'score': 0.9984731078147888, 'entity': 'B-PER', 'index': 106}, {'word': 'David', 'score': 0.74308180809021, 'entity': 'B-PER', 'index': 393}, {'word': 'Peterson', 'score': 0.9972764253616333, 'entity': 'B-PER', 'index': 394}]` **With grouping:** `[{'entity_group': 'B-PER', 'score': 0.9992832541465759, 'word': 'Peterson ,'}, {'entity_group': 'I-PER', 'score': 0.6536518931388855, 'word': '##David'}, {'entity_group': 'B-PER', 'score': 0.974104642868042, 'word': 'David'}, {'entity_group': 'B-PER', 'score': 0.9984731078147888, 'word': 'Peterson'}, {'entity_group': 'B-PER', 'score': 0.8701791167259216, 'word': 'David Peterson'}]` The **two I-PER entities weren't merged.** Also observed few scenarios, in which the list **filtered_labels_idx** is **empty** which throws **IndexError**. **src/transformers/pipelines.py** `last_idx, _ = filtered_labels_idx[-1]` Screenshot: https://ibb.co/JHxYgWn<|||||>Hi everyone, this PR was recently merged to resolve the original issue #4987.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
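To make the discussed grouping behaviour concrete, here is a deliberately simplified sketch of folding consecutive B-/I- predictions of the same base type into one entity span; it is not the implementation that was eventually merged.

```python
# Simplified illustration of B-/I- aware entity grouping (not the merged code).
def group_entities(predictions):
    groups, current = [], None
    for pred in predictions:
        base = pred["entity"].split("-")[-1]            # "B-PER" / "I-PER" -> "PER"
        starts_new = (
            pred["entity"].startswith("B")
            or current is None
            or current["entity_group"] != base
        )
        if starts_new:
            if current:
                groups.append(current)
            current = {"entity_group": base, "words": [pred["word"]], "scores": [pred["score"]]}
        else:
            current["words"].append(pred["word"])
            current["scores"].append(pred["score"])
    if current:
        groups.append(current)
    return [
        {
            "entity_group": g["entity_group"],
            "word": " ".join(g["words"]).replace(" ##", ""),   # merge word pieces
            "score": sum(g["scores"]) / len(g["scores"]),
        }
        for g in groups
    ]
```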
transformers
4,815
closed
[marian tests] pass device to pipeline
fixes self-hosted-runner failure
06-06-2020 03:18:34
06-06-2020 03:18:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=h1) Report > Merging [#4815](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56d5d160cdd177ae6e644506535b56e79feccf68&el=desc) will **decrease** coverage by `1.60%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4815/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4815 +/- ## ========================================== - Coverage 76.15% 74.54% -1.61% ========================================== Files 128 128 Lines 21497 21497 ========================================== - Hits 16371 16026 -345 - Misses 5126 5471 +345 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `17.54% <0.00%> (-75.49%)` | :arrow_down: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.96% <0.00%> (-6.30%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.23% <0.00%> (-0.19%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.35% <0.00%> (+1.35%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=footer). Last update [56d5d16...7ab0469](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
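For context, the `device` argument this PR passes through looks like the following in user code; the model name is just an example.

```python
# Illustration of passing a device index to a pipeline (-1 = CPU, 0 = first GPU).
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1
translator = pipeline("translation_en_to_de", model="t5-small", device=device)
print(translator("Machine learning is great.", max_length=40))
```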
transformers
4,814
closed
TPU Training fails with --evaluate_during_training
# 🐛 Bug TPU Trainer does not seem to support `--evaluate_during_training`. When the training loop goes into logging part, the whole process just hangs up stalling training. The same code/dataset with a multi-gpu setup works well. I am trying to move my company to Huggingface so want to train models on TPUs on our dataset which hung during the logging step. I was able to replicate the behavior with run_langugage_modelling.py, and the steps to replicate this are shown below. Other observations are - I felt that multiprocessing way of doing TPU training wastes a lot of CPU memory because with large datasets one has to use a machine with 100s of GBs of RAM because the features are being replicated 8 times in memory. Another bug is that with TPU training there are 8 WandB runs generated and it creates a lot of clutter. Suggestions to fix this would be to only do wandb logging from a single process. If its unavoidable to generate 8 wandb runs, tag all the runs to belong to a single 'group' that leads to better organization of the runs. (https://docs.wandb.com/library/advanced/grouping) ## Information Model I am using (Bert, XLNet ...): Roberta with run_language_modelling.py to replicate, T5 with our internal data. Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a new n1-highmem-32 machine with debian-9-torch-xla OS image in us-central1-c zone 2. `conda activate torch-xla-nightly` and start a v2-8 TPU in us-central1-c zone. Set the TPU env vars 3. Use the master branch of transformers 4. Download Wikitext 103 raw char level data from https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip (according to examples for run_language_modelling). Extract it 5. Run the example script ``` export TRAIN_FILE=/path/to/dataset/wiki.train.raw export TEST_FILE=/path/to/dataset/wiki.test.raw python xla_spawn.py --num_cores 8 language_modelling/run_language_modeling.py \ --output_dir=output \ --model_type=roberta \ --model_name_or_path=roberta-base \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm --evaluate_during_training --per_device_train_batch_size=4 --per_device_eval_batch_size=4 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> When it hangs, the tqdm counter is stuck at step 499 (with 500 as the logging interval) and nothing happens. When I do a Keyboard Interrupt, I get this stack trace. 
``` main() File "../../../vendor/transformers/examples/xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 296, in spawn start_method=start_method) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes while not context.join(): File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 78, in join timeout=timeout, File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/multiprocessing/connection.py", line 911, in wait ready = selector.select(timeout) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/selectors.py", line 376, in select fd_event_list = self._poll.poll(timeout) KeyboardInterrupt ``` ## Expected behavior Being able to log validation set loss during training <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> `transformers` version: 2.11.0 - Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12 - Python version: 3.6.10 - PyTorch version (GPU?): 1.6.0a0+03eca38 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: Yes, 8 core parallelism with xla_spawn.py
06-06-2020 02:06:40
06-06-2020 02:06:40
Had the same problem with TPU. `--logging_step` seems to freeze everything. I have removed logging and then evaluate it after training. <|||||>Hi, I fail to reproduce this on `master` following your steps. Can you try pulling from master and letting me know if the issue is resolved? If it's not I'll take a deeper look. You can set `--logging_steps=10` so that to reduce the time it takes to get to the hang. I can, however, reproduce the issue with wandb. I'm looking into it now.<|||||>Interesting, I retried the same instruction from master with --logging_steps as 50 and it did evaluate the first time but then it again got stuck at the second evaluation attempt at step 99. Something is flaky and not right... Also now that I got at least one step of evaluation working, I notice that it prints 8 different eval_loss values, one for each process. Not sure how to interpret this. I haven't looked into the logic but looks like the evaluator also splits the eval_data into 8 parts and calculates the eval_loss on them individually without aggregating them into a single final eval_loss for the whole eval dataset. This defeats the purpose of evaluating during training.<|||||>Indeed, something's not right. I'm taking a look.<|||||>This was working well on 26-27th May. I tried going back to that commit but same error. Maybe something with XLA?<|||||>I don't really know, now for some reason it decides to not hang, while it did hang the first time this morning. Even with a clean environment, it doesn't hang anymore on my side. I'm still investigating<|||||>Another really weird bug is that setting --logging_steps to 0 leads to the training hanging up at step 99. I reproduced this same behavior in two different setups. I was using this option to stop logging which would hopefully bypass this above bug with this line of trainer:493 ``` if (self.args.logging_steps > 0 and self.global_step % self.args.logging_steps == 0) or ( self.global_step == 1 and self.args.logging_first_step ): ``` I believe this is causing that bug ``` if os.getenv("WANDB_WATCH") != "false": wandb.watch( self.model, log=os.getenv("WANDB_WATCH", "gradients"), log_freq=max(100, self.args.logging_steps) ) ```<|||||>Setting WANDB_WATCH = false fixed the bug, it also evaluates during training now. Starting a PR...<|||||>Great. But Maybe there can be something with XLA? that WandB gradients are not logged and the training freezes?<|||||>I am not sure if wandb supports logging of gradients with Pytorch/XLA. I reached out to Wandb to ask about this, should get a reply by tomorrow. It is possible that Pytorch/XLA does not support gradient logging as well. I looked at the XLA github repo and couldn't find a mention of gradients logging with TPUs. I am unfamiliar with XLA interface with wandb and not keen on digging deeper into this. Hopefully wandb offers more clarity soon.<|||||>I'm one of the founders of wandb. We're digging into the root cause of this now. We're planning to issue a new release ASAP to ensure users can never get into this hung state. I'll update the thread here. For anyone finding this thread online and hitting the issue, you can add the following code to disable the gradient monitoring in wandb with huggingface. ``` import os os.environ["WANDB_WATCH"] = "false" ``` Or if you're shelling out to a python script: ``` export WANDB_WATCH=false python your_script.py ```<|||||>Thank you Chris for looking into this!<|||||>@vanpelt The wandb gradient logging has been disabled with PR https://github.com/huggingface/transformers/pull/4926 . 
Once wandb fixes the gradient logging for PyTorch/XLA, we can re-enable this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
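For anyone landing here, a small sketch combining the two wandb-related suggestions from this thread: disable gradient watching (which was causing the hang) and group the per-process TPU runs. `WANDB_WATCH=false` comes straight from the thread; `WANDB_RUN_GROUP` is assumed here as wandb's environment variable for run grouping:

```python
import os

# Set these before launching training (e.g. before xla_spawn.py / Trainer.train()):
os.environ["WANDB_WATCH"] = "false"          # skip wandb.watch(), which hung TPU training
os.environ["WANDB_RUN_GROUP"] = "tpu-run-1"  # collect the 8 per-process runs under one group
```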
transformers
4,813
closed
Is albert lm finetuning with SOP in Pytorch supported?
# ❓ Questions & Help Hello, I am trying to use transfer learning on the ALBERT language model before I train it on SQuAD. Does run_language_modeling.py support ALBERT models and SOP? Thank you <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
06-05-2020 23:25:13
06-05-2020 23:25:13
Hi, `run_language_modeling.py` does support the Albert model. It only does MLM though, no SOP.<|||||>I see, am I correct in assuming that pretraining/finetuning the ALBERT model with run_language_modeling.py, which only supports the MLM task, would result in lower performance vs. training with a script from another library (such as the original ALBERT repo from Google) which supports SOP? Thank you <|||||>It might result in lower performance, indeed. Adding the SOP task shouldn't be too hard, as the layers used for SOP are implemented. You can check this issue for more information https://github.com/huggingface/transformers/issues/2671.
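As a rough sketch of what exercising the existing SOP head could look like — assuming a `transformers` release that ships `AlbertForPreTraining` with a `sentence_order_label` argument (not every release does) — with the construction of swapped-segment training pairs left to the data pipeline:

```python
import torch
from transformers import AlbertTokenizer, AlbertForPreTraining

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert-base-v2")

# Encode a segment pair; a real SOP setup would also create swapped pairs with label 1.
inputs = tokenizer("First sentence.", "Second sentence.", return_tensors="pt")
sop_label = torch.tensor([0])  # 0 = original order, 1 = swapped

outputs = model(**inputs, sentence_order_label=sop_label)
prediction_logits, sop_logits = outputs[0], outputs[1]
```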
transformers
4,812
closed
[cleanup/marian] pipelines test and new kwarg
avoids DeprecationWarning (because `max_len` kwarg is being deprecated)
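In practice the deprecation means passing `model_max_length` where `max_len` used to be accepted; a small sketch of the updated call (the checkpoint name is just an example):

```python
from transformers import MarianTokenizer

# `max_len=512` now triggers a DeprecationWarning; `model_max_length` is the replacement.
tok = MarianTokenizer.from_pretrained(
    "Helsinki-NLP/opus-mt-en-de",
    model_max_length=512,
)
```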
06-05-2020 22:35:06
06-05-2020 22:35:06
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=h1) Report > Merging [#4812](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/875288b344d2181b789746e27e7b5bc62df8cae1&el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4812/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4812 +/- ## ========================================== - Coverage 76.18% 76.15% -0.03% ========================================== Files 128 128 Lines 21497 21497 ========================================== - Hits 16377 16371 -6 - Misses 5120 5126 +6 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.79% <ø> (ø)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.68% <0.00%> (-0.12%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=footer). Last update [875288b...b7b9470](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,811
closed
Add model and doc badges
Add badges at each model for: - the page with all community models - the documentation of the model Remove the manual doc links as a result.
06-05-2020 21:52:34
06-05-2020 21:52:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=h1) Report > Merging [#4811](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/875288b344d2181b789746e27e7b5bc62df8cae1&el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4811/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4811 +/- ## ========================================== - Coverage 76.18% 76.15% -0.03% ========================================== Files 128 128 Lines 21497 21497 ========================================== - Hits 16377 16372 -5 - Misses 5120 5125 +5 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.41% <0.00%> (-0.79%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (ø)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=footer). Last update [875288b...8ccd73a](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great
transformers
4,810
closed
[Benchmark] Add encoder decoder to benchmark and clean labels
This PR cleans the benchmark utils a bit more: - tracing is made independent from CPU memory benchmarking - possibility to benchmark encoder-decoder models is added - 3 new tests - general refactoring
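For context, a basic usage sketch of the benchmark utilities this PR touches; the exact flags controlling the new encoder-decoder path aren't shown here since their names may differ:

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# Benchmark an encoder-decoder checkpoint (t5-small as an example).
args = PyTorchBenchmarkArguments(
    models=["t5-small"],
    batch_sizes=[1],
    sequence_lengths=[128],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```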
06-05-2020 21:47:58
06-05-2020 21:47:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=h1) Report > Merging [#4810](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b6f365a8ed32eca20034084f74450723414b5de6&el=desc) will **increase** coverage by `1.19%`. > The diff coverage is `76.19%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4810/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4810 +/- ## ========================================== + Coverage 75.36% 76.55% +1.19% ========================================== Files 128 128 Lines 21497 21531 +34 ========================================== + Hits 16201 16484 +283 + Misses 5296 5047 -249 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `68.68% <70.00%> (+26.22%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `96.87% <100.00%> (+0.10%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `67.24% <100.00%> (+23.67%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.24%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.75% <0.00%> (+54.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=footer). Last update [b6f365a...49713d9](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,809
closed
[EncoderDecoderConfig] automatically set decoder config to decoder
When instantiating an encoder-decoder configuration from two pretrained configs, the decoder config should automatically be set to `config.is_decoder=True`. In general, whenever we instantiate an encoder-decoder model, no matter how, the resulting decoder config should have the attribute `decoder.is_decoder=True`. This PR also adds a couple of tests to make sure that an encoder-decoder model can be instantiated from two configs over the encoder-decoder config class.
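A quick sketch of the behaviour this PR enforces:

```python
from transformers import BertConfig, EncoderDecoderConfig

encoder_config = BertConfig()
decoder_config = BertConfig()

# Building the joint config should flag the decoder side as a decoder automatically.
config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
assert config.decoder.is_decoder
```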
06-05-2020 20:48:50
06-05-2020 20:48:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=h1) Report > Merging [#4809](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **decrease** coverage by `1.41%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4809/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4809 +/- ## ========================================== - Coverage 77.14% 75.72% -1.42% ========================================== Files 128 128 Lines 21073 21075 +2 ========================================== - Hits 16256 15959 -297 - Misses 4817 5116 +299 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `35.71% <0.00%> (-2.75%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.70% <0.00%> (-74.83%)` | :arrow_down: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `76.92% <0.00%> (-23.08%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <0.00%> (-6.35%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.77% <0.00%> (-0.19%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.17% <0.00%> (+0.23%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=footer). Last update [47a551d...6d8a589](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Merging for now since this is still unreleased code.<|||||>@LysandreJik - not sure what codecov complains about.
transformers
4,808
closed
Expose classes used in documentation
Currently, the documentation page of the tokenizers has three methods lacking documentation (see [here](https://huggingface.co/transformers/main_classes/tokenizer.html#pretrainedtokenizerfast)). This PR adds them to the `__init__` so sphinx can see them. If there is one that should not be public, we should remove it from the documentation.
06-05-2020 20:33:34
06-05-2020 20:33:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=h1) Report > Merging [#4808](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c0cfc2cf0941d2db368767fd232d8712449c7f8&el=desc) will **increase** coverage by `0.39%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4808/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4808 +/- ## ========================================== + Coverage 76.29% 76.69% +0.39% ========================================== Files 128 128 Lines 21495 21495 ========================================== + Hits 16400 16485 +85 + Misses 5095 5010 -85 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.39% <0.00%> (+0.24%)` | :arrow_up: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `89.17% <0.00%> (+2.01%)` | :arrow_up: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.40% <0.00%> (+4.80%)` | :arrow_up: | | [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <0.00%> (+61.53%)` | :arrow_up: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <0.00%> (+64.93%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=footer). Last update [5c0cfc2...e2a7c2d](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Can we tell sphinx to look at more than just init? <|||||>Looking at the sphinx documentation, there seems to be an option to use the modules and specify which parts of the module we want documented. Will try this as an alternative!<|||||>Looked further, but the workaround to use automodule and specifying a few functions will add the docstring of `tokenization_utils` and make the names longer (it becomes `transformers.tokenization_utils.SpecialTokensMixin` instead of `transformers.SpecialTokensMixin` which is fair enough, since it's not in transformers anymore). 
Avoiding the module docstring seems possible by hacking something in conf.py, but then it would apply globally and may impact some other pages... So merging this as is, and we can revisit if we really want to remove some of those things from `__init__`.
transformers
4,807
closed
Use labels to remove deprecation warnings
This is a follow-up to #4722 and removes the deprecated arguments in the tests.
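Concretely, the change in the tests is of this shape (sketch, with `bert-base-uncased` as an example checkpoint):

```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = inputs["input_ids"].clone()

# Previously: model(**inputs, masked_lm_labels=labels)  -> DeprecationWarning
outputs = model(**inputs, labels=labels)
loss = outputs[0]
```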
06-05-2020 20:25:52
06-05-2020 20:25:52
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=h1) Report > Merging [#4807](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c0cfc2cf0941d2db368767fd232d8712449c7f8&el=desc) will **decrease** coverage by `0.34%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4807/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4807 +/- ## ========================================== - Coverage 76.29% 75.95% -0.35% ========================================== Files 128 128 Lines 21495 21495 ========================================== - Hits 16400 16326 -74 - Misses 5095 5169 +74 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.83% <0.00%> (-14.56%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-2.04%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (-0.96%)` | :arrow_down: | | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `76.65% <0.00%> (-0.92%)` | :arrow_down: | | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.00% <0.00%> (-0.80%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (-0.74%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.56% <0.00%> (-0.61%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (-0.49%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (-0.40%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.43% <0.00%> (-0.38%)` | :arrow_down: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=footer). Last update [5c0cfc2...75f15ff](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome!
transformers
4,806
closed
Albert pretrained weights change across runs.
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): TFAlbertModel Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) ``` import tensorflow as tf from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') model = TFAlbertModel.from_pretrained('albert-base-v2') model.summary() print(len(model.trainable_weights)) print(model.trainable_weights[23]) input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] outputs = model(input_ids) print(outputs[0].shape, outputs[1].shape, len(outputs)) last_hidden_states = outputs[0] print(last_hidden_states) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) Trying to load pre-trained weights ## To reproduce Run the code above two times and you will see that the weights of the model are not the same across the two runs Steps to reproduce the behavior: 1. Run the code the first time and log the output 2. Run the code a second time and log the output 3. Check that the two logs are not the same. ## Expected behavior Since the model is loading pre-trained weights the results should be the same across runs. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-4.4.0-179-generic-x86_64-with-debian-stretch-sid - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.0.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No I apologize if the issue is due to me misusing your library, first time using Albert.
06-05-2020 19:33:01
06-05-2020 19:33:01
I just did the same experiment with Roberta weights and did not have the same issue.<|||||>Hi, I can reproduce. This is due to the archive maps not being available anymore, and therefore the wrong ALBERT models are linked. Thanks for raising the issue, this is quite a bug. cc @julien-c <|||||>My bad! It's my fault. I added a warning to the release notes about this: https://github.com/huggingface/transformers/releases/tag/v2.11.0<|||||>Is there a plan to fix this? Looks like the issue is that the "real" model we want is named `with-prefix-tf_model.h5`, which needs to be renamed to `tf_model.h5`. https://huggingface.co/albert-base-v2#list-files<|||||>This should work now, the weights have been changed to use the `with-prefix` weights.<|||||>Thanks !!
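A quick way to double-check the fix is to load the checkpoint twice and compare weights (sketch):

```python
import numpy as np
from transformers import TFAlbertModel

model_a = TFAlbertModel.from_pretrained("albert-base-v2")
model_b = TFAlbertModel.from_pretrained("albert-base-v2")

# With the correct pretrained files served, both loads should be identical.
for w_a, w_b in zip(model_a.weights, model_b.weights):
    assert np.allclose(w_a.numpy(), w_b.numpy()), f"Mismatch in {w_a.name}"
print("All weights match across the two loads.")
```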
transformers
4,805
closed
Invalid Argument for Onnxruntime Inference on GPT2
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I've been following the ipython notebook provided [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb) 1. Take an off-the-shelf pretrained `gpt` model and export to onnx format using the following invocation: ``` python convert_graph_to_onnx.py --framework pt --model gpt2 gpt2.onnx ``` 2. Run inference on the exported onnx model, following the steps [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb). After invoking the appropriate provider, run inference using something like the following ``` model.run(None, {"input_ids": np.array([blah]), "token_type_ids": np.array([blah]), "attention_mask": np.array([blah]) ``` Note above, `blah` is replaced with actual data. After invoking the above, I get the error: ``` onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids ``` ## Expected behavior I would expect this to work successfully. My hypothesis is that the `convert_graph_to_onnx.py` is not exporting all the inputs from the `gpt2` model. In particular in line 43-48: ``` for arg_name in model_args_name[1:]: # start at index 1 to skip "self" argument if arg_name in input_names: ordered_input_names.append(arg_name) model_args.append(tokens[arg_name]) else: break ``` `model_args` is only populated with `input_ids` because the order of arguments in the `forward` method of `gpt2` is `input_ids, past, attention_mask, token_type_ids` so the for loop breaks early. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: Commit 0e1869cc286d607f1598506be7bd1312b76ca82c - Onnxruntime: 1.3.0 - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0+cu101 - Using GPU in script?: Yes Thanks for your help! @mfuntowicz @tianleiwu
06-05-2020 19:28:57
06-05-2020 19:28:57
Assigning @mfuntowicz, the king of the onnx conversion!<|||||>@mihail911, do you need attention_mask and token_type_ids in input? If not, you can inference the exported model like the following: model.run(None, {"input_ids": np.array([blah])}) GPT-2 attention is unidirectional (right attends to left). User need not provide attention mask (at least for batch_size=1) and token_type_ids (Assume that all words have token type id=0). For GPT-2, it is recommended to export model with past to get better performance. Currently, convert_graph_to_onnx.py cannot export past. You can use a custom script to do that. Here is an [example]( https://github.com/microsoft/onnxruntime/blob/7c8e1580a13ce333e47a41146bccfc90b3a70db5/onnxruntime/python/tools/transformers/benchmark_gpt2.py#L246). Note that optimization for past state is ongoing, and it will be available in onnxruntime nightly build sometime next week.<|||||>thanks for the prompt response @tianleiwu! I agree that gpt2 doesn't strictly require the other parameters, but if I have a model that was trained using the token_type_id params because of having particularly formatted inputs, then not providing them at inference time may lead to decreased performance. Is there a way to provide them anyway?<|||||>@mihail911, here is example script to export model with token_type_ids (but without past input): ``` import torch from transformers import (GPT2Config, GPT2Model, GPT2Tokenizer) # use_cache is True by default in GPT2Model. Here we wrap a class to disable past state output. class GPT2ModelNoPastState(GPT2Model): def __init__(self, config): super().__init__(config) def forward(self, input_ids, attention_mask, token_type_ids): return super().forward(input_ids, past=None, attention_mask=attention_mask, token_type_ids=token_type_ids, use_cache=False) model_name="gpt2" config = GPT2Config.from_pretrained(model_name) tokenizer = GPT2Tokenizer.from_pretrained(model_name) model = GPT2ModelNoPastState.from_pretrained(model_name) example_inputs = tokenizer.encode_plus("This is a sample input", return_tensors="pt") example_outputs = model(**example_inputs) input_names = ['input_ids', 'attention_mask', 'token_type_ids'] output_names=["output_1"] dynamic_axes={'input_ids': {0: 'batch_size', 1: 'seq_len'}, 'attention_mask': {0: 'batch_size', 1: 'seq_len'}, 'token_type_ids': {0: 'batch_size', 1: 'seq_len'}, 'output_1': {0: 'batch_size', 1: 'seq_len'}} output_path="gpt2.onnx" torch.onnx.export(model=model, args=(example_inputs[input_names[0]], example_inputs[input_names[1]], example_inputs[input_names[2]]), f=output_path, input_names=input_names, output_names=output_names, example_outputs=example_outputs, dynamic_axes=dynamic_axes, do_constant_folding=True, opset_version=11, use_external_data_format=False) ``` BTW, I noticed that the token type use same embedding table as word embedding: https://github.com/huggingface/transformers/blob/c58e6c129a153ca1a5021e5d7e642d00bf011e20/src/transformers/modeling_gpt2.py#L465-L469 This looks like a bug. You might try to fix this if you want to get benefit from token_type_ids input.<|||||>Thanks for the detailed follow-up @tianleiwu! I tried executing your code and I found that the dimensions of the output seemed incorrect. The output ended up being `(batch_size, seq_length, hidden_dim)` rather than the dimensions of the prediction scores when you run the forward pass of the GPT2 model (`(batch_size, seq_length, config.vocab_size)`). 
This was the case even if I didn't explicitly provide the `output_names` or the dimensions in the `dynamic_axes` (i.e. I set `output_names=None`). Do you happen to know why that's the case?<|||||>@mihail911, it is expected that the last dimension of the first output (last_hidden_state) is the hidden size, as documented in the code: https://github.com/huggingface/transformers/blob/a139d1a1602ee72ca98d5e0412efbd68f746d2c8/src/transformers/modeling_gpt2.py#L383 If you want prediction scores, you can try exporting GPT2LMHeadModel instead of GPT2Model. <|||||>@tianleiwu You are absolutely right. I accidentally missed that. This works now -- thanks for all your help!
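For completeness, a sketch of running the model exported by the script above with onnxruntime; it assumes the export declared `input_ids`, `attention_mask` and `token_type_ids` as graph inputs and wrote the file to `gpt2.onnx`:

```python
import numpy as np
import onnxruntime as ort
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
encoded = tokenizer.encode_plus("This is a sample input")

session = ort.InferenceSession("gpt2.onnx")
onnx_inputs = {
    "input_ids": np.array([encoded["input_ids"]], dtype=np.int64),
    "attention_mask": np.array([encoded["attention_mask"]], dtype=np.int64),
    # GPT-2's tokenizer does not emit token_type_ids, so default them to zeros.
    "token_type_ids": np.zeros((1, len(encoded["input_ids"])), dtype=np.int64),
}
last_hidden_state = session.run(None, onnx_inputs)[0]
print(last_hidden_state.shape)  # (batch_size, seq_len, hidden_size)
```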
transformers
4,804
closed
Add link to community models
06-05-2020 19:19:52
06-05-2020 19:19:52
Thanks for the review @clmnt, doing god's work
transformers
4,803
closed
[WIP] Blenderbot
**UPDATES - 14 AUGUST 2020** - Blenderbot-3B works in exactly the same way as parlai and produces the same generation outputs - Blenderbot-90M also generates the same output as parlai in most cases, but it can sometimes generate a sequence with a small difference. For example: Parlai: `i' m not sure . i just feel like i ' m going to throw up . ` hf: `i don ' t know . i just feel like i ' m going to throw up .` The discrepancy could come from the length penalty or some other beam search param. **Update Sep 17** @sshleifer taking over ### TODO: - check eos generation - test distilled 2b model - `AutoTokenizer`/`AutoModelForSeq2SeqLM` coverage - debug failing 3b integration test - implement backwards compatibility for the variant change by checking `config.model_type` - document `layernorm_variant` nicely or pursue alternative solution.
06-05-2020 19:13:23
06-05-2020 19:13:23
Awesome, looks like the model is soon complete :-) A couple of things I think that could be improved a bit: 1. More consistent naming with other models. For me personally, I try to write the code as similar as possible to the `bert_modeling.py` code. `input_tensor` => `hidden_states`, `incremental_state` => `past_key_value_state`, ... 2. More modularization. IMO, it's always good to have many independent classes in a model. `BlenderbotEmbeddings`, `BlenderbotEncoder`, `BlenderbotPoolingLayer`, ...Even if the forward function of these classes only has a couple of lines, it's more readable for the user and also gives you much more flexibility when you want to apply changes to the model later. I would also take a look at the BertModel for this and try to make it as similar as possible 3. Usually, the config is just passed down to the layers instead of writing out all the needed params. This has 2 advantages. 1) Less code 2) No need to set the params to default parameters in the funciton arguments that could confuse the user 4. Make the model as minimal as possible, especially as possible. What I mean by this is that the forward passes of the model should only do what there are supposed to do and we should try to avoid adding any functions that do things under the hood. For example cutting the hidden_states to its last state when using the cache (I have done the same thing in GPT2 and after discussing with @thomwolf it's quite clear now that this can lead to problems as shown in this issue: https://github.com/huggingface/transformers/issues/4368#issuecomment-630244541). This also concers any special function (-inf setting of certain tokens), which should be handled by `generate()` as it's done for Bart. Overall, I think the design as it is now fits well with the EncoderDecoder design! It looks to be very similar to Bart (pining @sshleifer here, maybe you can take a look as well). So I think you should just focus on one single forward() pass here and the incremental generation will be handled by the `generate()` method. <|||||>Just a note from my side. I'm 100% on board with inheriting from module classes, like `BartEncoder` and `BartDecoder` if they are 1-to-1 the same and only the naming has to be changed. On the other hand, I'm not 100% on board with inheriting from a class and then overwriting specific functionality that is different. I guess in the case of `RobertaTokenizer`, with this tokenizer inheriting from `GPT2Tokenizer`, it's alright because it follows already existing logic in the library, but I'm not really a fan of it. IMO, inheritance should only be done if the functionality is _1-to-1 the same_ and not if only parts of the functionality are the same. I very much like the "Composition over inheritance" principle: https://en.wikipedia.org/wiki/Composition_over_inheritance . Also, I don't really mind copy-pasting code to some degree if it gives a clear gain in flexibility, which for me is probably the most important factor to consider in fast-changing research code. For this model, I think it's great if we can reuse `BartEncoder` and `BartDecoder`, but should not abstract at a too high level if the models are just not the same (as was done in Longformer and which I want to change soon). @sshleifer I guess we have very different opinions on this case :D Also, pinging @thomwolf and @LysandreJik to hear their opinion on this <|||||>@patrickvonplaten I think we are completely aligned on the specifics. 
We can talk about the principles at a bar someday, but your 100% rule would have us delete `PretrainedModel` :)<|||||>I just pushed a not particularly clean but working version for blenderbot-90M as shown by `test_samgen`. There are still a few known issues, most importantly: - The model does not generate EOS Token at the end of generations. - I haven't tested the 3B model. Test is very slow (like 10 mins) on CPU. It should only run on GPU. - The tokenizers don't work (at least on my machine) - Our length_penalty implem is [different](https://github.com/facebookresearch/ParlAI/blob/22d75cbfdcf4c093b2e2c660656b65aba77bd802/parlai/core/torch_generator_agent.py#L1474) than blenderbot. We need to do the math to figure out the right number. - Bart Change: If we decide to use `config.variant` to decide the layernorm order, a practice that I took from [`parlai`](https://github.com/facebookresearch/ParlAI/blob/a20ea268f9b5ef930b97ba5c608b050f7ee63627/parlai/agents/transformer/modules.py#L445) I need to update configs and raise a DeprecationWarning for `config.normalize_before`. I can also check/fix the configs on the model zoo. My opinion are that both ways of supporting such a small difference between the variants are annoying, but this is the least annoying way to support the different order of layernorm operations for variant=='xlm' (which blenderbot-90B uses), and we already had to do it with `mbart/config.normalize_before`. I'd be happy to write a doc explaining the settings. We can also delete `aiayn` which we don't use. I also just copied the naming. It should probably be changed. Can test 3B tomorrow. I'm fine with whatever other people want to do stylistically.<|||||>> I just pushed a not particularly clean but working version for blenderbot-90M as shown by `test_samgen`. > > There are still a few known issues, most importantly: > > * The model does not generate EOS Token at the end of generations. > * I haven't tested the 3B model. Test is very slow (like 10 mins) on CPU. It should only run on GPU. > * The tokenizers don't work (at least on my machine) > * Our length_penalty implem is [different](https://github.com/facebookresearch/ParlAI/blob/22d75cbfdcf4c093b2e2c660656b65aba77bd802/parlai/core/torch_generator_agent.py#L1474) than blenderbot. We need to do the math to figure out the right number. > * Bart Change: If we decide to use `config.variant` to decide the layernorm order, a practice that I took from [`parlai`](https://github.com/facebookresearch/ParlAI/blob/a20ea268f9b5ef930b97ba5c608b050f7ee63627/parlai/agents/transformer/modules.py#L445) I need to update configs and raise a DeprecationWarning for `config.normalize_before`. I can also check/fix the configs on the model zoo. My opinion are that both ways of supporting such a small difference between the variants are annoying, but this is the least annoying way to support the different order of layernorm operations for variant=='xlm' (which blenderbot-90B uses), and we already had to do it with `mbart/config.normalize_before`. I'd be happy to write a doc explaining the settings. We can also delete `aiayn` which we don't use. I also just copied the naming. It should probably be changed. > > Can test 3B tomorrow. I'm fine with whatever other people want to do stylistically. 
For the 90M tokenizer I pushed a working test here: https://github.com/huggingface/transformers/pull/4803/commits/724dc8798187801f382082bf32ba6025d15426de<|||||>Updates: - both tokenizers (for 3B and 90M model) - `modeling_bart` when `variant==prelayernorm` - `special_tokens_map.json` file to replace `"sep_token": "</s>"` by `"sep_token": "__end__" - `pytorch_model.bin` 1. 3B model is working perfectly and generation output is the same as parlai 2. 90M model is working in some case but sometime it's generation output is a bit different to parlai for example: - parlai output: `__start__ i ' m not sure . i just feel like i ' m going to throw up .` - blenderbot output: `__start__ i don ' t know . i just feel like i ' m going to throw up .` Not solved yet: - Both models do not generate `eos_token`<|||||>### LayerNorm Variant Problem The problem with `layernorm_variant` is that the two blenderbot checkpoints have layernorm in **different** places. `bbot-90m.config.layernorm_variant='xlm'`, whereas `bbot-3b.config.layernorm_variant=prelayernorm` so one way I can think to do it without an if statement is separate `Blenderbot90Model` and `Blenderbot3BModel` which is inconsistent with the rest of the repo, where individual checkpoints do not have separate model classes. We would then also need two configs and two model types and probably two of some other things. ### Possible Solutions + Write a doc and markdown table containing: what is each layernorm variant+which models use it, the link to that doc both in the config/code and also from model cards. + Don't port bbot-90m. If we don't port bbot-90m, we don't need to add config.layernorm_variant -- bbot3b layers are identical to mbart layers. The issue here is that bbot-3b **barely** runs inference on 1 GPU with bs=1. + separate `Blenderbot90Model` and `Blenderbot3BModel`. + There are also solutions where we parametrize out EncoderLayer/DecoderLayer, but these seem more confusing/harder to understand/less consistent to me. I am very open to suggestions, and if I don't get any I will keep working on trying to get the forward pass into one file, as @thomwolf wrote in slack today. <|||||>Moving here for cleaner history: https://github.com/huggingface/transformers/pull/7418
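To make the trade-off concrete, here is a purely illustrative layer sketch (not the Blenderbot implementation) showing how a single config flag — here `normalize_before`, the flag Bart/mBART already expose — can switch between pre- and post-LayerNorm orderings; a `layernorm_variant` string would work the same way with more than two cases:

```python
import torch
from torch import nn

class SketchEncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, normalize_before=True):
        super().__init__()
        self.normalize_before = normalize_before
        self.self_attn = nn.MultiheadAttention(d_model, n_heads)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        residual = x
        if self.normalize_before:      # pre-LayerNorm ordering
            x = self.norm(x)
        x, _ = self.self_attn(x, x, x)
        x = residual + x
        if not self.normalize_before:  # post-LayerNorm ordering
            x = self.norm(x)
        return x

# e.g. layer = SketchEncoderLayer(normalize_before=False); layer(torch.randn(10, 2, 512))
```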
transformers
4,802
closed
[cleanup] MarianTokenizer: delete unused constants
slow tests pass. The only needed constant is `vocab_files_names`
06-05-2020 18:45:16
06-05-2020 18:45:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=h1) Report > Merging [#4802](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **increase** coverage by `1.69%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4802/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4802 +/- ## ========================================== + Coverage 74.59% 76.28% +1.69% ========================================== Files 128 128 Lines 21500 21495 -5 ========================================== + Hits 16037 16397 +360 + Misses 5463 5098 -365 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.79% <ø> (-0.32%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (+0.94%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <0.00%> (+2.29%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (+3.87%)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.69% <0.00%> (+10.04%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+55.06%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=footer). Last update [acaa2e6...3f826a4](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Can anyone help with this issue: #5040 ?<|||||>on it!
transformers
4,801
closed
pip install -e does not always install the correct isort version
```bash pip install -e .["dev"] make quality ``` ### Output ```bash black --check --line-length 119 --target-version py35 examples templates tests src utils All done! ✨ 🍰 ✨ 306 files would be left unchanged. isort --check-only --recursive examples templates tests src utils ERROR: /Users/shleifer/transformers_fork/examples/benchmarking/plot_csv_file.py Imports are incorrectly sorted. ERROR: /Users/shleifer/transformers_fork/templates/adding_a_new_example_script/run_xxx.py Imports are incorrectly sorted. ERROR: /Users/shleifer/transformers_fork/templates/adding_a_new_example_script/utils_xxx.py Imports are incorrectly sorted. ERROR: /Users/shleifer/transformers_fork/src/transformers/__init__.py Imports are incorrectly sorted. make: *** [quality] Error 1 ``` relevant packages: ```python flake8==3.8.1 isort==4.3.21 black==19.10b0 ``` Env: ``` - `transformers` version: 2.11.0 - Platform: Darwin-19.4.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) ``` Would also be good to add more verbose error messages if possible
06-05-2020 18:15:26
06-05-2020 18:15:26
@LysandreJik @julien-c any ideas?<|||||>Works for me, no issues. But `isort==4.3.21` is not precise enough, you need to have the actual precise commit. Can you try `pip uninstall isort && pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort` ?<|||||>That worked! but my pip freeze still says ``` isort==4.3.21 ``` which seems like the reason that ```bash pip install -e .["dev"] ``` didn't work.<|||||>pip is confusing, but you technically have version "4.3.21" of isort if you install from the specified commit – but not **the** version "4.3.21". i.e. the version number in the setup.py of the package that you install from git is still the string "4.3.21". Do you see what I mean?<|||||>Yes I think I do, rephrase: there are multiple different versions of isort called 4.3.21 and pip install -e . will be satisfied if you have any of them, so if you have the wrong one you have to manually run ```bash pip uninstall isort pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort ```<|||||>Yep
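One small debugging aid when several installs report the same `4.3.21` string: print the version and the file Python actually imports, which at least shows which installed copy and environment pip resolved (sketch):

```python
import isort

print(isort.__version__)  # reports "4.3.21" for both the PyPI release and the pinned commit
print(isort.__file__)     # shows which installed copy is actually being imported
```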