| column | dtype | range |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
11,519
closed
RoBERTa adds two sep tokens
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-5.4.0-72-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.6 - PyTorch version (GPU?): NA - Tensorflow version (GPU?): NA - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA ## Information I'm using RoBERTa. I noticed that when I pass a text pair, two <eos> tokens are added - see the linked code https://github.com/huggingface/transformers/blob/f37f2adb68b186f175a81a870cc526349385b9a8/src/transformers/models/roberta/tokenization_roberta_fast.py#L230 This differs from the BERT implementation https://github.com/huggingface/transformers/blob/60d5bda4fd0381075a300dc11903c76df694bd1c/src/transformers/models/bert/tokenization_bert_fast.py#L255 Is this intentional? I'm trying to create a hybrid BERT/RoBERTa style training strategy. I want to pass two sentences but I don't want to use NSP, so I was hoping to use my existing custom RoBERTa tokenizer. I ran into two issues - this, and the fact that the comment "RoBERTa does not make use of token type ids, therefore a list of zeros is returned." also doesn't appear correct: from what I can see, RoBERTa's tokenizer simply does not return `token_type_ids`. I've not figured out why yet. EDIT: it seems the default for `return_token_type_ids` is different for BERT (true) and RoBERTa (false). Also, as far as I can see, RoBERTa will use `token_type_ids` if they're provided; it's just that the tokenizer has been coded to return all zeros. https://github.com/huggingface/transformers/blob/f37f2adb68b186f175a81a870cc526349385b9a8/src/transformers/models/roberta/modeling_roberta.py#L79
04-30-2021 07:13:23
04-30-2021 07:13:23
> Is this intentional? Yes. It is in line with the original implementation. (Check [link](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) for more information.) > I ran into two issues - this, and the fact that the comment "RoBERTa does not make use of token type ids, therefore a list of zeros is returned." also doesn't appear correct: from what I can see, RoBERTa's tokenizer simply does not return token_type_ids. I've not figured out why yet. The comment is also correct. The original RoBERTa doesn't even have a token_type layer, and the Hugging Face RoBERTa has one that is just full of zeros (i.e. it does nothing as long as you don't do something with it) and only exists for legacy reasons (check #2871). <|||||>Thanks, it appears I can put a BERT tokenizer in front of a RoBERTa MLM model and get what I want.<|||||>You can put any tokenizer in front of RoBERTa, but when you use the pre-trained weights you should stick to the original one, as it will otherwise lead to garbage. :) <|||||>Yeah, I understand that, but I'm training from scratch.
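A minimal sketch of the difference discussed above, assuming the stock `roberta-base` and `bert-base-uncased` checkpoints rather than the custom tokenizer from the issue:

```python
from transformers import BertTokenizerFast, RobertaTokenizerFast

bert_tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
roberta_tok = RobertaTokenizerFast.from_pretrained("roberta-base")

# BERT joins a pair with a single separator: [CLS] A [SEP] B [SEP]
print(bert_tok.convert_ids_to_tokens(bert_tok("first", "second")["input_ids"]))

# RoBERTa puts two separators between the segments: <s> A </s></s> B </s>
print(roberta_tok.convert_ids_to_tokens(roberta_tok("first", "second")["input_ids"]))

# RoBERTa only returns token_type_ids when explicitly asked, and they are all zeros
print(roberta_tok("first", "second", return_token_type_ids=True)["token_type_ids"])
```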
transformers
11,518
closed
BART summarization, tokenizer not working
@patil-suraj When I am running pytorch/summarization, the logs are as below: ``` Adding AddedToken(content='<s>', single_word=False, lstrip=False, rstrip=False, normalized=True) to the vocabulary Adding AddedToken(content='</s>', single_word=False, lstrip=False, rstrip=False, normalized=True) to the vocabulary Adding AddedToken(content='<pad>', single_word=False, lstrip=False, rstrip=False, normalized=True) to the vocabulary Adding AddedToken(content='<mask>', single_word=False, lstrip=True, rstrip=False, normalized=True) to the vocabulary Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. https://huggingface.co/facebook/bart-large-cnn/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /home/sdiaoaa/.cache/huggingface/transformers/tmpama_iuh8 ``` But the embedding was not resized for the added tokens: the embedding size is still [50264] while the token_ids range over [0, 50269].
04-30-2021 02:49:51
04-30-2021 02:49:51
Did you resize the embeddings after adding the new tokens? To resize the embeddings: ```python3 model.resize_token_embeddings(len(tokenizer)) ```<|||||>> Did you resize the embeddings after adding the new tokens? To resize the embeddings: > > ```python > model.resize_token_embeddings(len(tokenizer)) > ``` Thanks so much for your reply! The issue has been solved by adding that line. Just curious, is it common practice to call `resize_token_embeddings`? Previously I had not seen this happen and was wondering why it is not included in the official run_summarization code. Thanks!<|||||>Yes, the embeddings need to be resized after adding new tokens. And yes, you are right, the embeddings should be resized in the example.
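A minimal sketch of the fix discussed above, assuming the `facebook/bart-large-cnn` checkpoint and two purely illustrative extra tokens:

```python
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

# hypothetical tokens added for a custom summarization setup
tokenizer.add_tokens(["<sec>", "<title>"], special_tokens=True)

# resize so that every id the tokenizer can now produce has an embedding row
model.resize_token_embeddings(len(tokenizer))
```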
transformers
11,517
closed
RAG import not working on Windows
Just a small fix to make auto models usable on Windows. faiss is not available on Windows, and RAG imports faiss, so the import fails.
04-29-2021 18:10:56
04-29-2021 18:10:56
There's a lot of changes relative to `black` - could you install the version the repo uses with: ``` pip install -U -e .[quality] ``` ? Also I would put this behind a `if is_faiss_available():` rather than a platform check. Could you show the error you obtain on Windows when using an auto model? <|||||> > Also I would put this behind a `if is_faiss_available():` rather than a platform check. Could you show the error you obtain on Windows when using an auto model? how should that looks like ? putting faiss import into try catch block ? the error is an "dll not found error" on windows ``` from transformers import ViTFeatureExtractor, ViTForImageClassification, AutoModel, AutoTokenizer File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\__init__.py", line 2487, in __getattr__ return super().__getattr__(name) File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\file_utils.py", line 1700, in __getattr__ value = getattr(module, name) File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\file_utils.py", line 1699, in __getattr__ module = self._get_module(self._class_to_module[name]) File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\models\auto\__init__.py", line 198, in _get_module return importlib.import_module("." + module_name, self.__name__) File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\models\auto\modeling_auto.py", line 199, in <module> from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\models\rag\modeling_rag.py", line 29, in <module> from .retrieval_rag import RagRetriever File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\models\rag\retrieval_rag.py", line 42, in <module> import faiss File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\faiss\__init__.py", line 17, in <module> from .loader import * File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\faiss\loader.py", line 39, in <module> from .swigfaiss import * File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\faiss\swigfaiss.py", line 13, in <module> from . import _swigfaiss ImportError: DLL load failed while importing _swigfaiss: Das angegebene Modul wurde nicht gefunden. ```<|||||>What is your transformer version? I see this in your stack-trace: ``` File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\models\rag\retrieval_rag.py", line 42, in <module> import faiss ``` But this should be behind the `is_faiss_available()` statement: https://github.com/huggingface/transformers/blob/db9dd09cf9d8f5de9a5293ec16e7b3d0c01dcbbb/src/transformers/models/rag/retrieval_rag.py#L31-L38<|||||>latest release and master branche then it looks like enviroment bug, I dont know which library did, but my pip freeze tells me faiss-cpu is installed on my notebook. I removed and now it's working again, so closing this PR
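A minimal sketch of the availability check suggested in the review above (rather than a platform check); it assumes the `is_faiss_available` helper that the library exposes for this purpose:

```python
from transformers.file_utils import is_faiss_available

# only import faiss when it can actually be loaded; on Windows the package may be
# installed yet still fail to load with a DLL error
if is_faiss_available():
    import faiss
else:
    faiss = None
```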
transformers
11,516
closed
run_summarization not working for mBART-50
- `transformers` 4.5.0 - Platform: linux: - Python version: 1.7.1 - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help @patil-suraj @LysandreJik Models: mbart I am running the run_summarization.py script using the command below: python examples/pytorch/summarization/run_summarization.py --model_name_or_path facebook/mbart-large-50 --do_train --do_eval --do_predict --test_file /home/aniruddha/mbart/mbart_json/bendev_mbart.json --train_file /home/aniruddha/mbart/mbart_json/bentrain_mbart.json --validation_file /home/aniruddha/mbart/mbart_json/bendev_mbart.json --text_column text --summary_column summary --output_dir mbart50_bengali-summarization --per_device_train_batch_size=1 --per_device_eval_batch_size=2 --overwrite_output_dir true --source_prefix "summarize: " --predict_with_generate yes My dataset is in the JSON format below; I am working with the Bengali language: {"text": "I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder", "summary": "I'm sitting in a room where I'm waiting for something to happen"} Error: File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 295, in convert_ids_to_tokens index = int(index) TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
04-29-2021 17:24:47
04-29-2021 17:24:47
Hi @Aniruddha-JU Right now the `run_summarization.py` does not support fine-tuning mBART for summarization, we need to set the proper language tokens for mBART50. For now, you could easily modify the script to adapt it for mBART50 by setting the correct language tokens, as is done in the translation example. https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py#L340-L380 The difference here would be that the source and target language will be similar. Also, could you please post the full stack trace the error seems unrelated to mBART.<|||||>All the weights of MBartForConditionalGeneration were initialized from the model checkpoint at facebook/mbart-large-50. If your task is similar to the task the model of the checkpoint was trained on, you can already use MBartForConditionalGeneration for predictions without further training. 0%| | 0/3 [00:00<?, ?ba/s] Traceback (most recent call last): File "run_summarization.py", line 596, in <module> main() File "run_summarization.py", line 428, in main train_dataset = train_dataset.map( File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1474, in map return self._map_single( File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 174, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/fingerprint.py", line 340, in wrapper out = func(self, *args, **kwargs) File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1798, in _map_single batch = apply_function_on_filtered_inputs( File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1706, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "run_summarization.py", line 409, in preprocess_function with tokenizer.as_target_tokenizer(): File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/contextlib.py", line 113, in __enter__ return next(self.gen) File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart50_fast.py", line 210, in as_target_tokenizer self.set_tgt_lang_special_tokens(self.tgt_lang) File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart50_fast.py", line 235, in set_tgt_lang_special_tokens prefix_tokens_str = self.convert_ids_to_tokens(self.prefix_tokens) File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 295, in convert_ids_to_tokens index = int(index) TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'<|||||>@patil-suraj <|||||>For translation json format is not supporting. core-dumped is happening.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||> with self.tokenizer.as_target_tokenizer(): File "/home/rahulpal/anaconda3/envs/rebel/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/home/rahulpal/anaconda3/envs/rebel/lib/python3.7/site-packages/transformers/models/mbart50/tokenization_mbart50_fast.py", line 215, in as_target_tokenizer self.set_tgt_lang_special_tokens(self.tgt_lang) File "/home/rahulpal/anaconda3/envs/rebel/lib/python3.7/site-packages/transformers/models/mbart50/tokenization_mbart50_fast.py", line 240, in set_tgt_lang_special_tokens prefix_tokens_str = self.convert_ids_to_tokens(self.prefix_tokens) File "/home/rahulpal/anaconda3/envs/rebel/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 307, in convert_ids_to_tokens index = int(index) TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
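A minimal sketch of the adaptation suggested above: set the mBART-50 language tokens before tokenizing, with the same source and target language for summarization. The `bn_IN` code, the max lengths, and the `examples` batch dict are illustrative assumptions for the Bengali dataset in this issue:

```python
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="bn_IN", tgt_lang="bn_IN"
)

def preprocess_function(examples):
    model_inputs = tokenizer(examples["text"], max_length=1024, truncation=True)
    # as_target_tokenizer switches the special prefix tokens to the target language
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(examples["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```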
transformers
11,515
closed
Issues with TFGPT2ForSequenceClassification
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Google Colab - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: NO, but tf automatically use it - Using distributed or parallel set-up in script?: ### Who can help @patrickvonplaten, @LysandreJik, @Rocketknight1 ## Information Model I am using (GPT2): The problem arises when using: * [ ] my own modified scripts: (give details below) When using TFGPT2ForSequenceClassification, I found that the structure of the model is weird, see below: ![image](https://user-images.githubusercontent.com/38811872/116583073-95d4fd80-a8db-11eb-8868-be42e11cdf08.png) Why is the classifier inserted before the GPT main layer? And when I load the PyTorch version, it looks different (inserted after the main layer): ![image](https://user-images.githubusercontent.com/38811872/116583497-08de7400-a8dc-11eb-945b-2477be9aa852.png) Also, I tried to train this model as the tutorials of [fine-tuning on Bert with customized dataset](https://huggingface.co/transformers/custom_datasets.html) suggests, but failed as following, I loaded the pretrained classification model with 3 classes: ValueError: in user code: /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function * return step_function(self, iterator) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step ** outputs = model.train_step(data) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:758 train_step self.compiled_metrics.update_state(y, y_pred, sample_weight) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:408 update_state metric_obj.update_state(y_t, y_p, sample_weight=mask) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/metrics_utils.py:90 decorated update_op = update_state_fn(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/metrics.py:177 update_state_fn return ag_update_state(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/metrics.py:618 update_state ** matches = ag_fn(y_true, y_pred, **self._fn_kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/metrics.py:3315 sparse_categorical_accuracy return math_ops.cast(math_ops.equal(y_true, y_pred), K.floatx()) /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:1679 equal return gen_math_ops.equal(x, y, name=name) 
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_math_ops.py:3179 equal name=name) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper attrs=attr_protos, op_def=op_def) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:592 _create_op_internal compute_device) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:3536 _create_op_internal op_def=op_def) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:2016 __init__ control_input_ops, op_def) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:1856 _create_c_op raise ValueError(str(e)) ValueError: Dimensions must be equal, but are 3 and 512 for '{{node Equal}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](Cast_1, Cast_2)' with input shapes: [?,3], [?,512]. The tasks I am working on is: * [ ] my own task or dataset: (give details below) The task is a multi-label classification task, where the label of each sample could be represented as a 3-dim vector like [0,0,0], [0,1,0], [1,1,0], etc. ## To reproduce Steps to reproduce the behavior: 1. load the GPT2Tokenizer, TFGPT2ForSequenceClassification with num_labels=3 ``` my_gpt_tokenizer = GPT2TokenizerFast.from_pretrained('openai-gpt') my_gpt_model = TFGPT2ForSequenceClassification.from_pretrained('openai-gpt',num_labels=3) ``` 2. add pad token to the tokenizer, tokenize the text as the tutorials did and transfer them into dataset objects ``` my_gpt_tokenizer.add_special_tokens({'pad_token': '[PAD]'}) gpt_train_encodings = my_gpt_tokenizer(X_train, truncation=True, padding=True) gpt_test_encodings = my_gpt_tokenizer(X_test, truncation=True, padding=True) gpt_train_dataset = tf.data.Dataset.from_tensor_slices((dict(gpt_train_encodings),y_train)) gpt_test_dataset = tf.data.Dataset.from_tensor_slices((dict(gpt_test_encodings),y_test)) ``` 3. train the model: ``` optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) my_gpt_model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=['accuracy']) history = my_gpt_model.fit(gpt_train_dataset.shuffle(500).batch(10), epochs=2, batch_size=10, validation_data=gpt_test_dataset.batch(10)) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The model should be trained successfully as the Bert classification does. I tried the same code on TFBertForSequenceClassification and TFDistilBertForSequenceClassification, which are all successful. <!-- A clear and concise description of what you would expect to happen. -->
04-29-2021 17:08:09
04-29-2021 17:08:09
Thanks for the very in-detail issue description! @Rocketknight1 do you maybe want to give it a try here? Otherwise I'm happy to take a look :-)<|||||>Taking a look now!<|||||>Hi @cytwill, can you share a few lines of the data you're loading as X_train and y_train? If it's a private dataset, you can replace the text with random text - I just want to see the format of the data and try to reproduce the error here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi. I am currently experiencing the same issue as the OP where the classification layer seems to be inserted before the main GPT layer. I basically have the same model summary and a similar error so I thought I'd try to reopen this. I know it's not an ideal dataset for the model but here's a copy of the Fine Tuning with Keras tutorial to illustrate the problem: https://colab.research.google.com/drive/1UJdB5QG_6L1qeWxM8Fa-CuDZQR32cshL?usp=sharing Below the tensorflow implementation is the pytorch version that seems to work well enough.
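A rough sketch of the pieces that typically need adjusting for this setup, under two explicit assumptions: the `gpt2` checkpoint is used (the `TFGPT2*` classes expect a GPT-2 checkpoint rather than `openai-gpt`), and the labels are multi-hot vectors of shape (batch, 3), which call for a binary loss and metric instead of the sparse categorical accuracy Keras infers from `metrics=['accuracy']`:

```python
import tensorflow as tf
from transformers import GPT2TokenizerFast, TFGPT2ForSequenceClassification

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})

model = TFGPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=3)
model.resize_token_embeddings(len(tokenizer))        # account for the new [PAD] token
model.config.pad_token_id = tokenizer.pad_token_id   # classification pools the last non-pad token

# tiny dummy data standing in for X_train / y_train from the issue
X_train = ["good movie", "terrible plot", "great cast but slow"]
y_train = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]  # multi-hot labels, 3 classes

enc = tokenizer(X_train, truncation=True, padding=True, return_tensors="np")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), y_train)).shuffle(500).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(5e-5),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.BinaryAccuracy()],
)
```

Training then proceeds with `model.fit(dataset, epochs=...)` as in the snippet from the issue.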
transformers
11,514
closed
solved coefficient issue for the TF version of gelu_fast
# What does this PR do? This PR solves a bug in the Tensorflow version of gelu_fast: the two coefficients being used to compute the approximation were swapped, making the computation inaccurate.
04-29-2021 16:22:08
04-29-2021 16:22:08
transformers
11,513
closed
Improve task summary docs
This PR makes various improvements to the [Summary of Tasks](file:///Users/hamelsmu/github/transformers/docs/_build/html/task_summary.html#named-entity-recognition) docs. Instead of providing a summary of changes at the top, I added a comment to all my changes below to give more context behind why I suggested the change. @sgugger
04-29-2021 16:14:11
04-29-2021 16:14:11
heh not sure why CI failed, just see this error message > Received "killed" signal<|||||>@sgugger sorry for the double comment (I commented on an old commit by accident), what I meant to say I made the changes you suggested, LMK if this does a good job of conveying the message!<|||||>Thanks again!
transformers
11,512
closed
Piece A
04-29-2021 16:12:50
04-29-2021 16:12:50
Could you elaborate more? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,511
closed
Fix do_eval default value in training_args.py
# What does this PR do? According to the `do_eval` description, when `evaluation_strategy` is different from 'no' it will be set to 'True'. But `do_eval`'s default setting is None, so this code can't be executed unless the user sets `do_eval = False`. `if self.do_eval is False and self.evaluation_strategy != IntervalStrategy.NO : self.do_eval = True ` I think it would be better to change `do_eval`'s default value from 'None' to 'False'. - How I found it: I was trying to use `training_args.do_eval` in my script, but it didn't work even though `evaluation_strategy` was set to 'steps'.
04-29-2021 15:57:16
04-29-2021 15:57:16
transformers
11,510
closed
[Examples] Added support for test-file in QA examples with no trainer
# What does this PR do? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] This was discussed in another [PR](https://github.com/huggingface/transformers/pull/11380#issuecomment-824930263) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @stas00
04-29-2021 15:48:46
04-29-2021 15:48:46
One More thing i want to mention is below code, https://github.com/huggingface/transformers/blob/ad1f7bef13f03287af00f819605d696138a5e6ec/examples/pytorch/question-answering/run_qa_no_trainer.py#L543-L547 i changed to ```python eval_dataset.set_format(type="torch", columns=["attention_mask", "input_ids"]) eval_dataloader = DataLoader(eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size) if args.do_predict: predict_dataset.set_format(type="torch", columns=["attention_mask", "input_ids"]) ``` because somehow for local files `token_type_ids` was giving an error. at below line https://github.com/huggingface/transformers/blob/ad1f7bef13f03287af00f819605d696138a5e6ec/examples/pytorch/question-answering/run_qa_no_trainer.py#L543 For the dataset, it was working fine. When I remove the `token_type_ids` script run successfully for both part!<|||||>In the Readme.md of question-answering, i think there is a typo! in below line ``` export TASK_NAME=mrpc ```<|||||>Hi @sgugger, I have removed two columns since the eval_dataset is having following features, ``` ['attention_mask', 'example_id', 'input_ids', 'offset_mapping'] ``` and dataloader also had issue with `offset_mapping`<|||||>Hi @sgugger, There is an error in the post_processing of `examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py` ``` Traceback (most recent call last): File "transformers/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py", line 815, in <module> main() File "transformers/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py", line 746, in main prediction = post_processing_function(eval_examples, eval_dataset, outputs_numpy) File "transformers/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py", line 577, in post_processing_function prefix=stage, File "/content/transformers/examples/pytorch/question-answering/utils_qa.py", line 323, in postprocess_qa_predictions_with_beam_search feature_null_score = cls_logits[feature_index] IndexError: index 1 is out of bounds for dimension 0 with size 1 100% 9/9 [01:42<00:00, 11.35s/it] ``` It can be reproduced using this [colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/PostProcessingErrorInQAWithBeamSearchWithNoTrainer.ipynb), When I checked i found that `cls_logits` is coming like `tensor([-0.3511])` instead it should have length equal to number of samples (Five in this case) like `[-0.45879194 -0.46871808 -0.3622135 -0.4451167 -0.4400767 ]` <|||||>Hi @sgugger, Please let me know if the above changes don't seem fine. I think there was a typo earlier it should be like this. Please correct me if I am wrong! <|||||>Yes, thanks for catching that last problem! I believe the last thing to do is to remove the lines that reset the columns of the `eval_dataset` and `test_dataset` in the post processing ([here](https://github.com/huggingface/transformers/blob/1b0af5f4ed01c227179589722cd658d68f90be6a/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py#L736) and [there](https://github.com/huggingface/transformers/blob/1b0af5f4ed01c227179589722cd658d68f90be6a/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py#L794) in run_qa_beam_search_no_trainer, but they also are in run_qa_no_trainer)<|||||>Sure @sgugger, I forgot that, Thanks!<|||||>Thanks a lot for your work on this!<|||||>Thank you, @sgugger, for catching parts I missed in my review! Much appreciated!<|||||>Hi! 
There is a typo in line 794 in run_qa_no_trainer.py : end_logits = accelerator.pad_across_processes(start_logits, dim=1, pad_index=-100) which should be: end_logits = accelerator.pad_across_processes(end_logits, dim=1, pad_index=-100) I'm not sure if it has been corrected in the latest version of transformers. I guess it's still there in https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_no_trainer.py<|||||>Hi @JiaQiSJTU , Thanks for pointing out, Let me check on it.<|||||>Hi @sgugger, Shall I fix this typo? https://github.com/huggingface/transformers/blob/ffd19ee1de36188c6208855160b5ff930caa00c0/examples/pytorch/question-answering/run_qa_no_trainer.py#L794<|||||>Yes please!
transformers
11,509
closed
I-BERT: expected str, bytes or os.PathLike object, not NoneType
Hi, I have an issue when running this code provided by the HF documentation: >>> from transformers import RobertaTokenizer, IBertForTokenClassification >>> import torch >>> tokenizer = RobertaTokenizer.from_pretrained('kssteven/ibert-roberta-base') When running this I get the following error for the tokenizer: TypeError: expected str, bytes or os.PathLike object, not NoneType
04-29-2021 14:06:34
04-29-2021 14:06:34
Hi! I believe this checkpoint does not have a slow tokenizer, only a fast tokenizer. Can you try with: ```py from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained('kssteven/ibert-roberta-base') ```<|||||>Thank you! It loaded the tokenizer without showing an error.
transformers
11,508
closed
Help understanding how to build a dataset for language modeling as with the old TextDataset
I understand this issue should be on the Datasets library, so it's been created there https://github.com/huggingface/datasets/issues/ Hello, I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line overpasses the normal 512 tokens limit of most tokenizers. I would like to understand what is the process to build a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the [old TextDataset](https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/language_modeling.py) class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator: ``` model_checkpoint = 'distilbert-base-uncased' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) from transformers import TextDataset dataset = TextDataset( tokenizer=tokenizer, file_path="path/to/text_file.txt", block_size=512, ) ``` For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer: ``` import datasets dataset = datasets.load_dataset('path/to/text_file.txt') model_checkpoint = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) tokenized_datasets ``` So what would be the "standard" way of creating a dataset in the way it was done before? Thank you very much for the help :))
04-29-2021 13:03:37
04-29-2021 13:03:37
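A minimal sketch of the usual replacement for `TextDataset`'s chunking: the `group_texts` pattern from the library's language-modeling examples, applied to the `tokenized_datasets` built in the issue above (assuming the tokenization step is run without truncation so no text is lost):

```python
block_size = 512

def group_texts(examples):
    # concatenate all tokenized documents, then cut the token stream into block_size chunks
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

lm_dataset = tokenized_datasets.map(group_texts, batched=True, num_proc=4)
```

The result can then be passed to a data collator just like the output of the old `TextDataset`.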
transformers
11,507
closed
Fine-Tuning TFGPT2LMHeadModel / What to pass to fit
I am trying to fine-tune this pretrained model on my own data and I can't seem to get the format correct for what the model would like to see as input. I am using TFGPT2LMHeadModel, GPT2Config and GPT2TokenizerFast. When I do `model.fit(x,y, epochs=EPOCHS)`, - If x and y are the outputs of tokenizing on GPT2TokenizerFast (i.e. `tokenized = tokenizer(data_list, return_tensors='tf', add_special_tokens = True, truncation=True, padding = 'longest')`), I get: `ValueError: Unsupported value type BatchEncoding returned by IteratorSpec, serialize`. I tried this because of what I saw on [this example code snippet](http://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel) - If instead I choose x as `np.asarray(df['input_ids'].tolist()).astype('int32')` (and y is the corresponding thing for my label data) I get `InvalidArgumentError: Incompatible shapes: [32,154] vs. [2,32,16,154]` which looks a lot closer. It seems like I have to choose the correct portions of the tokenizer output to feed to the fit function, but I am not choosing correctly. Could you please clarify this for me? I am using tensorflow 2.4.1 and transformers 4.5.1.
04-29-2021 11:56:06
04-29-2021 11:56:06
When I run code supplied by another user from [another issue](https://github.com/huggingface/transformers/issues/2439), which supposedly worked at one point in time, I get a similar dimension mismatch. Is there a golden combination of tf and transformers I am supposed to be using?<|||||>Ah, the 16 above is the batch size, which must be 16. If I create a dataset with this batch size, then my model with train with the fit function. Like `dataset = tf.data.Dataset.from_tensor_slices((x, y)) ` `dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)`<|||||>Hi, Tensorflow maintainer here! Can you paste me a minimal example that reproduces the problem? You don't have to share your data or anything, just give me a few lines with a made-up tiny dataset that I can run here to recreate the problem - it'll make it much easier for me to track it down. Alternatively, if you're loading your data from HF datasets, that's fine too.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,506
closed
[WIP] Adding DETR
# What does this PR do? I've made quite a lot of progress on implementing HuggingFace's version of [DETR](https://arxiv.org/abs/2005.12872). However, there are some remaining things to be discussed, mainly regarding `DetrFeatureExtractor`, which can be used to prepare images + annotations for the model. ## What I currently have There are 3 models defined: - `DetrModel`, which consists of a convolutional backbone + encoder-decoder Transformer, without any head on top. - `DetrForObjectDetection`, which is `DetrModel` with 2 heads on top, namely a class labels classifier and a bounding box regressor. - `DetrForSegmentation`, which is `DetrForObjectDetection` (yes you read that right, not `DetrModel`) with a mask head on top, for predicting segmentation masks. Available notebooks: - [inference notebook](https://colab.research.google.com/drive/1RWzoQHkGSfztcRcgTRcd3FJUDY4GVXVB?usp=sharing) of `DetrForObjectDetection` - [fine-tuning notebook](https://drive.google.com/file/d/1NbG_DEPh2A87bpyQYvuutFXDvczkYAJ8/view?usp=sharing) - fine-tuning `DetrForObjectDetection` on a custom dataset (balloon dataset) - obtaining very good results! - [inference notebook](https://colab.research.google.com/drive/1P-bz2ZBPNciT86gFQTl_qiD2LVPKqrSW?usp=sharing) of `DetrForSegmentation` (panoptic segmentation) There's the feature extractor: - `DetrFeatureExtractor`, which can be used to prepare images and annotations for the model. The API is is similar to `ViTFeatureExtractor` and `DeiTFeatureExtractor`: the input are image(s) + annotation(s), and the output is `pixel_values` and `pixel_mask`. - Currently, it only supports preparing data for object detection, not for panoptic segmentation. It is based on [this code](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco.py#L17) in the original implementation. Given an image and annotations in COCO format, it turns the annotations into the format expected by DETR, followed by normalization + resizing of the image and corresponding annotations. ## Questions ### 1: Supporting panoptic segmentation for DetrFeatureExtractor (done) The problem is that if we also want to support panoptic segmentation, we rely on an external package named `panopticapi`, as it is used when preparing the annotations as can be seen [here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_panoptic.py#L9). I don't know how we can add this dependency, because I assume that people don't want to install this package if they want to use DetrFeatureExtractor for object detection. How can I handle an optional dependency? I think there are 2 options here: either 1) add a `task` argument to the feature extractor and raise an error if task = "panoptic segmentation" and panopticapi is not available, or 2) create two different feature extractors (one for object detection, and one for panoptic segmentation). 
The first option would look something like ``` if task == "panoptic_segmentation": if not is_panopticapi_available(): raise ImportError("Panopticapi is required for the feature extractor.") ``` also, this can be at the init of the feature extractor, or at the call ### 2: DetrForPanopticSegmentation `DetrForPanopticSegmentation` is a bit special in the sense that there are 2 ways to train this model, either 1) end-to-end, in which you train `DetrForObjectDetection` and the mask head altogether, or 2) in a 2-step process, in which you first train a `DetrForObjectDetection` model to predict bounding boxes + classes, and then in a second step you provide this model to DetrForPanopticSegmentation, freeze it and only train the mask head further for about 25 epochs. That's why there's a `box_model` argument in the `init` of `DetrForPanopticSegmentation`. Also, `DetrForObjectDetection` itself only uses the last feature map of the convolutional backbone, but `DetrForPanopticSegmentation` does not, it also uses layers 2, 3 and 4 of a ResNet in the mask head. There's an attribute `return_intermediate_layers` of `DetrConfig`, which should be set to `False` for `DetrForObjectDetection` and `True` for `DetrForPanopticSegmentation`. Currently, I set `config.return_intermediate_layers` to `True` no matter what at the `init` of `DetrForPanopticSegmentation`, but I don't know if hard coding this value is allowed. ### 3: timm support My implementation supports any convolutional backbone of the [timm](https://github.com/rwightman/pytorch-image-models) package. Should I add a `is_timm_available` check for the model (instead of `is_torch_available`)? Supporting only object detection would make life easier, but as DETR also obtains very good results on panoptic segmentation, it would be good to support that too. Would love to hear your opinions @sgugger @patrickvonplaten @LysandreJik @patil-suraj
04-29-2021 11:55:08
04-29-2021 11:55:08
Thanks for the review, yes you've made it a lot more clear for me now: * it's only `rgb2id` and `id2rgb` which are used from `panopticapi`, and it's only about 20 lines of code I see now. So indeed, we can just copy that code into the library (and cite the authors). * however, I think it's still useful to have a `task` attribute at the init of `DetrFeatureExtractor`, because then we can do input type checking depending on the task, and it will also be useful regarding postprocessing the outputs of DETR (the task attribute is also done in `LukeTokenizer` for example). * regarding `config.return_intermediate_layers`, there's indeed to reason for it to be user-facing anymore (it is in the original repo - but for other reasons), so let's remove that from the config.<|||||>Addressed most comments. Once the draft is done, I will create a new branch, squash all commits and open up a new PR, with the remaining comments copied. @sgugger I've also added dummies for timm. But CI doesn't seem to be happy, as timm is not installed on it.
transformers
11,505
closed
Encoder-decoder in Transformers
Thanks for your contribution of EncoderDecoderModel. I want to ask a question about the pooling layer of the encoder. Generally the default is 'add_pooling_layer=True' in the encoder, while the output of the encoder in EncoderDecoderModel is without the pooling layer. Is my understanding correct? Now I want to add a classification layer on the encoder; how should I do that? Thanks in advance
04-29-2021 11:54:47
04-29-2021 11:54:47
Hi there! It would be nice if you ask such questions on the [forum](https://discuss.huggingface.co/). Use issues to report bugs and for feature requests or anything else that can't be discussed on the forum. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
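A minimal sketch of one way to do what the question asks, assuming a BERT encoder: keep the default pooling layer and put a classification head on top of `pooler_output`:

```python
import torch.nn as nn
from transformers import BertModel

class EncoderClassifier(nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        # add_pooling_layer=True is the default, so pooler_output is available
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.classifier(outputs.pooler_output)
```

When no pooling layer is present, classifying `outputs.last_hidden_state[:, 0]` (the first token's hidden state) is an equivalent option.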
transformers
11,504
closed
Issue in checkpointing
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0 - Platform: - - Python version: 3.8 - PyTorch version (GPU?): 3.7 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger ## Information Hi I am observing reloading after checkpoint does not get the same results. I searched and as mentioned here https://github.com/huggingface/transformers/issues/11323#issuecomment-822729525 , trainer currently does not save the random states to reload them as well, which is important. Could you add these info in self.state and set random states also in the trainer in the resume? that would be great thanks ## Expected behavior After resume, one should get exact same results as training the models without break.
04-29-2021 09:24:25
04-29-2021 09:24:25
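A minimal sketch of the kind of RNG bookkeeping requested above, as a standalone illustration rather than the Trainer's actual implementation:

```python
import random
import numpy as np
import torch

def save_rng_state(path):
    state = {
        "python": random.getstate(),
        "numpy": np.random.get_state(),
        "torch_cpu": torch.get_rng_state(),
        "torch_cuda": torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None,
    }
    torch.save(state, path)

def load_rng_state(path):
    state = torch.load(path)
    random.setstate(state["python"])
    np.random.set_state(state["numpy"])
    torch.set_rng_state(state["torch_cpu"])
    if state["torch_cuda"] is not None:
        torch.cuda.set_rng_state_all(state["torch_cuda"])
```

Saving such a state alongside each checkpoint and restoring it on resume is what makes the dataloader shuffling and dropout identical to an uninterrupted run.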
transformers
11,503
closed
[Examples] Check key exists in datasets first
# What does this PR do? Correctly check the key exists before accessing it in some example scripts. I guess this is probably a mistake when writing example scripts. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? No tests since I didn't see any tests related to examples. Maybe someone could point it out for me. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Probably @bhadreshpsavani would like to review this according to the log from `git blame`. Or @sgugger, @patil-suraj as this is about examples. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-29-2021 08:50:44
04-29-2021 08:50:44
Hi @oToToT, thanks for the PR. The changes seem valid to me; it should be as per the changes in this PR. @sgugger what's your view on these changes?<|||||>Hi @sgugger, I'm not familiar with how Hugging Face handles pull requests; did I miss something needed to get it merged, or do I just need to wait? Thanks!<|||||>Thanks for the ping! I just forgot to click the merge button 🤦
transformers
11,502
closed
Pin HuggingFace Hub dependency
There might be some breaking changes in the HuggingFace Hub library as development continues, so pin the dependency.
04-29-2021 08:45:03
04-29-2021 08:45:03
transformers
11,501
closed
Penalise n-gram repetition in generated sequences
# 🚀 Feature request As per this [forum post](https://discuss.huggingface.co/t/force-decoder-to-avoid-repetition-between-generated-sentences/625), it is sometimes helpful to have a parameter that can increase the diversity amongst different generated sentences. This could be a penalty on the number of repeated n-grams between the generated sentences. ## Motivation If a generation model is being used and `num_return_sequences` is greater than `1`, there are a number of use cases where it is beneficial to have a parameter that can increase the diversity amongst generated sentences, or at least avoid exact replicas. For example, when trying to create different paraphrases of the same sentence or question. Example: Original Question :: What is the expected close date of the opportunity Paraphrased Questions Generated by T5:: 0: What will be the expected close date of the opportunity? 1: What is the expected closing date for the opportunity that you are considering? 2: What is the expected close date of the opportunity? 3: What is the expected close date on the opportunity? 4: When would be the expected close date of the opportunity? ## Your contribution I'm happy to submit a PR to work on this if it makes sense to add this as a feature. I would just appreciate a steer on where the best place would be to add this penalty. @patil-suraj @patrickvonplaten
04-29-2021 08:43:14
04-29-2021 08:43:14
Hi @KMFODA The `generate` method already supports penalizing n-gram repetition; this can be done by passing the argument `no_repeat_ngram_size`, which ensures that all n-grams of the given size occur only once. This, however, does not mean that the different `num_return_sequences` sequences will be diverse, since AFAIK in beam search the sequences are usually very close to each other. You could try beam sampling by passing `do_sample=True`, which will use sampling in each beam and could introduce some diversity. There's also the option of using [diverse beam search](https://arxiv.org/abs/1610.02424), which can be enabled with the `diversity_penalty` option.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
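A minimal sketch of the options mentioned above; the `t5-small` checkpoint, the `paraphrase:` prefix, and the specific values are illustrative, not the fine-tuned model from the issue:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer(
    "paraphrase: What is the expected close date of the opportunity?",
    return_tensors="pt",
).input_ids

# diverse beam search: beams are split into groups that are penalised for overlapping
outputs = model.generate(
    input_ids,
    num_beams=10,
    num_beam_groups=5,
    diversity_penalty=0.5,
    no_repeat_ngram_size=2,   # no 2-gram may repeat within a generated sequence
    num_return_sequences=5,
)

# alternative: beam sampling, which draws samples inside each beam
outputs = model.generate(
    input_ids,
    num_beams=5,
    do_sample=True,
    top_k=50,
    num_return_sequences=5,
)
```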
transformers
11,500
closed
Not able to load model from pipeline: NotFound error
I am trying to load textattack ylep model in pipeline but it was saying below error ``` /home/tiru/anaconda3/envs/td-solutions/bin/python /snap/pycharm-community/236/plugins/python-ce/helpers/pydev/pydevconsole.py --mode=client --port=40643 import sys; print('Python %s on %s' % (sys.version, sys.platform)) sys.path.extend(['/home/tiru/Desktop/td-solutions', '/home/tiru/Desktop/td-solutions/td-inference-only/text_sentiment_classification/transformers_bert']) PyDev console: starting. Python 3.8.8 (default, Feb 24 2021, 21:46:12) [GCC 7.3.0] on linux from transformers import pipeline 2021-04-29 12:00:40.847050: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-04-29 12:00:40.847070: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. model = pipeline('sentiment-analysis', model='textattack/bert-base-uncased-yelp-polarity',revision="6722736") 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5 Traceback (most recent call last): File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1191, in from_pretrained resolved_archive_file = cached_path( File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/file_utils.py", line 1036, in cached_path output_path = get_from_cache( File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/file_utils.py", line 1174, in get_from_cache r.raise_for_status() File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/requests/models.py", line 943, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<input>", line 1, in <module> File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 340, in pipeline framework = framework or get_framework(model) File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/pipelines/base.py", line 68, in get_framework model = TFAutoModel.from_pretrained(model, revision=revision) File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/models/auto/modeling_tf_auto.py", line 602, in from_pretrained return TF_MODEL_MAPPING[type(config)].from_pretrained( File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1207, in from_pretrained raise EnvironmentError(msg) OSError: Can't load weights for 'textattack/bert-base-uncased-yelp-polarity'. Make sure that: - 'textattack/bert-base-uncased-yelp-polarity' is a correct model identifier listed on 'https://huggingface.co/models' - or 'textattack/bert-base-uncased-yelp-polarity' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. ```
04-29-2021 06:35:41
04-29-2021 06:35:41
Could you try with the full revision, i.e., `672273686ecedd6fbbd5c0593b17df082ab65e31`?<|||||>same issues @LysandreJik ``` model = pipeline('sentiment-analysis', model='textattack/bert-base-uncased-yelp-polarity',revision="672273686ecedd6fbbd5c0593b17df082ab65e31") 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5 Traceback (most recent call last): File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 702, in from_pretrained local_files_only=local_files_only, File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py", line 1007, in cached_path local_files_only=local_files_only, File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py", line 1128, in get_from_cache r.raise_for_status() File "/home/tiru/anaconda3/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py", line 2936, in pipeline framework = framework or get_framework(model) File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py", line 108, in get_framework model = TFAutoModel.from_pretrained(model, revision=revision) File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/models/auto/modeling_tf_auto.py", line 561, in from_pretrained pretrained_model_name_or_path, *model_args, config=config, **kwargs File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 711, in from_pretrained raise EnvironmentError(msg) OSError: Can't load weights for 'textattack/bert-base-uncased-yelp-polarity'. Make sure that: - 'textattack/bert-base-uncased-yelp-polarity' is a correct model identifier listed on 'https://huggingface.co/models' - or 'textattack/bert-base-uncased-yelp-polarity' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. 
``` ``` >>> model = pipeline('sentiment-analysis', model='textattack/bert-base-uncased-yelp-polarity') 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5 Traceback (most recent call last): File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 702, in from_pretrained local_files_only=local_files_only, File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py", line 1007, in cached_path local_files_only=local_files_only, File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py", line 1128, in get_from_cache r.raise_for_status() File "/home/tiru/anaconda3/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py", line 2936, in pipeline framework = framework or get_framework(model) File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py", line 108, in get_framework model = TFAutoModel.from_pretrained(model, revision=revision) File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/models/auto/modeling_tf_auto.py", line 561, in from_pretrained pretrained_model_name_or_path, *model_args, config=config, **kwargs File "/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 711, in from_pretrained raise EnvironmentError(msg) OSError: Can't load weights for 'textattack/bert-base-uncased-yelp-polarity'. Make sure that: - 'textattack/bert-base-uncased-yelp-polarity' is a correct model identifier listed on 'https://huggingface.co/models' - or 'textattack/bert-base-uncased-yelp-polarity' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. ````<|||||>Ah, I think the error message could be clearer here; what I understand from this error is that you don't have PyTorch installed in your environment - only TensorFlow; however, that model does not have a TensorFlow checkpoint uploaded to the hub by `textattack`, only a PyTorch variant, so it fails at loading it. Could you install torch in your environment so as to benefit from the torch model? `pip install torch` Running your code should work fine after this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
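A minimal sketch of the two workarounds implied above, assuming PyTorch is installed alongside TensorFlow; `from_pt=True` converts the PyTorch checkpoint on the fly, since this repository only hosts a `pytorch_model.bin`:

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

# option 1: let the pipeline use the PyTorch weights directly
nlp = pipeline(
    "sentiment-analysis",
    model="textattack/bert-base-uncased-yelp-polarity",
    framework="pt",
)

# option 2: load the PyTorch checkpoint into a TensorFlow model
model = TFAutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-yelp-polarity", from_pt=True
)
tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
nlp_tf = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
```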
transformers
11,499
closed
[DeepSpeed] fp32 support
Things we need to sync with the upcoming `deepspeed==0.3.16` release: - `zero.Init` now takes a config as an argument - fp32-support integration, plus doc and tests - start troubleshooting section ### Future TODO will probably do in the next PR: - switch `from_config()` to perform the same `zero.Init` as `from_pretrained` + add test. ### Blocking events PRs waiting to be merged before this PR can be merged: - [x] https://github.com/microsoft/DeepSpeed/pull/1008 `zero.Init(config=ds_config)` new arg - [x] https://github.com/microsoft/DeepSpeed/pull/1004 fp32 support - [x] new release is needed 0.3.16 @sgugger
04-28-2021 22:41:36
04-28-2021 22:41:36
That's correct. Earlier I tried to use `cur_version>pre_version` so it'd already work with the master version, but then people were reporting bugs because they were on some older master version. So while this is less convenient, it avoids invalid bug reports and saves everyone time ;)
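For readers unfamiliar with the version pinning discussed above, a hedged sketch of the kind of gate this implies; the function name and error message are illustrative, not the actual integration code.

```python
# Illustrative version gate: require a released deepspeed, not an arbitrary master build.
import importlib.metadata  # stdlib on Python 3.8+
from packaging import version

def require_min_deepspeed(min_version: str = "0.3.16") -> None:
    installed = version.parse(importlib.metadata.version("deepspeed"))
    if installed < version.parse(min_version):
        raise ImportError(f"deepspeed>={min_version} is required, found {installed}")

require_min_deepspeed()
```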
transformers
11,498
closed
[Flax] Add docstrings & model outputs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds docstring examples & `ModelOutputs` to Flax. This includes `all_hidden_states`. The code necessary for `all_attentions` is added as well, but it requires a change in the official Flax library. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-28-2021 19:39:46
04-28-2021 19:39:46
transformers
11,497
closed
Reformat to make code clearer in tokenizer call
# What does this PR do? While reviewing a PR on the tokenizer call this morning, I had some trouble parsing what was happening, as black reformatted the tests in a way that is quite unreadable (IMO). This PR fixes that. No logic is changed, it's just put in a more human-readable way (again maybe this is just me).
04-28-2021 19:29:42
04-28-2021 19:29:42
"unuglify" -> best branch name ever haha
transformers
11,496
closed
Update TF text classification example
This updates the TF text classification example with several refactors, as well as multi-GPU and TPU support. It's late so I'd like to do one more pass over everything before merging tomorrow, but I'm opening for reviews before I head off for the evening!
04-28-2021 18:43:14
04-28-2021 18:43:14
transformers
11,495
closed
mbart encoder decoder model
Hi, I've been following [this](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing) to implement a bert2bert seq2seq model which works pretty well. Now I would like to change this to mbart (facebook/mbart-large-50) instead of bert. I'm very new to this, but my assumption was that the same script should probably work for other transformers. So I didn't change much, just initialized the tokenizer and also the model's encoder and decoder with mbart. However, I get the following error when passing the data to the bart2bart model during training: > File "/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/python3.7/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 442, in forward encoder_hidden_states=encoder_outputs.hidden_states, AttributeError: 'Seq2SeqModelOutput' object has no attribute 'hidden_states' I'm probably making an obvious mistake, but I'm not sure what the problem is or how I can fix it. Thanks
04-28-2021 17:01:13
04-28-2021 17:01:13
Hey, Could you leave more details: - Your environment - Your code(or edited so i can try it) Looking here: https://huggingface.co/transformers/model_doc/mbart.html?highlight=config#transformers.MBartConfig It doesnt seem to be using hidden_states. Depending how you use the model, you may be grabbing its output incorrectly.<|||||>Thanks. I'm using python 3.7, torch 1.7.1 and installed transformers from the source (4.6.0.dev0). I'm following the exact implementations from [here](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing), with minor edits: ``` model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="cs_CZ", tgt_lang="cs_CZ") ``` changed the function _process_data_to_model_inputs_ ``` def process_data_to_model_inputs(batch): # tokenize the inputs and labels inputs = tokenizer(batch['src'], padding=True, truncation=True, return_tensors="pt") with tokenizer.as_target_tokenizer(): outputs = tokenizer(batch['tgt'], return_tensors="pt", padding=True, truncation=True) labels = outputs.input_ids.clone() data = TensorDataset(torch.tensor(inputs['input_ids']), torch.tensor(inputs['attention_mask']), torch.tensor(outputs['input_ids']), torch.tensor(outputs['attention_mask']), torch.tensor(labels)) dataloader = DataLoader(data, batch_size=batch_size) return dataloader ``` and then training: ``` bart2bart = EncoderDecoderModel.from_encoder_decoder_pretrained("facebook/mbart-large-50", "facebook/mbart-large-50") for i in range(EPOCH): bart2bart.train() for step, batch in enumerate(train_data): batch = tuple(t.to(device) for t in batch) b_input_ids, b_attention_masks_enc, b_input_ids_de, b_attention_masks_de, b_labels= batch outputs = bart2bart(input_ids=b_input_ids, attention_mask=b_attention_masks_enc, labels=b_labels, decoder_input_ids=b_input_ids_de, decoder_attention_mask=b_attention_masks_de) loss, logits = outputs.loss, outputs.logits optimizer.zero_grad() bart2bart.zero_grad() loss.backward() optimizer.step() ``` I'm very new to this, so I'm probably not using the model correctly as you mentioned. But I'm not sure how to fix it.<|||||>Hey, Unfortunately i dont use torch, just tensorflow functional API. However i did note that for EncoderDecoder there can be a special configuration procedure. See here: https://huggingface.co/transformers/model_doc/encoderdecoder.html#transformers.EncoderDecoderConfig It is possible that the default config doesn't behave well with MBart as it does with Bert(they are significantly different). Try passing in the configs for your encoder and decoder (both MBart) or load config from pretrained, there is example code in the above link. It certainly an error in what the decoder expects.<|||||>I tried this, thanks! The issue still remains though...it's not working. @patrickvonplaten any tips for using mbart for an Encoder-Decoder Model based on your example notebook for bert?<|||||>Hey, Fair enough, one last thing id note is: "The EncoderDecoderModel can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder." from https://huggingface.co/transformers/model_doc/encoderdecoder.html I am not sure BART can be used for this. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
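Since mBART is already a full encoder-decoder model, one workaround worth trying (a sketch only, not a verified fix for the traceback above) is to drop the `EncoderDecoderModel` wrapper and fine-tune `MBartForConditionalGeneration` directly; the sentences below are placeholders.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="cs_CZ", tgt_lang="cs_CZ"
)

# Placeholder data: tokenize sources and targets, then train with the built-in LM loss.
inputs = tokenizer(["source sentence"], return_tensors="pt", padding=True)
with tokenizer.as_target_tokenizer():
    labels = tokenizer(["target sentence"], return_tensors="pt", padding=True).input_ids

outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```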
transformers
11,494
closed
correct incorrect dimension comment in Longformer model
This PR fixes a comment that incorrectly states the dimensions of a certain tensor in the `Longformer` model, confusing any reader trying to understand the code. The comment for the corresponding `TFLongformerX` is correct. - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). Tagging @patrickvonplaten (Longformer)
04-28-2021 15:32:38
04-28-2021 15:32:38
transformers
11,493
closed
[Docs] remove paragraph on CI from installation instructions
Fixes #11479 @julien-c suggested to remove this paragraph in #11479 @sgugger
04-28-2021 14:20:25
04-28-2021 14:20:25
transformers
11,492
closed
Split checkpoint from model_name_or_path in examples
# What does this PR do? There is currently a problem in the way we handle resuming from checkpoint in our PyTorch example scripts with `Trainer`: if the user wants to resume from a specific checkpoint, they have to pass `--model_name_or_path checkpoint_folder` which is a bit counter-intuitive but also poses the problem that sometimes the user is passing `--model_name_or_path local_pretrained_model`. This has caused multiple issues that we tried to patch a bit as they came: - in text classification or token classification, the `local_pretrained_model` might be pretrained with a different number of labels which made the loading fail (shape mismatch) - more recently since we're not using `from_pretrained` anymore when loading the checkpoint (to keep the model as provided by the user, with potential frozen layers), the state dict of `local_pretrained_model` generally won't match: if the task is different, its head will have weights incompatible with the model in the end. This PR cleans things up by adding a new training argument called `--resume_from_checkpoint`. To resume training from an explicit checkpoint, the user now has to pass `--resume_from_checkpoint checkpoint_folder` as passing `--model_name_or_path some_local_folder` now only loads the model inside the local folder as a pretrained model and doesn't look for a checkpoint. It's slightly breaking (not in the library, just the commands to run the examples) but cleaner IMO. The training argument `resume_from_checkpoint` is used as a default for the argument of the same name in the `train` method. Might fix #11485
04-28-2021 13:28:32
04-28-2021 13:28:32
Hi guys, just one comment/question from my side: instead of introducing new command-line options (to allow resuming from a checkpoint), what do you think about the following workflow: a) load the model (with AutoModel) b) check if the loaded model is of type "TokenClassification" :thinking: <|||||>> I'm also thinking - if we know we are going to resume from a checkpoint for sure, then replace from_pretrained with from_config and save the double loading of the weights? I'm leaving this to your PoC on one of the seq2seq scripts if you don't mind? This PR is also kind of an urgent bug fix.<|||||>Oh, of course, I had no idea about the urgency. I will do it in another PR then.
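For reference, a short sketch of how the split described in this PR is meant to look from the user side; the argument names follow the PR description and the paths are placeholders.

```python
from transformers import TrainingArguments

# Pre-trained weights still come from --model_name_or_path; resuming is now explicit.
args = TrainingArguments(
    output_dir="tmp/test-clm",
    resume_from_checkpoint="tmp/test-clm/checkpoint-500",  # placeholder checkpoint folder
)
# The example scripts then forward this value to the trainer:
#   trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
print(args.resume_from_checkpoint)
```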
transformers
11,491
closed
Tensorflow “Index out of bound” error when trying to use the TF Longformer transformer in a custom TF network
I am trying to adapt the longformer transformer TF model from huggingface into a bigger three class classification model, i have gotten the model to compile but i cannot run a test example on it. The model and attempted output is as below: >Environment info >transformers version: 2.4.1 >Platform: Windows 10 >Python version: python 3.8 >PyTorch version (GPU?): N/A >Tensorflow version (GPU?): 2.4.1 >Using GPU in script?: yes >Using distributed or parallel set-up in script?: no Who can help @Rocketknight1 (tensorflow) @sgugger (examples ) @patrickvonplaten (Longformer) @konstantin_doncov (very similar design in answer https://stackoverflow.com/questions/63201036/add-additional-layers-to-the-huggingface-transformers ) Information Model I am using: Longformer The problem arises when using: my own modified scripts: (give details below) ```python import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras.optimizers import Adam from tensorflow.keras.models import Model from tensorflow.keras.layers import GaussianNoise,LocallyConnected2D,LocallyConnected1D,Input,MaxPooling1D,Dense,Dropout,BatchNormalization,LSTM,GRU,ConvLSTM2D,Flatten,LayerNormalization,TimeDistributed,Conv1D,Reshape,Masking from tensorflow.keras import backend as K import pathlib from tensorflow.keras.callbacks import Callback from tensorflow.keras import regularizers,callbacks import numpy as np from tensorflow.keras.layers import Concatenate from transformers import TFLongformerModel, LongformerTokenizer if __name__ == "__main__": model_longformer = TFLongformerModel.from_pretrained('longformer-base-4096',output_hidden_states=True) print(model_longformer.summary()) input_ids = tf.keras.Input(shape=(4096),dtype='int32') attention_mask = tf.keras.Input(shape=(4096), dtype='int32') opt=Adam() transformer = model_longformer([input_ids, attention_mask]) transformer_outputs = transformer[1] #sequence output print("Transformer output shape:") print(transformer_outputs.shape) #Grab the last 64 sequence entries, out of allegedly (,768). This is the bit #that causes the error to mention the number '-63' hidden_states_size = 64 hiddes_states_ind = list(range(-hidden_states_size, 0, 1)) selected_hidden_states = tf.keras.layers.concatenate(tuple([transformer_outputs[i] for i in hiddes_states_ind])) print(selected_hidden_states.shape) #array_hidden = np.asarray(selected_hiddes_states) #flatter_longformer_1 = Flatten(array_hidden) reshape_longformer_1 = Reshape((1,1,),input_shape=(49152,))(selected_hidden_states) #49152 = 64*768 rnn_cells = [tf.keras.layers.GRUCell(64,dropout=0.5,recurrent_dropout=0.25,kernel_regularizer=regularizers.l2(0.005)),tf.keras.layers.GRUCell(64,kernel_regularizer=regularizers.l2(0.005),dropout=0,recurrent_dropout=0)] stacked_gru = tf.keras.layers.StackedRNNCells(rnn_cells) gru_layer = tf.keras.layers.RNN(stacked_gru)(reshape_longformer_1) bn_merge = BatchNormalization()(gru_layer) drop_merge = Dropout(0.1)(bn_merge) dense_1 = Dense(25,kernel_regularizer=regularizers.l2(0.0))(drop_merge) #0.015 bn_dense_1 = BatchNormalization()(dense_1) drop_dense_1 = Dropout(0.1)(bn_dense_1) dense_final = Dense(3, activation = "softmax")(drop_dense_1) model = Model(inputs=[input_ids, attention_mask], outputs=dense_final) model.compile(loss="categorical_crossentropy", optimizer=opt) print(model.summary()) text_input = "Queensland detectives are investigating the death of a man after he died in hospital yesterday. 
9News understands an altercation took place between the man - who lives at a unit complex in the Brisbane suburb of Stafford - and a group of friends while they were drinking last week. The altercation resulted in the man being stuck in the back of the head a number of times, with him then being rushed to hospital. The man died from the injuries in hospital yesterday." tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096") encoded_input = tokenizer(text_input, return_tensors='tf',padding='max_length',max_length=4096) model([encoded_input['input_ids'],encoded_input['attention_mask']]) ``` Which outputs: ``` Model: "tf_longformer_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= longformer (TFLongformerMain multiple 148659456 ================================================================= Total params: 148,659,456 Trainable params: 148,659,456 Non-trainable params: 0 _________________________________________________________________ None WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\array_ops.py:5041: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version. Instructions for updating: The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU. 
Transformer output shape: (None, 768) (49152,) Model: "model" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 4096)] 0 __________________________________________________________________________________________________ input_2 (InputLayer) [(None, 4096)] 0 __________________________________________________________________________________________________ tf_longformer_model (TFLongform TFLongformerBaseMode 148659456 input_1[0][0] input_2[0][0] __________________________________________________________________________________________________ tf.__operators__.getitem (Slici (768,) 0 tf_longformer_model[0][14] __________________________________________________________________________________________________ tf.__operators__.getitem_1 (Sli (768,) 0 tf_longformer_model[0][14] __________________________________________________________________________________________________ EDITED OUT ANOTHER 62 SIMILAR LAYERS __________________________________________________________________________________________________ tf.__operators__.getitem_63 (Sl (768,) 0 tf_longformer_model[0][14] __________________________________________________________________________________________________ concatenate (Concatenate) (49152,) 0 tf.__operators__.getitem[0][0] tf.__operators__.getitem_1[0][0] EDITED ANOTHER 62 SIMILAR LINES tf.__operators__.getitem_63[0][0] __________________________________________________________________________________________________ reshape (Reshape) (49152, 1, 1) 0 concatenate[0][0] __________________________________________________________________________________________________ rnn (RNN) (49152, 64) 37824 reshape[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (49152, 64) 256 rnn[0][0] __________________________________________________________________________________________________ dropout_49 (Dropout) (49152, 64) 0 batch_normalization[0][0] __________________________________________________________________________________________________ dense (Dense) (49152, 25) 1625 dropout_49[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (49152, 25) 100 dense[0][0] __________________________________________________________________________________________________ dropout_50 (Dropout) (49152, 25) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (49152, 3) 78 dropout_50[0][0] ================================================================================================== Total params: 148,699,339 Trainable params: 148,699,161 Non-trainable params: 178 __________________________________________________________________________________________________ None 2021-04-29 08:53:45.368311: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at strided_slice_op.cc:108 : Invalid argument: slice index -63 of dimension 0 out of bounds. 
Traceback (most recent call last): File "c:\Automator_alpha\Just_longformer.py", line 60, in <module> model([encoded_input['input_ids'],encoded_input['attention_mask']]) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1014, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 426, in call return self._run_internal_graph( File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 562, in _run_internal_graph outputs = node.layer(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1014, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1520, in _call_wrapper return original_call(*new_args, **new_kwargs) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1326, in _call_wrapper return self._call_wrapper(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1358, in _call_wrapper result = self.function(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper return target(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1037, in _slice_helper return strided_slice( File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper return target(*args, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1210, in strided_slice op = gen_array_ops.strided_slice( File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 10484, in strided_slice _ops.raise_from_not_ok_status(e, name) File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\framework\ops.py", line 6868, in raise_from_not_ok_status six.raise_from(core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.InvalidArgumentError: slice index -63 of dimension 0 out of bounds. [Op:StridedSlice] name: model/tf.__operators__.getitem/strided_slice/ ``` I am using 4096 for the input layers as that was the input length specified in the longformer paper. I have tried using different value, not 64, i have tried iterating through the values without specifying index(with a for statement, in which the error says cannot iterate not knowing the first dimension). I am new to this and feel like i am missing something basic.
04-28-2021 13:13:30
04-28-2021 13:13:30
I think you should replace `transformer_outputs = transformer[1]` with `transformer_outputs = transformer[0]`. <|||||>> I think you should replace `transformer_outputs = transformer[1]` with `transformer_outputs = transformer[0]`. @fredo838 Thanks for the reply, yes, I have tried this -- in this case the output shape is (None,4096,768) as per config. I tried taking 64 of these entries but I get the exact same error, in a slightly different format.<|||||>How about also changing `[transformer_outputs[i] for i in hiddes_states_ind]` to `[transformer_outputs[:, i] for i in hiddes_states_ind]`? (That way you index the token dimension, not the batch dimension.)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
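Following the last suggestion in the thread, a small illustration of slicing the token dimension instead of the batch dimension. Shapes assume the usual `(batch, seq_len, hidden)` sequence output; this is a sketch, not a tested rebuild of the full model above.

```python
import tensorflow as tf
from transformers import TFLongformerModel

model_longformer = TFLongformerModel.from_pretrained("allenai/longformer-base-4096")
input_ids = tf.keras.Input(shape=(4096,), dtype="int32")
attention_mask = tf.keras.Input(shape=(4096,), dtype="int32")

sequence_output = model_longformer([input_ids, attention_mask])[0]  # (batch, 4096, 768)
last_64 = sequence_output[:, -64:, :]                               # (batch, 64, 768)
flat = tf.keras.layers.Flatten()(last_64)                           # (batch, 64 * 768)
```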
transformers
11,490
closed
add importlib_metadata as dependency as it is required for py<3.8
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/11399 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-28-2021 12:38:08
04-28-2021 12:38:08
Shouldn't this only be installed for python versions inferior to 3.8?<|||||>@LysandreJik, thanks for reviewing my PR. I think at the moment, we cannot use selectors with `noarch` python packages. Please see warning at - https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html?highlight=preprocess-selectors#architecture-independent-packages. <|||||>I see! Let's go with it then. Thanks for the fix!<|||||>Could you open a PR against the `master` branch so that we can also apply that fix for future conda releases?<|||||>Also, are you savvy about `conda`? We're having an issue with our recent version releases: [failing suite](https://github.com/huggingface/transformers/runs/2334821335?check_suite_focus=true), do you have an idea of what might be the conflict happening here?<|||||>Thanks @LysandreJik for merging this one. Yes I will open a PR against master too. <|||||>Added PR https://github.com/huggingface/transformers/pull/11591 for master branch.<|||||>> Also, are you savvy about `conda`? We're having an issue with our recent version releases: [failing suite](https://github.com/huggingface/transformers/runs/2334821335?check_suite_focus=true), do you have an idea of what might be the conflict happening here? I think this is failing because `python` version in the build env is `3.9` and we do not have `tokenizers` for `py39` on `HuggingFace` channel. ``` $ conda search tokenizers=0.10.2 -c HuggingFace Loading channels: done # Name Version Build Channel tokenizers 0.10.2 py35_0 HuggingFace tokenizers 0.10.2 py36_0 HuggingFace tokenizers 0.10.2 py37_0 HuggingFace tokenizers 0.10.2 py38_0 HuggingFace ``` Looks like anaconda upload failed for tokenizers 3.9 - https://github.com/huggingface/tokenizers/runs/2272898351#step:8:17 I was able to build transformers locally with py38. <|||||>Ah, I thought we were already building on 3.8, that's my bad. Thanks for your help!
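For context, a minimal sketch of the import idiom that makes `importlib_metadata` a hard requirement on Python < 3.8; this mirrors the usual backport pattern, not necessarily the exact code in `transformers`.

```python
import sys

if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata  # stdlib on 3.8+
else:
    import importlib_metadata  # provided by the importlib_metadata backport package

print(importlib_metadata.version("transformers"))
```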
transformers
11,489
closed
Update README.md
Add link to code # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-28-2021 11:24:41
04-28-2021 11:24:41
transformers
11,488
closed
TFLongformerForMaskedMLM example throws ValueError "shapes are incompatible"
An official example of the `TFLongFormerX` page does not work. ## Environment info - `transformers` version: 2.4.1 - Platform: ubuntu 20.04 - Python version: python3.8 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten (Longformer) @Rocketknight1 (tensorflow) @sgugger (maintained examples ) ## Information Model I am using: Longformer The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. `docker run -it --rm python:3.8 bash` (no gpus attached) 2. `python3 -m pip install pip --upgrade` 3. `python3 -m pip install transformers tensorflow` 4. `python3` -> launch interactive shell 5. run following lines: ``` from transformers import LongformerTokenizer, TFLongformerForMaskedLM import tensorflow as tf tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096') model = TFLongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096') inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf") inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] outputs = model(inputs) # loss = outputs.loss # logits = outputs.logits ``` This throws following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1012, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/usr/local/lib/python3.8/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 2140, in call loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], prediction_scores) File "/usr/local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 158, in compute_loss reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss) File "/usr/local/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper return target(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 1831, in boolean_mask_v2 return boolean_mask(tensor, mask, name, axis) File "/usr/local/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper return target(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 1751, in boolean_mask shape_tensor[axis:axis + ndims_mask].assert_is_compatible_with(shape_mask) File "/usr/local/lib/python3.8/site-packages/tensorflow/python/framework/tensor_shape.py", line 1134, in assert_is_compatible_with raise ValueError("Shapes %s and %s are incompatible" % (self, other)) ValueError: Shapes (11,) and (9,) are incompatible ```
04-28-2021 10:30:31
04-28-2021 10:30:31
Hi! The model is working fine here, but the problem is that "[MASK]" and "Paris" are being tokenized as different numbers of tokens, which is where your shape error is coming from. Can you link me to the exact script you got this example from?<|||||>It's under this headline, here's the permalink: https://huggingface.co/transformers/model_doc/longformer.html#tflongformerformaskedlm<|||||>ah so it's probably just updating `inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] ` to `inputs["labels"] = tokenizer("The capital of [MASK] is Paris.", return_tensors="tf")["input_ids"]`, no?<|||||>I checked and you're absolutely right, the example as written does not work. I did some digging and the problem is that the mask sequence for this model is actually '\<mask\>' and not '[MASK]'. Therefore, 'Paris' actually does get correctly tokenized as one token but '[MASK]' does not get recognized as a special character and is 'spelled out' with three word-piece tokens instead. (You can see what splits the tokenizer chose by using `tokenizer.convert_ids_to_tokens()` on the tokenized inputs). The example should work if you replace '[MASK]' with '\<mask\>'. Can you try that and let me know? If it works, we can make a PR to fix this example!<|||||>So now the following example: ```from transformers import LongformerTokenizer, TFLongformerForMaskedLM import tensorflow as tf tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096') model = TFLongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096') inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf") inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] outputs = model(inputs) loss = outputs.loss logits = outputs.logits preds = tf.argmax(logits, axis=2) predicted_tokens = tokenizer.convert_ids_to_tokens(tf.squeeze(preds)) print("predicted_tokens: ", predicted_tokens) ``` yields: `['<s>', 'The', 'Ġcapital', 'Ġof', 'ĠFrance', 'Ġis', 'ĠParis', '.', '</s>']` So at least we're doing something right, but there's still this weird `Ġ` character on every non-first token.<|||||>Ah, yes! The Ġ character is used to indicate word breaks. If you want to see the pure string output without it, try using the `decode()` method instead of `convert_ids_to_tokens()`. Other than that, though, your example looks good! I talked with people on the team and we can't use it directly, annoyingly - the examples are all built from the same template, so we can't easily change just one. Still, we can pass some arguments to make sure our example works for Longformer in future. The relevant bit is [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_tf_longformer.py#L2080). If you'd like to try it yourself, you can submit a PR to add the argument `mask='<mask>'` to the `add_code_sample_docstrings` decorator. If that sounds like a lot of work, just let me know and I'll make the PR and credit you for spotting it!<|||||>@Rocketknight1 I added a PR (https://github.com/huggingface/transformers/pull/11559)<|||||>Closing this because we have the PR now!
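A small sketch of the fix agreed on above, using the tokenizer's own mask token instead of a hard-coded `[MASK]` string:

```python
import tensorflow as tf
from transformers import LongformerTokenizer, TFLongformerForMaskedLM

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")

# tokenizer.mask_token is "<mask>" for this model, so it stays a single special token.
inputs = tokenizer(f"The capital of France is {tokenizer.mask_token}.", return_tensors="tf")
inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]

outputs = model(inputs)
predicted_ids = tf.argmax(outputs.logits, axis=-1)
print(tokenizer.decode(tf.squeeze(predicted_ids)))
```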
transformers
11,487
closed
Importing problem
- `transformers` version: 4.5.1 - Python version: 3.8 - The import simply fails with: `cannot import name 'PegasusTokenizer' from 'transformers'`
04-28-2021 08:37:54
04-28-2021 08:37:54
Try the newest version, 4.6.0.dev0 ![image](https://user-images.githubusercontent.com/54096137/116375420-d7b16580-a841-11eb-8d0e-e562d5dfb263.png) <|||||>Could you install `sentencepiece` and try again? The `PegasusTokenizer` is based on the `sentencepiece` library.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am also facing the same issue
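A quick sketch of checking the missing dependency up front, assuming the failure really is the absent `sentencepiece` package as suggested above; the model id is just an example.

```python
import importlib.util

if importlib.util.find_spec("sentencepiece") is None:
    raise ImportError("PegasusTokenizer needs sentencepiece: run `pip install sentencepiece` first.")

from transformers import PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
print(tokenizer("Hello world").input_ids)
```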
transformers
11,486
closed
Update `PreTrainedTokenizerBase` to check/handle batch length for `text_pair` parameter
Consider the following example: ```py from transformers import AutoTokenizer, AutoModelForQuestionAnswering import torch tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad") text = r""" 🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between TensorFlow 2.0 and PyTorch. """ questions = [ "How many pretrained models are available in 🤗 Transformers?", "What does 🤗 Transformers provide?", "🤗 Transformers provides interoperability between which frameworks?" ] inp = tokenizer(text=questions, text_pair=text, add_special_tokens=True, padding=True, truncation=True, return_tensors="pt") print(inp.input_ids.shape) ``` **The error in the above example is that the parameter `text_pair` is a string, but is supposed to be a `List[str]` to match the batch size of `text`.** Currently, this silently fails because when `text_pair` is a string it is treated as an iterable causing `zip(text, text_pair)` to erroneously build the wrong inputs to the model. This PR adds the following: 1. If `text_pair` is a string but the user passes in a batch of `text`, we convert the input for them automatically (For example when you want to ask multiple questions of the same passage). 2. Adds error checking to see if the batch length of `text` matches the batch length of `text_pair` ONLY when a batch of inputs is used. @LysandreJik @sgugger
04-28-2021 07:54:34
04-28-2021 07:54:34
Thanks! Failure in the tests is unrelated (some connection problem), so merging.
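For readers on older versions, a sketch of the workaround this PR automates: give `text_pair` the same batch length as `text` by repeating the passage. The model id is the one from the PR example; the strings are shortened placeholders.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "bert-large-uncased-whole-word-masking-finetuned-squad"
)
text = "🤗 Transformers provides thousands of pretrained models."
questions = [
    "What does 🤗 Transformers provide?",
    "How many pretrained models are available?",
]

inputs = tokenizer(
    text=questions,
    text_pair=[text] * len(questions),  # one copy of the passage per question
    padding=True,
    truncation=True,
    return_tensors="pt",
)
print(inputs.input_ids.shape)
```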
transformers
11,485
closed
run_mlm.py : Missing key(s) in state_dict & Unexpected key(s) in state_dict
## Environment info - `transformers` version: 4.6.0.dev0 - Platform: Ubuntu 16.04.3 LTS - Python version: Python 3.6.13 :: Anaconda, Inc. - PyTorch version (GPU?): 1.8.1+cu102 - Tensorflow version (GPU?): - Using GPU in script?: YES - Using distributed or parallel set-up in script?: YES ### Who can help @sgugger ## Information Model I am using roberta: The problem arises when using: - [x] the official example scripts: run_mlm.py The tasks I am working on is: - [x] my own task or dataset: wikitext-2-raw-txt (https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/) ## To reproduce Steps to reproduce the behavior: I follow the example https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling When I run ``` python run_mlm.py \ --output_dir tmp/test-mlm \ --model_name_or_path roberta-base \ --do_train \ --train_file wikitext-2-raw-txt/wiki.train.txt \ --do_eval \ --validation_file wikitext-2-raw-txt/wiki.valid.txt \ --line_by_line ``` and the error occurs ``` 2021-04-28 16:18:24.068938: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 04/28/2021 16:18:25 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4distributed training: False, 16-bits training: False 04/28/2021 16:18:25 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=tmp/test-mlm, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Apr28_16-18-25_Devbox4, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=tmp/test-mlm, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=['tensorboard', 'wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, _n_gpu=4, mp_parameters=) 04/28/2021 16:18:26 - WARNING - datasets.builder - Using custom data configuration default-b1467a68ec9fe52f 04/28/2021 16:18:27 - WARNING - datasets.builder - Reusing dataset text (/home/A50442/.cache/huggingface/datasets/text/default-b1467a68ec9fe52f/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5) [INFO|configuration_utils.py:498] 2021-04-28 16:18:27,029 >> loading configuration file roberta-base/config.json [INFO|configuration_utils.py:536] 2021-04-28 16:18:27,029 >> Model config RobertaConfig { "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, 
"eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.6.0.dev0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } [INFO|configuration_utils.py:498] 2021-04-28 16:18:27,030 >> loading configuration file roberta-base/config.json [INFO|configuration_utils.py:536] 2021-04-28 16:18:27,030 >> Model config RobertaConfig { "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.6.0.dev0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } [INFO|tokenization_utils_base.py:1649] 2021-04-28 16:18:27,030 >> Didn't find file roberta-base/added_tokens.json. We won't load it. [INFO|tokenization_utils_base.py:1649] 2021-04-28 16:18:27,030 >> Didn't find file roberta-base/special_tokens_map.json. We won't load it. [INFO|tokenization_utils_base.py:1649] 2021-04-28 16:18:27,030 >> Didn't find file roberta-base/tokenizer_config.json. We won't load it. [INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,030 >> loading file roberta-base/vocab.json [INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,030 >> loading file roberta-base/merges.txt [INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,031 >> loading file roberta-base/tokenizer.json [INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,031 >> loading file None [INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,031 >> loading file None [INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,031 >> loading file None [INFO|modeling_utils.py:1111] 2021-04-28 16:18:27,103 >> loading weights file roberta-base/pytorch_model.bin [INFO|modeling_utils.py:1257] 2021-04-28 16:18:30,300 >> All model checkpoint weights were used when initializing RobertaForMaskedLM. [INFO|modeling_utils.py:1266] 2021-04-28 16:18:30,300 >> All the weights of RobertaForMaskedLM were initialized from the model checkpoint at roberta-base. If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForMaskedLM for predictions without further training. 100%|██████████████████████████████████████████████████████████████████████████████████████| 37/37 [00:01<00:00, 18.82ba/s] 100%|████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 20.73ba/s] huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... 
To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) [INFO|trainer.py:1027] 2021-04-28 16:18:34,809 >> Loading model from roberta-base). Traceback (most recent call last): File "run_mlm.py", line 496, in <module> main() File "run_mlm.py", line 459, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/A50442/anaconda3/envs/transformer/lib/python3.6/site-packages/transformers/trainer.py", line 1046, in train self.model.load_state_dict(state_dict) File "/home/A50442/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1224, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for RobertaForMaskedLM: Missing key(s) in state_dict: "roberta.embeddings.position_ids", "lm_head.decoder.bias". Unexpected key(s) in state_dict: "roberta.pooler.dense.weight", "roberta.pooler.dense.bias". ``` ## Expected behavior The expected behavior is that I will get a new pretrain language model based on my dataset
04-28-2021 07:29:22
04-28-2021 07:29:22
The command runs for me and according to your logs, the `Trainer` is loading a local checkpoint named `roberta-base`. Do you have a local folder named `roberta-base`? It looks like it contains a checkpoint different from the actual `roberta-base` model, which messes up and creates the error. Could you move that folder and try again?<|||||>@sgugger Yes, I created a local folder named `roberta-base`, but the `roberta-base` folder contents are downloaded from `huggingface` (https://huggingface.co/roberta-base/tree/main). The `language-modeling` folder screenshot is shown below: ![image](https://user-images.githubusercontent.com/54096137/116496509-d599e600-a8d7-11eb-951b-400d3ca1b05d.png) The `roberta-base` folder screenshot is shown below: ![image](https://user-images.githubusercontent.com/54096137/116496554-e64a5c00-a8d7-11eb-8e75-b8dc99fe0f23.png) So I am confused...<|||||>I think it's linked to the bug #11492 is fixing. Should be merged today and then you can try on a source install!
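As a quick sanity check (not a fix, that is what #11492 is for), one can confirm the locally downloaded folder loads fine as a plain pretrained model, which isolates the failure to the checkpoint-resuming logic; `./roberta-base` is assumed to be the local folder from the screenshots above.

```python
from transformers import RobertaForMaskedLM, RobertaTokenizerFast

model = RobertaForMaskedLM.from_pretrained("./roberta-base")
tokenizer = RobertaTokenizerFast.from_pretrained("./roberta-base")
print(model.config.model_type, tokenizer.mask_token)
```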
transformers
11,484
closed
MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50") Not working
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0 - Platform: anaconda - Python version: 3.7 - PyTorch version (GPU?): 1.1.0 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten Models: MBart50 ## Information Model I am using (Bert, XLNet ...): MBart50 The problem arises when using: Official script as in https://huggingface.co/transformers/master/model_doc/mbart.html#transformers.MBart50Tokenizer The tasks I am working on is: Official summarization task ## To reproduce Steps to reproduce the behavior: 1. from transformers import MBartForConditionalGeneration, MBart50TokenizerFast 2. model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50") Error: File "/home/aniruddha/anaconda3/envs/rupak_qg/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1066, in from_pretrained f"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' " OSError: Unable to load weights from pytorch checkpoint file for 'facebook/mbart-large-50' at '/home/aniruddha/.cache/huggingface/transformers/66cec75cd01a09243232a4dbb6e99525d2571fd2c73870343ad4573df28f5924.e61a75127adcaf4f5c0903618b64b779413423b5f661ece62a4839582b2b850a'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ## Expected behavior The model should load correctly.
04-28-2021 03:56:36
04-28-2021 03:56:36
I can load the model without any issues, I think the issue here is the `from_pretrained` call is hitting the cache and the model is not cached properly. You could force the download by passing `force_download=True`

```python
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50", force_download=True)
```<|||||>I am also still facing the problem, can you please mention if this is version-specific<|||||>but it is successfully loading for the BART-large model <|||||>You could try deleting the cache in that case.<|||||>Still not working, the same error is coming <|||||>See this [colab](https://colab.research.google.com/drive/1ENrFbZIxmK0ZtrtUduCADEZDZtg_QKZT?usp=sharing) it uses 4.5.0 and can load mbart. <|||||>> I can load the model without any issues, I think the issue here is the `from_pretrained` call is hitting the cache and the model is not cached properly. You could force the download by passing `force_download=True`
>
> ```python
> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50", force_download=True)
> ```

Hi, do you know how to run transformer models like t5-small or facebook/bart-large-cnn without loading pre-trained weights? When using run_summarization.py, I only want to train their original model architecture without the pre-trained weights. Thank you very much! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
transformers
11,483
closed
The performance of the huggingface QA model depend on the order in which it loads
- `transformers` version: 4.4.2 - Python version: 3.7 I am implementing a paper that I read, based on the Question Answering code "run_qa.py" on huggingface. I added a few layers to ELECTRA, and I trained and saved only the parameters for the added layers. When I evaluate, I load those parameters and the rest are initialized from the parameters of the pre-trained ELECTRA model. ```
def load_cda_qa_model(args, phase, checkpoint=None):
    # assert phase == 'train' or phase == 'eval'
    config = CONFIG_CLASSES[args.model_type].from_pretrained(args.model_name_or_path)
    model = MODEL_FOR_QUESTION_ANSWERING[args.model_type].from_pretrained(checkpoint)
    tmp_electra = MODEL_FOR_QUESTION_ANSWERING['electra'].from_pretrained(args.model_name_or_path, config=config)
    electra_state_dict = tmp_electra.state_dict()
    model_state_dict = model.state_dict()

    for electra_key, electra_value in electra_state_dict.items():
        model_state_dict[electra_key] = electra_value

    model.load_state_dict(model_state_dict)

    return model
``` The results of the two cases are: ## case 1 ![1](https://user-images.githubusercontent.com/63441709/116337637-66f45400-a815-11eb-9635-33c78a42f9bd.JPG) ## case 2 ![2](https://user-images.githubusercontent.com/63441709/116337644-678cea80-a815-11eb-8167-0dad391d82c4.JPG) What I want to ask here is why the results change depending on the order of the red and yellow parts, when there seems to be no difference in code flow.
04-28-2021 02:37:03
04-28-2021 02:37:03
Hi there, Please use the [forum](https://discuss.huggingface.co/) to ask these types of questions, use issues to report bugs or feature requests. Thanks!<|||||>> Hi there, > > Please use the [forum](https://discuss.huggingface.co/) to ask these types of questions, use issues to report bugs or feature requests. > > Thanks! Ok I will. Thank you!
transformers
11,482
closed
[Docs] Clarify Subphrase classification?
I am going through the docs linearly and am reading [Summary of The tasks](https://huggingface.co/transformers/task_summary.html). The second section of [Sequence Classification](https://huggingface.co/transformers/task_summary.html) uses `bert-base-cased-finetuned-mrpc` to do paraphrase classification. This is a bit opaque to me, as when I go to the [model page](https://huggingface.co/bert-base-cased-finetuned-mrpc) for that particular model, it doesn't really mention this capability? How could I discover other models that have this capability? How do I verify what this model was fine-tuned on if I was searching for this information from the model hub? Is there some other documentation about this that I am missing? Just trying to understand so I can help clarify the docs. Thanks! @sgugger
04-28-2021 01:05:23
04-28-2021 01:05:23
The problem lies mostly on the model page in that case, it should show the text classification widget, not the masked LM widget. As for adding a model card or expanding its capabilities, the feedback should go on the [forums](https://discuss.huggingface.co/), this is not really an issue with transformers per se. This is a great example of where pull requests on the model hub would be useful! (cc @julien-c )<|||||>Ok thanks I'll close this issue and open an appropriate PR/Issue in other places. I'll try to find where to update the filter and I'll put the model card on my todo list. Thanks for the pointers<|||||>For anyone that finds this issue: - [here is the forum post that describes how to suggest model cards](https://discuss.huggingface.co/t/about-the-model-cards-category/2777) - For clarification `bert-base-cased-finetuned-mrpc` does indeed do masked LM but it additionally also does text classification (I tried doing both). But, it looks like models on the hub can only associated with one widget at a time? so It could be the case that models have hidden functionality, or does this particular model violate some kind of norm? <|||||>- @hamelsmu The model card for this model should a minima reference the `mrpc` dataset, though we don't have it as a standalone dataset so the way to go would be to link to `glue` instead. (right @lhoestq?) - you can also add a `tags: - paraphrase-classification ` to the YAML (tags are pretty much open) - read the doc about the model hub here http://huggingface.co/docs (should it be linked more prominently from the transformers doc?) - As this is a "legacy" model (not inside an organization), it's hard to remember who trained it and therefore could answer more questions (the original BERT authors? Someone from HF?) - To your last question, most models only have one head so one task – for simplicity we enforce this constraint of only having one widget or Inference API endpoint per model<|||||>Yes for now we have to link to `glue`. Though I've noticed that many models use the `mrpc` tag that doesn't link to glue Maybe we can define a syntax that mentions both glue (to link to the glue dataset page) and MRPC (to mention which config if the glue dataset was used). Maybe `glue/mrpc` or `glue:mrpc`. Shall I open an issue on the website repo about this @julien-c ?
transformers
11,481
closed
Fix checkpointing in SageMaker MP
# What does this PR do? The merge of the two Trainers removed the call to `optimizer.state_dict()` being made only on `dp_rank` 0 processes. This PR adds it back.
04-28-2021 00:01:56
04-28-2021 00:01:56
transformers
11,480
closed
Error In Running Predictions for run_text_classification.py
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.8.0 - PyTorch version (GPU?): 1.7.1+cpu (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @Rocketknight1 ## Information I ran training successfully with below command run_text_classification.py \ --model_name_or_path microsoft/xtremedistil-l6-h384-uncased \ --output_dir classificationoutput \ --do_train \ --train_file PreparedData.csv \ --do_eval \ --validation_file PreparedData.csv \ --num_train_epochs 100 \ --test_file PreparedData.csv train file format label,data l1, my sentence 1 l1, my sentence 2 l2, my sentence 3 l2, my sentence 4 . . . ## To reproduce Now after training i want to do some predictions so created PredictionData.csv with single column as below data my sentence 1 my sentence 2 my sentence 3 . . . Then ran the prediction as below using the model and config from the output of training %run run_text_classification.py \ --model_name_or_path C:\Users\xxxxxxxxx\classificationoutput\tf_model.h5 \ --config_name C:\Users\xxxxxxxxx\classificationoutput\config.json\ --output_dir classificationoutput \ --do_predict \ --test_file PredictionData.csv ## Got Error as below INFO:__main__:Checkpoint detected, resuming training from checkpoint in classificationoutput. To avoid this behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch. INFO:__main__:Training/evaluation parameters TrainingArguments(output_dir=classificationoutput, overwrite_output_dir=False, do_train=False, do_eval=None, do_predict=True, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs\Apr27_15-55-43_GC8SQLQ2E, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=classificationoutput, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=0, mp_parameters=) INFO:__main__:Loading a local file for test: PredictionData.csv WARNING:datasets.builder:Using custom data configuration default-5a3e83535773f703 Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to 
C:\Users\xxxxxxxxx\.cache\huggingface\datasets\csv\default-5a3e83535773f703\0.0.0\2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0... Dataset csv downloaded and prepared to C:\Users\xxxxxxxxx\.cache\huggingface\datasets\csv\default-5a3e83535773f703\0.0.0\2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0. Subsequent calls will reuse this data. --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) ~\run_text_classification.py in <module> 535 536 if __name__ == "__main__": --> 537 main() ~\run_text_classification.py in main() 350 use_auth_token=True if model_args.use_auth_token else None, 351 ) --> 352 tokenizer = AutoTokenizer.from_pretrained( 353 model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, 354 cache_dir=model_args.cache_dir, c:\python38\lib\site-packages\transformers\models\auto\tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 388 kwargs["_from_auto"] = True 389 if not isinstance(config, PretrainedConfig): --> 390 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 391 392 use_fast = kwargs.pop("use_fast", True) c:\python38\lib\site-packages\transformers\models\auto\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 396 """ 397 kwargs["_from_auto"] = True --> 398 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 399 if "model_type" in config_dict: 400 config_class = CONFIG_MAPPING[config_dict["model_type"]] c:\python38\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 466 ) 467 # Load config dict --> 468 config_dict = cls._dict_from_json_file(resolved_config_file) 469 470 except EnvironmentError as err: c:\python38\lib\site-packages\transformers\configuration_utils.py in _dict_from_json_file(cls, json_file) 549 def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]): 550 with open(json_file, "r", encoding="utf-8") as reader: --> 551 text = reader.read() 552 return json.loads(text) 553 c:\python38\lib\codecs.py in decode(self, input, final) 320 # decode input (taking the buffer into account) 321 data = self.buffer + input --> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call 324 self.buffer = data[consumed:] UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
04-27-2021 23:10:55
04-27-2021 23:10:55
Hi, thank you for the report! I'm examining it now and I'm hoping to push some updates to that script today.<|||||>I haven't been able to reproduce this issue, but I have a PR open to make modifications to the script. I'll let you know as soon as the updated version is available - would you be willing to check if the issue is still there once it is?<|||||>The script has been updated! Please let me know if you encounter the same problems.<|||||>@Rocketknight1 Thank you i am able to run predictions and it gives correct prediction for trained data.. Here is my Wish List if you can provide: I am trying to integrate run_text_classification.py to my program where i will provide it a sentence and it gives the prediction as per labelling. and my program uses that label for something useful. 1. It writes to a file the prediction result, will that be possible if I import run_text_classification.py in my program and you exposed a function which takes input list of strings for sentences to classify with all other parameters required to run. And returns list of strings with predictions in same order. That way i do not have to read a file always for the result. 2. Default fallback label: Meaning if i passed a sentence for classification, if model is not able to classify as per trained labelled data, it returns the Default fallback label set by user. Tat way i can know for which sentences i have to retrain the model @Rocketknight1 thanks in advance Rajesh Dhiman <|||||>Hm! Number 2 in particular is a fairly advanced ML topic - getting models to know which inputs they can and can't classify accurately is surprisingly hard. This is a fairly fundamental problem that people are still writing papers about, and not one we can really tackle well in an introductory example. Your suggestion for 1) is certainly possible, though, and we'll think about it! Our intention isn't to support every possible use case with the examples, though! We really just want to show one working example that shows off a lot of the features of the library, and we expect that users will have to modify the code themselves in a lot of cases.<|||||>@Rocketknight1 Hi How I can get the Confidence score in the prediction results.. i need to have that, Is there any option i can set in settings <|||||>@Rocketknight1 i got it.. Sorry it was a dumb Question.. I am a nerd..<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
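For the confidence-score question in this thread, a rough sketch of running predictions outside the script: load the fine-tuned model from the training `--output_dir` and softmax the logits. The directory name is the one used above; if the tokenizer was not saved there, load it from the original checkpoint name instead.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_dir = "classificationoutput"  # the --output_dir used for training in this thread
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = TFAutoModelForSequenceClassification.from_pretrained(model_dir)

sentences = ["my sentence 1", "my sentence 2"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="tf")

logits = model(inputs).logits
probs = tf.nn.softmax(logits, axis=-1)  # per-class confidence scores
preds = tf.argmax(probs, axis=-1)       # predicted label ids (map back via model.config.id2label)
```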
transformers
11,479
closed
[Docs] Add Caching Example For CI?
From the [installation instructions](https://huggingface.co/transformers/installation.html#caching-models):

> If you expect to be downloading large volumes of models (more than 10,000) from huggingface.co (for instance through your CI setup, or a large-scale production deployment), please cache the model files on your end. It will be way faster, and cheaper. Feel free to contact us privately, we’d love to help with this.

I'm happy to write an example of how to cache with GitHub Actions. Shall I contribute this to the docs? If so, please assign the issue to me and I'll do it. If this is not a good idea, please feel free to close the issue.

@sgugger
04-27-2021 22:27:54
04-27-2021 22:27:54
This was written by @julien-c so I'll let him reply on this :-)<|||||>This was written before we moved from S3 to our own Cloudfront-served repositories so we could also probably just remove that paragraph.<|||||>Interesting. Maybe it could _still_ be a good reminder as many folks would forget to do this (I know I might have and have wasted so much of my own compute!)? However, I'll open a PR to remove the paragraph if that's preferred 🙇🏽
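Whatever happens to that paragraph, the caching mechanism itself is easy to point at a directory that a CI job can persist between runs, either via the `TRANSFORMERS_CACHE` environment variable (set before Python starts) or per call. A sketch with an arbitrary cache path:

```python
from transformers import AutoModel, AutoTokenizer

# Equivalent to exporting TRANSFORMERS_CACHE=./hf-cache before launching Python;
# the CI config then only needs to save/restore the ./hf-cache directory.
cache_dir = "./hf-cache"

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", cache_dir=cache_dir)
model = AutoModel.from_pretrained("bert-base-cased", cache_dir=cache_dir)
```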
transformers
11,478
closed
[Flax] Add FlaxBart model
# 🚀 Feature request

It would be nice to implement a Flax version of the BART model.

## Motivation

Narrow the gap in support between encoder transformers (BERT, RoBERTa, ...) and encoder-decoder models (BART, ...).

## Your contribution

I've been working on this now, so I hope to send a PR soon.

@patrickvonplaten @sgugger
04-27-2021 16:48:29
04-27-2021 16:48:29
Wow this sounds awesome! Let me know if you need help! (I think we will need to look at the `generate()` function) together :-)<|||||>Hi @patrickvonplaten, as I indicated I've started working on `FlaxBart` which can be found on this branch https://github.com/stancld/transformers/tree/FlaxBart . So far, I've implemented `FlaxBartModel` and `FlaxBartForConditionalGeneration` with some remaining pieces to do, but it is possible to run them. As there is no official template for Flax encoder-decoder models, I've tried to follow Torch implementation of Bart and Flax implementation of Bert. Before diving deeper and finishing all the components, tests etc, could I, please, ask you to provide me with short feedback if this structure seems ok to you? Thanks a lot in advance! :) (I guess I left some redundant code there but I'm gonna polish it soon)<|||||>This sounds great :-) Thanks a lot for tackling this! Could you maybe make a [WIP] PR from your branch and ping me - this would make it a bit easier to review the code<|||||>Hey @stancld, I looked quickly and in general the PR already looks great :-) A couple of things: - we don't allow `labels` as an input argument to Flax models (and actually probably even won't do this in the future). Flax/Jax is inherently functional which means that a loss function should wrap the model forward function and not the other way around - Weight tying is done a bit differently as in PyTorch, thus we don't need ` def get_input_embeddings(self):` for now - `return_dict` is now also implemented in `FlaxBert` -> so this can be copied from there :-) => In short the design already looks great :-) I think you can open a PR & we'll discuss everything on the PR <|||||>@patrickvonplaten Thanks a lot for the feedback! I will create [WIP] PR later today :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>ping
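To make the "loss function should wrap the model forward function" point above concrete, here is a rough sketch, not code from the linked branch: `model`, `batch` and `labels` are placeholders for a Flax head and its inputs.

```python
import jax
import optax

def loss_fn(params, batch, labels):
    # in Flax the model call is a pure function of `params`, so the loss wraps it
    # (instead of the model taking `labels` and returning a loss, as in PyTorch)
    logits = model(**batch, params=params).logits
    onehot = jax.nn.one_hot(labels, logits.shape[-1])
    return optax.softmax_cross_entropy(logits, onehot).mean()

loss, grads = jax.value_and_grad(loss_fn)(params, batch, labels)
```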
transformers
11,477
closed
Move integrations imports before any ML framework imports
## Fixes Current transformers breaks compatibility with comet_ml because it needs to be imported before any ML frameworks (such as torch). This PR simply moves the imports earlier in the flow. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - integrations and imports: @sgugger
04-27-2021 16:10:59
04-27-2021 16:10:59
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,476
closed
Adding new argument `max_new_tokens` for generate.
# What does this PR do? This is a proposal to add a new argument `max_new_tokens` to `generate`. This include a `MaxNewTokensCriteria` that enables callers that don't know about the token length ahead (like pipelines callers) to manage more easily the length of their generated output. `max_length` is a hard to use argument for generate: - It means different things in `encoder-decoder` context and `decoder-only` context - `encoder-decoder`: max_length = max_new_tokens - 1 (in case of bos) - `decoder-only`: max_length = input_ids.shape[-1] + max_new_tokens - It is hard to understand from a pipeline point of view where `tokens` do not exist yet. This is a proposal to add a new argument `max_new_tokens` to `generate`. This include a `MaxNewTokensCriteria` which is a bit redundant with respect to `MaxLengthCriteria`. It is a consistency concern for now but debattable. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @patrickvonplaten Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-27-2021 16:04:35
04-27-2021 16:04:35
@LysandreJik @patrickvonplaten Forgot to add you for review, my bad.
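For reference, the intended usage once this lands, on a decoder-only model where the caller no longer has to add the prompt length to `max_length` by hand (the model choice here is arbitrary):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Hello, my dog is", return_tensors="pt").input_ids

# generate up to 20 *new* tokens, regardless of how long the prompt is
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```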
transformers
11,475
closed
Experimental symbolic tracing feature with torch.fx for BERT, ELECTRA and T5
# What does this PR do? This PR provides a function called "symbolic_trace" which enables symbolic tracing for models of the library using the new and still experimental torch.fx feature. Our models can't be symbolically traces directly using `torch.fx`, so this is wrapper function that overcomes various issues. This new feature allows to perform [many kinds of transformations to the graph](https://pytorch.org/docs/stable/fx.html). It's also needed for projects like https://github.com/flexflow/FlexFlow/ As an experiment currently only three models are supported: BERT, ELECTRA and T5 (support for other models will follow soon). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-27-2021 16:02:47
04-27-2021 16:02:47
Let's also: 1. add some basic usage doc - we can start with just docstring 2. add one test for one of the models, polish it and then see how to replicate it for other models.<|||||>My understanding was that this is experimental and as we start using this side of library we will generalize and improve things. Hence the more slack approach. Same for tests, I thought it was good to start with unique tests because the workarounds are unique and then over time as more models are ported to come up with common tests. @michaelbenayoun, one way to approach this puzzle is to create common tests for what's the same in all of them, and if something is unique to a given model then have just that tested in that model's test file. If you need help with that, please don't hesitate to ask. <|||||>Even for experimental features like model parallelism, we are using common tests. This should not be different IMO.<|||||>@sgugger, Michael merged the custom tests into common_tests and significantly simplified the mods to the models - yay! So it looks ready for your review whenever you have a chance. Thank you!<|||||>Sorry for jumping in. Out of curiosity, what is the scenario to use this symbolic tracing feature? Didn't find any example/doc... Thanks.<|||||>Well, I initially wanted this in order to be able to try https://github.com/flexflow/FlexFlow, which requires symbolic tracing - but I haven't had a chance to do so yet.<|||||>Got it, thanks for the explanation.<|||||>> Sorry for jumping in. > Out of curiosity, what is the scenario to use this symbolic tracing feature? Didn't find any example/doc... > Thanks. This would be also be helpful to quantize models using [ FX Graph Mode Quantization](https://pytorch.org/docs/stable/quantization.html?highlight=quantization) which automate the quantization process in Pytorch. <|||||>Are these updates still functional currently? As no modeling_fx_utils.py can be seen in the source code directory.
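For readers wondering what symbolic tracing buys you: this is plain `torch.fx` on a toy module. The transformers-specific wrapper discussed in this PR has its own entry point (which later moved, as the last comment notes), so its import path is not shown here.

```python
import torch
from torch import fx, nn

class TinyBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

traced = fx.symbolic_trace(TinyBlock())  # returns a GraphModule with an editable graph
print(traced.graph)                      # placeholder / call_module / call_function nodes
print(traced.code)                       # Python regenerated from the traced graph
```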
transformers
11,474
closed
Cannot import mbart and mT5 modeling files
@patrickvonplaten ImportError: cannot import name 'modeling_mbart'
04-27-2021 14:52:31
04-27-2021 14:52:31
Could you please post more details and a code snippet? To import a modeling module, use:

```python
from transformers.models.mbart import modeling_mbart
```
transformers
11,473
closed
Can not import modeling_mbart
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
04-27-2021 14:49:28
04-27-2021 14:49:28
Please follow the issue template.
transformers
11,472
closed
Update min versions in README and add Flax
# What does this PR do? This PR adapts the minimum versions of each backend (PyTorch was still at 1.0 and TensorFlow at 2.0), removes mention of TensorFlow 2.0 to just say TensorFlow (I think it's safe now!) and adds Jax as an official backend since we have worked the API a bit more. Fixes #11422
04-27-2021 14:01:49
04-27-2021 14:01:49
transformers
11,471
closed
Pytorch - Lazy initialization of models
🚨🚨🚨 **Breaking seeded model initialization** 🚨🚨🚨 As explained below this PR breaks seeded model initialization by default. To ensure the exact same model initialization as before, use: ```python torch.manual_seed(seed) model = BertForSequenceClassification.from_pretrained("bert-base-cased", _fast_init=False) ``` # What does this PR do? This PR implements fast initializing by only initializing weights that need to be initialized. For every model two aggressive tests are added to make sure that the new "fast" initialization initializes the weights according to the exact same distributions as the previous init scheme. IMO, it is not possible to ensure that: ```python from transformers import BertForSequenceClassification import torch torch.manual_seed(0) model = BertForSequenceClassification.from_pretrained("bert-base-cased", _fast_init=False) # this randomely inits the lm_head layer ``` yields the same results as the new "fast" init ```python torch.manual_seed(0) model = BertForSequenceClassification.from_pretrained("bert-base-cased") # this randomely inits the lm_head layer ``` since in the first case all layers are initialized so that the random number generator is called much more often, thus making it impossible to ensure identical weights initialization => compare to [this](https://discuss.pytorch.org/t/does-pytorch-change-its-internal-seed-during-training/46505/4) post to better understand why this is probably not possible. This became obvious in this PR since I had to change the random_seed for running the `run_ner.py` examples test to make it pass. I guess it is therefore better to stick to "initializing all weights" for now and only when changing to version 5.0 making this breaking change. => Guess we should discuss this @sgugger @LysandreJik @stas00 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Fixes: #9205
04-27-2021 13:22:46
04-27-2021 13:22:46
Not to impede this PR, just wanted to add that this comment might be of importance for future work in this area: https://github.com/pytorch/pytorch/issues/29523#issuecomment-831390099 And the whole issue: https://github.com/pytorch/pytorch/issues/29523 I wonder if our side needs/should support that `reset_parameters` feature in our models. Given the last comment https://github.com/pytorch/pytorch/issues/29523#issuecomment-831435863 it's unclear where it's standing. So perhaps this is something to revisit later when the dust settles on the pytorch side.<|||||>FYI, pytorch has just added `torch.nn.utils.skip_init()` to handle similar situations: https://pytorch.org/tutorials/prototype/skip_param_init.html it should appear probably around pt-1.9.1. Note that `torch.nn.utils.skip_init()` is even more efficient as it doesn't allocate any storage at all! So there is not even an overhead of creating any weights until they are loaded from state_dict. https://pytorch.org/tutorials/prototype/skip_param_init.html#implementation-details <|||||>Hey all - this introduced a pretty nasty bug for us, that took a while to figure out. Here's a case where this initialization of the `missing_keys`, which normally shouldn't matter, broke our model after version 4.6. We have a custom model that optionally initializes some of its weights from a separate module. Calling `from_pretrained` and passing this separate module used to "work", but now those weights are overwritten. See MWE: ``` from torch.nn import Linear from transformers import BertModel class MyCustomModel(BertModel): def __init__(self, config, custom_layer=None): super().__init__(config) if custom_layer is not None: self.custom_layer = custom_layer else: self.custom_layer = Linear(1024, 1024) if __name__ == "__main__": import transformers print(transformers.__version__) layer = Linear(1024, 1024) print(layer.weight.sum()) custom_model = MyCustomModel.from_pretrained('bert-base-uncased', custom_layer=layer) # used to be the same as the layer above, but it is "re-initialized" in the from_pretrained method print(custom_model.custom_layer.weight.sum()) ``` Result: ``` 4.11.3 tensor(5.9874, grad_fn=<SumBackward0>) Some weights of the model checkpoint at bert-base-uncased were not used when initializing MyCustomModel: ['cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing MyCustomModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing MyCustomModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of MyCustomModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.custom_layer.bias', 'bert.custom_layer.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. tensor(17.4651, grad_fn=<SumBackward0>) ``` This used to "just work" in version < 4.6. Perhaps we were relying on an unintended "feature". 
Setting `_fast_init=False` *does* fix things, but it's a bit hacky as it only applies to the initialization of this custom module that is called upstream in our service. Additionally, we don't know what happens if we'll need to rely on this feature in the future, but it goes away. Can you comment on this? Thanks!<|||||>Hey @john-heyer, Thanks for the feedback. We indeed didn't take account the effect this would have on custom models that inherit from `transformers` models like BERT. Just to understand better, the problem was that before 4.6, the `custom_layer` was not (re-)initialized when calling `MyCustomModel.from_pretrained(...)` - however after 4.6 it was initialized twice once before calling `from_pretrained(...)` and once after calling it? <|||||>@john-heyer - I answered in-detail here: https://github.com/huggingface/transformers/issues/17370<|||||>thanks @patrickvonplaten - yes, that is correct - before 4.6 `custom_layer` was not re-initialized, and now it is! Thanks for opening the other thread. I'll follow there.<|||||>Can this be used to load pretrained params directly to the GPU models without keeping the full copy of params on CPU? E.g. https://github.com/huggingface/diffusers has `unet = UNetModel.from_pretrained("fusing/ddpm-lsun-church").to(torch_device)` while theoretically one could have unet = UNetModel.from_pretrained("fusing/ddpm-lsun-church"m device = torch_device)`<|||||>Hey @vadimkantorov, could you maybe open an issue under `diffusers` instead? :-)<|||||>Is transfomers following a different design? I assumed diffusers just copied the original design from transformers
transformers
11,470
closed
[FlaxRoberta] Add FlaxRobertaModels & adapt run_mlm_flax.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds all FlaxRobertaModels and adapts `run_mlm_flax.py` to be trainable with FlaxRoberta as well. This [roberta-base](https://huggingface.co/patrickvonplaten/norwegian-roberta-base) was pretrained as an example. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-27-2021 12:00:00
04-27-2021 12:00:00
transformers
11,469
closed
Train GPT2 with Trainer & TrainingArguments using/specifying attention_mask
Hi, I'm using Trainer & TrainingArguments to train a GPT-2 model, but it seems that this does not work well.

My dataset has the token ids of my corpus and the mask of each text, to indicate where to apply the attention:

```
Dataset({
    features: ['attention_mask', 'input_ids', 'labels'],
    num_rows: 2012860
})
```

I am doing the training with Trainer & TrainingArguments, passing my model and the dataset above as follows, but nowhere do I specify anything about the attention_mask:

```
training_args = TrainingArguments(
    output_dir=path_save_checkpoints,
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    logging_steps=5_000,
    save_steps=5_000,
    fp16=True,
    deepspeed="ds_config.json",
    remove_unused_columns=True,
    debug=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
    tokenizer=tokenizer,
)

trainer.train()
```

How should I tell the Trainer to use this feature (attention_mask)? If you take a look at the file /transformers/trainer.py, there is no reference to "attention" or "mask".

Thanks in advance!
04-27-2021 11:42:08
04-27-2021 11:42:08
Hi there, If your dataset or collator returns the `attention_mask`, then you don't need to pass it separately. With `Trainer`, all the input that you want to pass to model's `forward` should be returned by the dataset/collator and it will be passed to `model.forward` by `Trainer`.<|||||>Thank you very much for your quick reply @patil-suraj Is it possible that the problem is that more than 50% of the input is padding? Could this be too much? Do you think that training more would solve it? It currently returns a minimal loss. Having such a low loss is what made me think that I was not considering the `attention_mask` and that is why the loss was low (obviously it is easy to predict 600 padding tokens xD) What I am doing now is changing the size of the input, instead of 1024 (default value) I am testing with 400 (size of the longest text in my dataset). Best regards!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
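In other words, the `attention_mask` only has to be present in the batches the dataset/collator yields; `Trainer` forwards every column it receives to `model.forward`. A minimal sketch of producing such batches for GPT-2 (the pad-token line is the usual workaround, not something specific to this thread):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

batch = tokenizer(
    ["a short text", "a much longer text that will dominate the padding"],
    padding=True,
    truncation=True,
    max_length=400,
    return_tensors="pt",
)
print(batch.keys())  # dict_keys(['input_ids', 'attention_mask'])

# A dataset whose examples carry these keys (plus `labels`) is all Trainer needs;
# no extra Trainer argument is required for the mask to be used.
```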
transformers
11,468
closed
binary classification does not work with a large amount of data
I'm trying to use binary classification, and I used a code like Minimal Start code in the simpletransformers.ai. I can get f1, tp> 0 with a small sample of my data (around 200K rows), but surprisingly, when I'm trying to apply the model to a whole dataset (2.6M rows) or less (500K rows), the evaluation of the model is not working very well. it returns mcc=0, tp=0, f1=0 . This is the case if the model works properly with fewer data and can predict correctly. my code is here: ``` from simpletransformers.classification import ClassificationModel, ClassificationArgs import pandas as pd import logging from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score as f1 import torch logging.basicConfig(level=logging.INFO) transformers_logger = logging.getLogger("transformers") transformers_logger.setLevel(logging.WARNING) dataset = pd.read_csv(r"C:\Users\**.csv", encoding="utf-8")#, header=None) dataset['labels'] = (def_dataset['labels'].astype(int)) train, test = train_test_split(dataset, train_size=0.8) model_args = ClassificationArgs(num_train_epochs=1, train_batch_size=1, save_eval_checkpoints=False, save_steps=2000000, overwrite_output_dir=True, output_dir=r'C:\Users\***\test\output', save_model_every_epoch=True, ) cuda_available = torch.cuda.is_available() # Create a ClassificationModel model = ClassificationModel( "bert", "HooshvareLab/bert-fa-base-uncased", args=model_args, use_cuda=cuda_available ) # Train the model model.train_model(train) # Evaluate the model result, model_outputs, wrong_predictions = model.eval_model(test, f1=f1) ``` These are the results obtained with a semi-large amount of data(>=500K): ``` {'mcc': 0.0, 'tp': 0, 'tn': 77052, 'fp': 0, 'fn': 22948, 'auroc': 0.5, 'auprc': 0.22948, 'f1': 0.0, 'eval_loss': 1.0871533093261718} ``` and this is what I get with fewer data(200K): ``` {'mcc': 0.6321070718937202, 'tp': 6193, 'tn': 28748, 'fp': 1925, 'fn': 3134, 'auroc': 0.9218176063608271, 'auprc': 0.7718176609516296, 'f1': 0.7100028661507596, 'eval_loss': 0.31948030271530153} ``` The only difference between these two results is the size of the dataset. I'm using windows 10 and Nvidia Quadro RTX 5000 I also tried on google colab, But the problem persisted. How can I solve this problem?
04-27-2021 09:57:11
04-27-2021 09:57:11
It won't be possible for us to answer issues with another library. Also please use the forum to such [questions](https://discuss.huggingface.co/). Thank you!<|||||>Thank you for your answer. I asked this question here because the simple transformers are working on the transformers library. It is just an interface of it, so I thought my question relevant to this repo!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,467
closed
Finish Making Quick Tour respect the model object
This PR makes the following changes:

1. As a follow up to #11462, finish correcting places where the tuple is mentioned instead of the model object.
2. You must import `AutoModel` as well as `TFAutoModel` for the tutorial to run correctly.
3. Cleaned up some language for readability.

@sgugger
04-27-2021 05:37:36
04-27-2021 05:37:36
transformers
11,466
closed
fix docs for decoder_input_ids
# What does this PR do? Few doc fixes for `decoder_input_ids` in s2s models. Fixes #11357 Thanks, @shyrma for spotting this!
04-27-2021 05:24:50
04-27-2021 05:24:50
Thanks a lot for catching this Patrick! I corrected this for BART and mBART.
transformers
11,465
open
[resume optimization] skip loading pretrained weights on resume
This is similar to what was discussed in https://github.com/huggingface/transformers/issues/9205, which proposed not to randomly init weights on `from_pretrained`, but this time it's about resume: currently we load the pretrained weights and immediately drop them on resume from checkpoint in Trainer.

To solve this we could, for example, change the examples:

1. figure out the checkpoint immediately after we init `TrainingArguments` and just before the model is created;
2. then change the `from_pretrained()` API to keep everything as is, except the loading of weights from `state_dict`, if, say, `skip_weights_load=True` is passed.

So the code becomes:

```
if training_args.do_train:
    if last_checkpoint is not None:
        checkpoint = last_checkpoint
    elif os.path.isdir(model_args.model_name_or_path):
        checkpoint = model_args.model_name_or_path
    else:
        checkpoint = None

model = AutoModelForSeq2SeqLM.from_pretrained(
    model_args.model_name_or_path,
    [...],
    skip_weights_load=checkpoint is not None,
)

if training_args.do_train:
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
```

Any flaws in my thinking?

@patrickvonplaten, @sgugger
04-27-2021 01:57:38
04-27-2021 01:57:38
This can be achieved by just doing `model = AutoModelForSeq2SeqLM.from_config(config)` when the checkpoint is not None. I don't believe it will be much faster however, as your analysis in #9205 pointed to the random initialization as the bottleneck.<|||||>> This can be achieved by just doing model = AutoModelForSeq2SeqLM.from_config(config) when the checkpoint is not None.

From here, right? https://github.com/huggingface/transformers/blob/88ac60f7b5f6d4b62245dc21653ea3d5db7d4935/src/transformers/models/auto/auto_factory.py#L362

Great idea! Then this important part would be missed:

```
with deepspeed.zero.Init():
    model = cls(config, *model_args, **model_kwargs)
```

I guess I need to add it to `from_config` anyway, which would solve this part. Also, this won't be done:

```
model.eval()
```

but the latter is probably redundant anyway.

> I don't believe it will be much faster however, as your analysis in #9205 pointed to the random initialization as the bottleneck.

For huge models every saving counts! Once you start working with models like t5-11b it's excruciatingly slow to wait for things to start.

Should I try one example and re-shuffle the order of the code?<|||||>Yes, we should try on one example first! Though the first step is to fix the `from_config` method of `AutoModel` :-)
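Putting the two comments above together, a sketch of the reshuffled example code. Variable names reuse the snippet from the issue body, and note that `from_config` still pays the random-init cost discussed in #9205; it only skips downloading/loading the pretrained `state_dict`.

```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(model_args.model_name_or_path)

if checkpoint is not None:
    # the checkpoint weights overwrite everything anyway, so skip the pretrained load
    model = AutoModelForSeq2SeqLM.from_config(config)
else:
    model = AutoModelForSeq2SeqLM.from_pretrained(model_args.model_name_or_path, config=config)

# later, as in the issue body:
# trainer.train(resume_from_checkpoint=checkpoint)
```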
transformers
11,464
closed
[DeepSpeed] ZeRO-Infinity integration: getting started and issues
[DeepSpeed ZeRO-Infinity](https://arxiv.org/abs/2104.07857) HF Integration is now available in the master branch of `transformers`. Here is a quick getting started/what's new post. ZeRO-Infinity extends ZeRO-3 by extending CPU Offload with NVMe Offload, enabling training even bigger models. And it adds various other optimizations and improvements. ## Getting started Install the latest `deepspeed` version: ``` pip install git+https://github.com/microsoft/DeepSpeed ``` You will want to be on a transformers master branch, if you want to run a quick test: ``` git clone https://github.com/huggingface/transformers cd transformers BS=4; PYTHONPATH=src USE_TF=0 deepspeed examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small --output_dir /tmp/zero3 --overwrite_output_dir --max_train_samples 64 \ --max_eval_samples 64 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 \ --do_train --num_train_epochs 1 --per_device_train_batch_size $BS --per_device_eval_batch_size $BS \ --learning_rate 3e-3 --warmup_steps 500 --predict_with_generate --logging_steps 0 --save_steps 0 \ --eval_steps 1 --group_by_length --dataset_name wmt16 --dataset_config ro-en --source_lang en \ --target_lang ro --source_prefix "translate English to Romanian: " \ --deepspeed tests/deepspeed/ds_config_zero3.json ``` You will find a very detailed documentation here: https://huggingface.co/transformers/master/main_classes/trainer.html#deepspeed Your new config file will look like this (for ZeRO-3 as an example): ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e14, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_fp16_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` If you want to experiment with NVMe offload, please see: https://huggingface.co/transformers/master/main_classes/trainer.html#nvme-support ## Deepspeed currently runs only fp16-mixed precision While deepspeed devs [are working on the fp32 mode](https://github.com/microsoft/DeepSpeed/pull/1004), at this moment only fp16-amp-like train/eval is available. So if your model struggles under fp16/amp it will have the same struggles under deepspeed. Moreover, because deepspeed does `model.half()` forcing all weights to fp16, some models might be ready for this (under AMP things are switched dynamically to fp16 where needed). If you run into this please post a new issue and we will try to find a solution/workaround for those special cases. 
## must use the latest `transformers` master If you get deepspeed errors like it doesn't know what `auto` value is, you aren't on latest `transformers` master branch, `git pull` if you already have a clone and if you installed it already update your install. ## For those who already use DeepSpeed HF integration As the integration part is evolving it has gone through a major revamp and various improvements. There are 2 important changes that you need to be aware of if you're already using DeepSpeed integration in `transformers`: 1. After this release only config params that are set to `auto` will get automatically overriden/set to the correct/recommended values, everything else is left as is. This is to avoid the previously confusing behavior of never being quite sure what gets overridden and what not despite the logger telling what it did override. The new behavior is completely unambiguous. See examples * [zero2](https://github.com/huggingface/transformers/blob/0f221d2cce751182c455295ef2c03a2c1bd3d66b/tests/deepspeed/ds_config_zero2.json) * [zero3](https://github.com/huggingface/transformers/blob/0f221d2cce751182c455295ef2c03a2c1bd3d66b/tests/deepspeed/ds_config_zero3.json) Full doc: https://huggingface.co/transformers/master/main_classes/trainer.html#shared-configuration 2. If you are using massive models and aren't using example scripts, make sure to read: Full doc: https://huggingface.co/transformers/master/main_classes/trainer.html#constructing-massive-models Everything else should work as before or better. The docs were revamped a lot too - if you find anything unclear or lacking please let me know. If you encounter any problems please post an Issue and tag `@stas00` to it. Thank you!
04-27-2021 01:29:59
04-27-2021 01:29:59
Hi @stas00, is it normal for zero3 training to take a while to get started? I haven't put in any time to investigating yet, but I updated transformers and deepspeed to the latest masters just to see if I could get them working. My simple training script (derived from the summarization example) works fine with deepspeed and the default zero2 config, but when I run the same script with the default zero3 config, training begins but hangs with the progress bar at step 0. I let it run for about half an hour before I killed the process. The quick test zero3 in your post above seems to run fine, however. Is there some initial zero3 overhead I just need to be more patient with, or do I possibly have some deeper problem? <|||||>Something is wrong then, deepspeed takes a bit longer to start than normal as it pre-allocates some memory, and extra so the first time if it needs to compile some cuda extensions, but once started it should work at the normal speed. Hanging on zero3 could indicate that you're on multi-gpu and doing some code that blocks on trying to sync with other gpus. Anything involving forward calls must be performed on all gpus participating in the process. If one of them is skipped all other gpus will block waiting for that gpu. For example, if you're doing some code that performs `if trainer.is_world_process_zero()` it could block - depending on the code. For example, saving checkpoints has to happen on all processes and not just rank0. Could you please open a separate issue and help me to reproduce the problem and then we can look at it together. To help diagnose, you can add this anywhere to your code: ``` import faulthandler faulthandler.dump_traceback_later(20, repeat=True) ``` and it'll dump bt for all threads every 20 secs. So you will be able to see where it's hanging.<|||||>Hello! I was trying out the command pasted above, but replacing the zero_optimization part from tests/deepspeed/ds_config_zero3.json with the configuration from the NVMe offload example (see link above). The error I get is: ```AssertionError: num_elems 7563520> buffer 7563328```. I got this error before as well with the Megatron example from Deepspeed, but was able to solve it by increasing the aio block_size, however this time it did not work out. I should add that I used a SSD disk, in case that's important. <|||||>Thank you for trying this new feature. This looks like a potential bug in Deepspeed. I asked @tjruwase to have a look. May be it's worthwhile to file an Issue at https://github.com/microsoft/DeepSpeed/issues if you have a few minutes? As this is definitely not an integration issue. If you do please paste the full config you were using. thank you, @thies1006 <|||||>@thies1006, thanks for reporting this issue. As @stas00 suggested, could please report this as a deepspeed issue? It would be great if you included the exact ds_config.json in the issue report. Thanks so much!<|||||>Just now there appeared this [issue](https://github.com/microsoft/DeepSpeed/issues/1033) which I guess is exactly the same case. Sorry for not posting the exact config right away. Thank you very much! Edit: Lowering "sub_group_size" from 1e14 to 1e3 solved the issue (however another one comes up, filed another issue at Deepspeed). <|||||>@thies1006, there is now a [PR ](https://github.com/microsoft/DeepSpeed/pull/1036) for the assert: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@stas00 I am not sure if this is the right forum to ask. Feel free to direct me to somewhere else Is there a standard way of cloning a partitioned parameter? The examples I have seen are usually using gather to reconstructing it into a pytorch parameter and then cloning it. <|||||>indeed, but you have to do it before you called `deepspeed.initialize` - if you do after it - Deepspeed won't know about those new parameters and all kinds of undefined behaviors/breakages will occur. You can still add/remove params after `zero.Init` context was run (if it's used), but the model needs to be complete wrt all params being in place before it's passed to `deepspeed.initialize` <|||||>@stas00 Thank you for your prompt response. so before `deepspeed.initialize` would this be a correct way of cloning a ds_module? ``` import deepspeed # ds_module is already partitioned with deepspeed.zero.GatheredParameters(list(ds_module.parameters())): new_module = copy.deepcopy(ds_module) # at this point new_module is pytorch paramter # to convert to ds module new_module = deepspeed.zero.Init(new_module) ```<|||||>I don't think this example can work, since deepspeed installs special attributes into the tensor which would be copied and point to the wrong place. You'd have to create a normal torch param and copy the data from another param, bu perhaps you can simply ask deepspeed for adding a new util that will do the right thing for you. But let's stop this discussion here as this is offtopic to this thread and not really related to `transformers` - I propose for you to start a new issue at https://github.com/microsoft/DeepSpeed and discuss it there, where the Deepspeed team will be able to answer your needs better.
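Building on the caveat above about `deepcopy` carrying DeepSpeed's tensor attributes along, here is a rough sketch of a safer clone performed before `deepspeed.initialize` (the helper and `module_factory` are hypothetical, not a DeepSpeed API; ask the DeepSpeed team for an official utility):

```python
import torch
import deepspeed


def clone_partitioned_module(ds_module, module_factory):
    # `module_factory` must build a fresh, plain PyTorch module with the same
    # parameter shapes (a caller-supplied callable; illustrative only).
    new_module = module_factory()
    # Temporarily gather the partitioned weights so their values can be read.
    with deepspeed.zero.GatheredParameters(list(ds_module.parameters())):
        with torch.no_grad():
            for src, dst in zip(ds_module.parameters(), new_module.parameters()):
                dst.copy_(src.data)  # copy values only, no DeepSpeed-specific attributes
    return new_module
```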
transformers
11,463
closed
[model loading] don't init weights for pretrained models
Skip `_init_weights` for pretrained models since they get immediately replaced by pretrained weights. This leads to a much faster startup for huge models. Fixes: https://github.com/huggingface/transformers/issues/9205 @sgugger, @patrickvonplaten
04-27-2021 01:01:27
04-27-2021 01:01:27
> This won't work sadly, for two reasons. > > 1. First, everything in the `__dict__` of the config gets serialized when one uses `config.self_pretrained()` (which is called by `model.from_pretrained`) so any other model downloaded from the hub with a checkpoint saved after this is merged will get this attribute in the config. Then if a user instantiates a randomly-initialized model using the config, with the following code: > > > ```python > config = AutoConfig.from_pretrained("new_checkpoint_after_this_is_merge") > model = AutoModel.from_config(config) > ``` > > then the model won't be randomly initalized (at least not with `_init_weights`) since the config will have this `use_pretrained_weights`. So if I find another way to do it that doesn't taint the config then it's OK, right? (as far as config correctness goes) e.g. what if I unset this config value as soon as `model = cls()` is done? So this is sort of a "context" operation then. > 2. Then come the problem that pretrained model instantiated with `from_pretrained` does not necessarily have all weights initialized (if you discard the head to put another task-specific head) and this PR will break the way those weights are randomly initialized. > > > I sadly don't see a way around passing around a list of not-initialized weights from pretrained to the `_init_weights` function I appreciate that you could think of the edge cases. Clearly, we don't have any tests that somehow verify that the init is done correctly. I was hoping that there would be some, but these would be hard to conjure. If you feel this is a worthwhile effort, perhaps let's start coming up with examples, write tests if possible and solve those? You can throw the edge-cases at me and I will try to overcome those. Or alternatively, we provide a very easy way for users to either force the init, or if it's safer to force no-init? e.g. the staple examples could all enforce no-init and explain how to change that if the user wants to modify the example to have the original behavior? So what I'm suggesting is that instead of `from_pretrained` automatically forcing no init as I proposed in this PR, we instead have a way for a user to choose whether they want init_weights or not explicitly?<|||||>> I appreciate that you could think of the edge cases. That is not the edge case but the overwhelming majority ;-) You are mostly working with seq2seq models that don't throw away any weights when doing transfer learning, but all the basic examples fine-tuning BERT on a classification task encounter this :-) Testing the init is done properly is very difficult as those are all random weights. Testing those weights follow this distribution instead of that one is not something easily achievable. I don't think the `no_init` option is the right one: it will only work for a certain class of problems and not others, so it's not general enough. We shouldn't go for it just before it's easier to implement than the other solutions on the table.<|||||>@stas00 @sgugger that's how I would approach the problem: https://github.com/huggingface/transformers/pull/11471<|||||>OK, let's move the effort to Patrick's PR https://github.com/huggingface/transformers/pull/11471 > [...] You are mostly working with seq2seq models [...] Guilty as charged. I'm glad you guys have a much wider view than I. Thank you!
transformers
11,462
closed
update QuickTour docs to reflect model output object
Currently, the [Quick tour](https://huggingface.co/transformers/quicktour.html#) docs show model output as tuples when you print them out. In the current version of 🤗, the user sees an object that inherits from the `ModelOutput` class. Yes, you can still access this object as a tuple, but this might be confusing for many readers, especially since this is the very first document that many people see when using 🤗. This PR does the following things: 1. Changes code examples in the Quick Tour to show the output object, not the tuple. 2. Minor modification in the `Model Output` doc, as _both_ PyTorch and TensorFlow models return an object that is an instance of a subclass of `ModelOutput`. @sgugger P.S. I am planning to go through all of the documentation very carefully like this; please let me know if there is anything along these lines that needs more attention.
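A minimal sketch of the behavior the PR documents (the checkpoint name is just an example): the forward pass returns a `ModelOutput` subclass whose fields can be read by attribute or, for backward compatibility, by index.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("We are very happy to show you the 🤗 Transformers library.", return_tensors="pt")
outputs = model(**inputs)

print(outputs)         # a SequenceClassifierOutput object, not a plain tuple
print(outputs.logits)  # attribute access on the ModelOutput subclass
print(outputs[0])      # the same tensor via tuple-style indexing
```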
04-27-2021 00:27:00
04-27-2021 00:27:00
I might need some help or tips with figuring out why the styling check CI is failing. I tried to debug, but it is not clear to me what is wrong.<|||||>Thanks @sgugger, that worked<|||||>Thank *you* for the fixes :-)
transformers
11,461
closed
T5-large FP16 produces nan in loss
## Environment info - `transformers` version: 4.6.0.dev0, commit hash: 5e04d7086803ae4a3892f4082f2835a756592c2c - Platform: Linux-4.15.0-1071-azure-x86_64-with-debian-buster-sid - Python version: 3.7.3 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help t5: @patrickvonplaten, @patil-suraj ## Information Model I am using (Bert, XLNet ...): t5-large The problem arises when using: * [ ] the official example scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) ## To reproduce Steps to reproduce the behavior: cd examples/seq2seq CUDA_VISIBLE_DEVICES=0 PYTHONPATH=../../src USE_TF=0 ./run_translation.py \ --model_name_or_path t5-large \ --do_train --source_lang en --target_lang ro \ --source_prefix "translate English to Romanian: " \ --dataset_name wmt16 --dataset_config "ro-en" \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size 4 \ --overwrite_output_dir \ --predict_with_generate \ --num_train_epochs 1 --fp16 ## Expected behavior FP16 mode shouldn't produce nan in loss.
04-27-2021 00:14:01
04-27-2021 00:14:01
I see NaNs creeping in at the T5Attention in the decoder. I didn't find any inf or nan in either hidden_states or key_value_states, but the computed values of both key_states and value_states have NaNs.<|||||>> FP16 mode shouldn't produce nan in loss. Why do you believe this to be the case? This model was trained in bf16, which has a totally different numerical range from fp16. So it shouldn't produce NaNs under bf16 or fp32, but under fp16 it's almost guaranteed to not work. Please see: https://discuss.huggingface.co/t/mixed-precision-for-bfloat16-pretrained-models/5315 That said, please try this branch https://github.com/huggingface/transformers/pull/10956 which tries to use a workaround for AMP. Some users reported success. One user reported problems. And you can also try the new over/underflow detector: https://github.com/huggingface/transformers/pull/11274 if you want to get more precise info on where the problem emerges first. Just add `--debug activation_overflow` to the trainer command line and it will bail with the traces of the last frames as soon as nan or inf is encountered. I am reworking this tool to provide more info, and need to revamp the interface, but it's mostly done. <|||||>Thank you for the pointers to the discussion. Is it just finetuning, or do you expect inference to be unstable as well in fp16 mode? debug_activation_overflow looks like a great tool that can be useful in identifying the source of NaNs. I'll give #10956 a try and see if it helps with my runs.<|||||>> Is it just finetuning or do you expect inference to be unstable as well in fp16 mode? There are fewer moving parts during inference, but more or less expect the same problems. So the workaround is to identify where under/overflow happens and force the model to perform those ops in fp32 and then convert back to fp16. In fact, with finetuning, if you don't have the problem happening right away like it does with mt5, you could try to steer the model into the fp16 range by punishing large activations. Please see the proposed `loss` calculation extra: https://github.com/huggingface/transformers/pull/10956#issuecomment-820712267 (it in fact comes from the original t5 implementation but for some reason wasn't implemented in that ported model in `transformers`). <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
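The "punish large activations" idea referenced above can be sketched as a small auxiliary penalty on the log-partition term of the logits (a rough illustration of the approach discussed in the linked PR comment; the function name and coefficient are made up here and are not a transformers API):

```python
import torch


def loss_with_z_penalty(lm_logits, labels, z_coef=1e-4):
    # Standard cross-entropy over the vocabulary...
    ce = torch.nn.functional.cross_entropy(
        lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1), ignore_index=-100
    )
    # ...plus a penalty on log(Z) that discourages the logits from drifting
    # outside the fp16 range during fine-tuning.
    log_z = torch.logsumexp(lm_logits, dim=-1)
    return ce + z_coef * log_z.pow(2).mean()
```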
transformers
11,460
closed
support batch-sampler in trainer
# 🚀 Feature request Hi, currently the Trainer class only supports a sampler, not a batch sampler (the batch-based sampling strategies, which are a separate group of samplers in torch). If a user wants to use a batch sampler, the Trainer currently introduces unwanted bugs by not setting the epoch for this type of sampler. Even a careful user still needs to overwrite the whole ```train``` function, which is a big chunk of code. Could you make this line a function, so the user can easily overwrite it for the batch-sampler case? This part of the Trainer also introduces bugs, especially if a user uses user-defined samplers and is not careful to set the epoch for them. ``` if isinstance(train_dataloader.sampler, DistributedSampler): train_dataloader.sampler.set_epoch(epoch) ``` so the user might need to set it to: ``` train_dataloader.batch_sampler.set_epoch(epoch) ``` thanks ## Motivation Supporting all types of samplers in Trainer, or making set_epoch a function so the user can overwrite it nicely. @sgugger
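To illustrate what such a batch sampler could look like, here is a minimal sketch (the class name is made up for illustration; it is not part of torch or transformers) exposing the `set_epoch` hook the request asks the Trainer to call:

```python
from torch.utils.data import BatchSampler


class EpochAwareBatchSampler(BatchSampler):
    """Minimal sketch of a batch sampler with a `set_epoch` hook (illustrative only)."""

    def __init__(self, sampler, batch_size, drop_last=False):
        super().__init__(sampler, batch_size, drop_last)
        self.epoch = 0

    def set_epoch(self, epoch):
        # Record the epoch and forward it to the wrapped sampler so that
        # shuffling stays in sync across distributed processes.
        self.epoch = epoch
        if hasattr(self.sampler, "set_epoch"):
            self.sampler.set_epoch(epoch)
```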
04-26-2021 21:18:30
04-26-2021 21:18:30
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,459
closed
extending metric_for_best_model to a list of strings
# 🚀 Feature request Hi, currently ` metric_for_best_model(:obj:`str`, `optional`) ` only covers one metric. For some datasets like MRPC there are several metrics, like accuracy/F1, or in STS-B both Pearson/Spearman, and the user might need to choose the best model based on the average of all metrics; it would be helpful to have this option. This is related to the Trainer. @sgugger thanks.
04-26-2021 20:49:26
04-26-2021 20:49:26
This could be added indeed. Note that there is already a workaround: when defining your `compute_metrics` function, you can add a new field with this average: ``` def compute_metrics(eval_preds): # Your previous metric computation metrics["combined"] = (metrics["accuracy"] + metrics["f1"]) / 2 return metrics ``` and then you can pass `--metric_for_best_model combined`. This approach is also more flexible as you can completely define the way the combination is done (so you can pick weights for your mean or do something else than the mean).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,458
closed
"Is next sentence" pre-training task availability for Language Modeling scripts
# 🚀 Feature request BERT was trained on 2 pretraining tasks. The scripts [here ](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling) in the repo for language modeling only cover masked language modeling. Is there any specific reason for this? I was hoping those scripts could be extended to include the "next sentence prediction" pretraining task, to remain faithful to the pretraining methodology used by BERT if I choose to further pretrain on some corpus. ## Motivation BERT uses 2 pre-training tasks and the scripts provide only one of them. ## Your contribution I am not sure if I can extend the scripts myself. I will gladly look into them if I know there aren't any good reasons for them not being provided by HF in the first place. Thanks a lot!
04-26-2021 20:23:36
04-26-2021 20:23:36
Related issues: https://github.com/huggingface/transformers/issues/1622, https://github.com/huggingface/transformers/issues/2898 and https://github.com/huggingface/transformers/issues/2166 However, example scripts are made to be very understandable and very easy to tweak, so modifying them to include the next sentence prediction objective for BERT shouldn't be complicated! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Same issue! Has anybody done it before? I have been trying with the Hugging Face library for more than a month now and am still running into issues.
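For anyone looking to combine both objectives, here is a minimal sketch of a forward pass with `BertForPreTraining`, which computes the MLM and NSP losses jointly (the sentence pair and labels are purely illustrative; a real script would mask a subset of tokens and set the unmasked positions of the MLM labels to -100):

```python
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForPreTraining.from_pretrained("bert-base-multilingual-cased")

# Sentence pair for the NSP objective: label 0 = "B follows A", 1 = "B is random".
encoding = tokenizer("The sky is blue.", "It often rains in spring.", return_tensors="pt")
mlm_labels = encoding["input_ids"].clone()  # toy MLM labels for illustration
next_sentence_label = torch.tensor([0])

outputs = model(**encoding, labels=mlm_labels, next_sentence_label=next_sentence_label)
print(outputs.loss)  # sum of the masked-LM loss and the NSP loss
```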
transformers
11,457
closed
Can this `@slow` annotation be removed from the barthez tokenizer test?
Here is a `@slow` annotation that can be removed IMO. Otherwise, maybe add a comment explaining why it is tagged as slow? https://github.com/huggingface/transformers/blob/bc2571e61c985ec82819cf01ad038342771c94d0/tests/test_tokenization_barthez.py#L27 I can provide a PR.
04-26-2021 19:47:48
04-26-2021 19:47:48
No, it is intended as slow. The Barthez fast tokenizer sadly takes forever to load from the slow one, which is why we marked those tests as slow. To be able to remove that, we must first create a small sentencepiece model that is compatible with Barthez (it is using the real tokenizer checkpoint right now), and then the tests can be marked as non-slow.<|||||>Ok thanks. Closing this then.<|||||>@sgugger When do you execute the slow tests? Do you do it manually before you do a release?<|||||>They run once every day!<|||||>Ahh ok thanks.
transformers
11,456
closed
Perturb Hidden-State in Encoder-Decoder Models
Hi All, I'm fairly new at using huggingface - so I apologize if this is answered in the documentation. I've looked around and don't think this has been asked before. I'm trying to perturb and use the hidden-state of an encoder-decoder model on the summarization task. More specifically, I'd like to 1. Get a fixed-length hidden state for a given input after passing it to an encoder model, 2. perturb it, and 3. pass it to the decoder model to get an output. 1 and 2 seem straightforward. To get the hidden state I use: ` hidden_states = model.base_model.encoder(inputs).last_hidden_state` and to get a fixed length embedding I'm getting the last element of this list as per this [discussion](https://github.com/huggingface/transformers/issues/1950). Perturbing this would be as simple as adding noise to a tensor. As for the third part, I'm having difficulty using this perturbed hidden-state in the decoder to generate an output. I've looked through the code and it seems a lot of the steps are abstracted out to accommodate many kinds of models. From what I understand, we can modify the model.generate() method so that the perturbation is added. However, this doesn't necessarily work for me since I wanted to use this hidden-state for other purposes before passing it to the decoder. The other approach would be to create a separate function that takes as input the hidden-state and uses the second part of the code from model.generate() to produce outputs. Before I proceed to implement this, I was wondering if there is a simpler way using existing code to do this.
04-26-2021 17:43:44
04-26-2021 17:43:44
Hi @vin-nag If you want to directly pass `hidden_states` then you could do it this way ```python model = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_gigaword") tok = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_gigaword") article = """australian shares closed down #.# percent monday following a weak lead from the united states and lower commodity prices , dealers said .""" enc = tok(article, return_tensors="pt") hidden_states = model.encoder(**enc, return_dict=True) # perturb the last_hidden_state hidden_states.last_hidden_state = perturb(hidden_states.last_hidden_state) gen_ids = model.generate(input_ids=None, encoder_outputs=hidden_states, attention_mask=enc["attention_mask"]) tok.batch_decode(gen_ids) ```<|||||>@patil-suraj Thank you so much!<|||||>Hi @patil-suraj, thanks for the script. How can I access the `loss` after perturbation?
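One way to also get a loss out of the same perturbed encoder outputs is a regular forward pass with labels; a sketch reusing the variable names from the snippet above (`target_text` is a hypothetical reference summary, not part of the original example):

```python
# Sketch: reuse the perturbed encoder outputs in a forward pass with labels.
target = tok(target_text, return_tensors="pt")

outputs = model(
    encoder_outputs=hidden_states,
    attention_mask=enc["attention_mask"],
    decoder_input_ids=target["input_ids"],
    labels=target["input_ids"],
)
print(outputs.loss)
```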
transformers
11,455
closed
Unable to use custom dataset: AttributeError: 'list' object has no attribute 'keys'
What am I doing wrong? I encode data with ``` model_name = "dbmdz/bert-base-italian-uncased" tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case = True) def encode_data(texts): return tokenizer.batch_encode_plus( texts, add_special_tokens=True, return_attention_mask=True, padding = True, truncation=True, max_length=200, return_tensors='pt' ) ``` Then I create my datasets with ``` import torch class my_Dataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = torch.tensor(labels) def __getitem__(self, idx): item = {key: val[idx] for key, val in self.encodings.items()} item['labels'] = self.labels[idx] print(item) return item def __len__(self): return len(self.labels) ``` So I have ``` encoded_data_train = encode_data(df_train['text'].tolist()) encoded_data_val = encode_data(df_val['text'].tolist()) encoded_data_test = encode_data(df_test['text'].tolist()) dataset_train = my_Dataset(encoded_data_train, df_train['labels'].tolist()) dataset_val = my_Dataset(encoded_data_val, df_val['labels'].tolist()) dataset_test = my_Dataset(encoded_data_test, df_test['labels'].tolist()) ``` Then I initiate my Trainer with ``` from transformers import AutoConfig, TrainingArguments, DataCollatorWithPadding, Trainer training_args = TrainingArguments( output_dir='/trial', learning_rate=1e-6, do_train=True, do_eval=True, evaluation_strategy='epoch', num_train_epochs=10, per_device_train_batch_size=8, per_device_eval_batch_size=8, warmup_steps=0, weight_decay=0.2, logging_dir="./logs", ) num_labels = len(label_dict) model = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels = num_labels) trainer = Trainer( model=model, args=training_args, data_collator=DataCollatorWithPadding(tokenizer), tokenizer= tokenizer, train_dataset=dataset_train, eval_dataset=dataset_val, ) ``` and finally I train ``` trainer.train() ``` Here is the error I get ``` AttributeErrorTraceback (most recent call last) <ipython-input-22-5d018b4b061d> in <module> ----> 1 trainer.train() /opt/conda/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 1032 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control) 1033 -> 1034 for step, inputs in enumerate(epoch_iterator): 1035 1036 # Skip past any already trained steps if resuming training /opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py in __next__(self) 433 if self._sampler_iter is None: 434 self._reset() --> 435 data = self._next_data() 436 self._num_yielded += 1 437 if self._dataset_kind == _DatasetKind.Iterable and \ /opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _next_data(self) 473 def _next_data(self): 474 index = self._next_index() # may raise StopIteration --> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 476 if self._pin_memory: 477 data = _utils.pin_memory.pin_memory(data) /opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 45 else: 46 data = self.dataset[possibly_batched_index] ---> 47 return self.collate_fn(data) /opt/conda/lib/python3.8/site-packages/transformers/data/data_collator.py in __call__(self, features) 116 117 def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: --> 118 batch = self.tokenizer.pad( 119 features, 120 padding=self.padding, /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py 
in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose) 2558 if self.model_input_names[0] not in encoded_inputs: 2559 raise ValueError( -> 2560 "You should supply an encoding or a list of encodings to this method" 2561 f"that includes {self.model_input_names[0]}, but you provided {list(encoded_inputs.keys())}" 2562 ) AttributeError: 'list' object has no attribute 'keys' ``` What I am doing wrong? I also tried using ``` import torch from torch.utils.data import TensorDataset dataset_train = TensorDataset(encoded_data_train['input_ids'], encoded_data_train['attention_mask'], torch.tensor(df_train['labels'].tolist())) dataset_test = TensorDataset(encoded_data_test['input_ids'], encoded_data_test['attention_mask'], torch.tensor(df_test['labels'].tolist())) dataset_val = TensorDataset(encoded_data_val['input_ids'], encoded_data_val['attention_mask'], torch.tensor(df_val['labels'].tolist())) ``` getting the same error. Using: torch == 1.7.1 transformers == 4.4.2 Thank you! @sgugger
04-26-2021 16:22:27
04-26-2021 16:22:27
This is really weird. Could you print a few items of your Dataset? The error means that they are not dictionaries containing `"input_ids"` but they certainly seem to be. Also note that since you already have applied padding in your preprocessing, you can use the `default_data_collator`, but the code should work nonetheless.<|||||>> Also note that since you already have applied padding in your preprocessing, you can use the `default_data_collator`, but the code should work nonetheless. Yeah, I did try commenting the line about the data_collator as well, but I got the same error. > This is really weird. Could you print a few items of your Dataset? The error means that they are not dictionaries containing `"input_ids"` but they certainly seem to be. For instance, `dataset_train.__getitem__(1)` gives me ``` {'input_ids': tensor([ 102, 2719, 10118, 19614, 784, 366, 119, 142, 17586, 113, 10885, 4019, 5129, 143, 10885, 119, 4019, 14633, 1354, 137, 917, 1621, 9048, 360, 151, 143, 784, 366, 113, 213, 7809, 985, 1941, 1702, 9580, 749, 12993, 135, 9272, 119, 1202, 1328, 2909, 7427, 2909, 483, 15079, 6766, 2201, 5754, 4213, 1266, 642, 119, 1968, 115, 7584, 7124, 2899, 9654, 151, 143, 3684, 137, 17586, 113, 3151, 113, 193, 4283, 165, 1035, 1354, 4913, 1621, 9048, 360, 137, 17586, 113, 119, 7809, 985, 1941, 1702, 1621, 9048, 360, 4913, 16829, 913, 272, 3694, 2909, 7427, 145, 1723, 20957, 15016, 213, 11171, 119, 7809, 642, 3761, 188, 164, 4706, 119, 3684, 8941, 119, 6330, 8076, 2199, 642, 23829, 22462, 30934, 4213, 1354, 2759, 311, 7809, 5434, 137, 1031, 510, 2603, 5569, 5434, 137, 1031, 510, 3732, 5569, 5434, 137, 1031, 510, 3627, 14715, 30951, 4543, 8823, 5066, 3625, 3627, 1701, 7900, 153, 5066, 3625, 3732, 7559, 127, 3732, 13703, 133, 176, 11576, 2909, 13703, 133, 1621, 9048, 360, 1723, 5230, 9580, 749, 12993, 114, 1031, 510, 387, 11993, 189, 22264, 8823, 143, 6766, 3462, 5622, 27082, 113, 7809, 3132, 1011, 189, 7825, 8823, 143, 6766, 111, 341, 7124, 2899, 18482, 103]), 'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'labels': tensor(5)} ``` Input texts are emails in italian. (the issue appears also with transformers 4.5.1)<|||||>I am unable to reproduce your bug. Are you sure your data frames don't contain a list of text in one of the line instead of just texts?<|||||>I found the mistake! 
I was doing something slightly different from what I wrote, namely ``` from transformers import AutoConfig, TrainingArguments, DataCollatorWithPadding, Trainer train_dataset=dataset_train, eval_dataset = dataset_val training_args = TrainingArguments( output_dir='/trial', learning_rate=1e-6, do_train=True, do_eval=True, evaluation_strategy='epoch', num_train_epochs=10, per_device_train_batch_size=8, per_device_eval_batch_size=8, warmup_steps=0, weight_decay=0.2, logging_dir="./logs", ) num_labels = len(label_dict) model = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels = num_labels) trainer = Trainer( model=model, args=training_args, data_collator=DataCollatorWithPadding(tokenizer), tokenizer= tokenizer, train_dataset=train_dataset, eval_dataset=eval_dataset, ) ``` The difference is in line 3 and 4, and consequently last two lines. The mistake is the comma at the end of line 3. My bad I did not run the example code I published in the question exactly as it was. I am so sorry, and so upset to have spent a week for a stupid comma. Thanks for the help<|||||>Oh that's a nasty little bug indeed! Glad you found the problem!
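For future readers, the root cause is plain Python syntax: a trailing comma turns an assignment into a one-element tuple, which is what the `Trainer` then received instead of the dataset. A tiny illustration (with a placeholder object standing in for the dataset):

```python
dataset_train = "my dataset"      # placeholder standing in for the Dataset object

train_dataset = dataset_train,    # note the trailing comma
print(type(train_dataset))        # <class 'tuple'> -- not the dataset itself

train_dataset = dataset_train     # without the comma, the dataset is passed as intended
print(type(train_dataset))        # <class 'str'> here; your Dataset class in practice
```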
transformers
11,454
closed
cannot import name 'set_seed' from 'transformers'
I run with transformers==4.5.1 and get the following error: ![image](https://user-images.githubusercontent.com/68938613/116115472-18cf3b80-a6c3-11eb-9a2d-f3c9ad1d5588.png) Do you know how to resolve this issue? Thanks
04-26-2021 16:11:09
04-26-2021 16:11:09
Try the following code: `from transformers.trainer_utils import set_seed` Let me know if it works!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
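For completeness, a short usage sketch of the suggested workaround (in recent versions `set_seed` is also exposed at the top level, so the original import error usually points to an older or cached installation):

```python
from transformers.trainer_utils import set_seed  # workaround import path

set_seed(42)  # seeds Python's `random`, NumPy and PyTorch in one call
```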
transformers
11,453
closed
Give each hub test a different repo name
# What does this PR do? To reduce flakiness in the tests using the hub and be able to investigate failures more closely, this PR gives each of them a different namespace.
04-26-2021 15:38:17
04-26-2021 15:38:17
transformers
11,452
closed
wav2vec2 doesn't work with torch.distributed.launch & multi GPU
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0.dev0 - Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help @patil-suraj @elgeish @patrickvonplaten I see the readme is written by @patil-suraj and @elgeish , so any help would be appreciated. ## Information Model I am using (Bert, XLNet ...): wav2vec2 The problem arises when using: * [x] the official example scripts: (give details below) Although the fine-tuning week is over, the example is pretty useful. I am working on a voice recognition problem and want to train using distributed learning. I refer to huggingface's official example here: <https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md> ## To reproduce Steps to reproduce the behavior: 1. On a clean environment, install requirements and git clone transformers repository. 2. Run multi GPU training code as written in the readme. 3. Bug reproduces. The code is ```shell git clone https://github.com/huggingface/transformers.git cd transformers/examples/research_projects/wav2vec2/ mkdir outputs python -m torch.distributed.launch \ --nproc_per_node=4 run_common_voice.py \ --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \ --dataset_config_name="tr" \ --output_dir=./outputs \ --overwrite_output_dir \ --num_train_epochs="5" \ --per_device_train_batch_size="16" \ --learning_rate="3e-4" \ --warmup_steps="500" \ --evaluation_strategy="steps" \ --save_steps="400" \ --eval_steps="400" \ --logging_steps="400" \ --save_total_limit="3" \ --freeze_feature_extractor \ --feat_proj_dropout="0.0" \ --layerdrop="0.1" \ --gradient_checkpointing \ --fp16 \ --group_by_length \ --do_train --do_eval ``` ## Error The following error occurs. ```text 0%| | 0/275 [00:00<?, ?it/s]/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/modules/module.py:760: UserWarning: Using non-full backward hooks on a Module that does not return a single Tensor or a tuple of Tensors is deprecated and will be removed in future versions. This hook will be missing some of the grad_output. Please use register_full_backward_hook to get the documented behavior. warnings.warn("Using non-full backward hooks on a Module that does not return a " /home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/modules/module.py:795: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior. 
warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes " Traceback (most recent call last): File "run_common_voice.py", line 512, in <module> main() File "run_common_voice.py", line 484, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 1118, in train tr_loss += self.training_step(model, inputs) File "run_common_voice.py", line 230, in training_step loss = self.compute_loss(model, inputs) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 1548, in compute_loss outputs = model(**inputs) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 692, in forward if self.reducer._rebuild_buckets(): RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). Traceback (most recent call last): File "run_common_voice.py", line 512, in <module> main() File "run_common_voice.py", line 484, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 1118, in train tr_loss += self.training_step(model, inputs) File "run_common_voice.py", line 230, in training_step loss = self.compute_loss(model, inputs) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 1548, in compute_loss outputs = model(**inputs) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 692, in forward if self.reducer._rebuild_buckets(): RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). 
Killing subprocess 25001 Killing subprocess 25002 Killing subprocess 25003 Killing subprocess 25004 Traceback (most recent call last): File "/home/aidealab/.conda/envs/hf/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/aidealab/.conda/envs/hf/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module> main() File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main sigkill_handler(signal.SIGTERM, None) # not coming back File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd) subprocess.CalledProcessError: Command '['/home/aidealab/.conda/envs/hf/bin/python', '-u', 'run_common_voice.py', '--local_rank=3', '--model_name_or_path=facebook/wav2vec2-large-xlsr-53', '--dataset_config_name=tr', '--output_dir=/home/aidealab/workspace/transformers/examples/research_projects/wav2vec2/outputs', '--overwrite_output_dir', '--num_train_epochs=5', '--per_device_train_batch_size=16', '--learning_rate=3e-4', '--warmup_steps=500', '--evaluation_strategy=steps', '--save_steps=400', '--eval_steps=400', '--logging_steps=400', '--save_total_limit=3', '--freeze_feature_extractor', '--feat_proj_dropout=0.0', '--layerdrop=0.1', '--gradient_checkpointing', '--fp16', '--group_by_length', '--do_train', '--do_eval']' returned non-zero exit status 1. ``` ## Expected behavior It is expected the script runs without error.
04-26-2021 12:45:05
04-26-2021 12:45:05
## Error messages in short ### Warning ``` UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior. warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes " ``` This warning may not seem to be the direct reason for the crash, but I encounter it in my own scripts as well, and the training ends up freezing. ### Blocking Error ``` RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). ``` And I have no idea how to solve this error.<|||||>## Non-wav2vec2 case I also looked up other examples. https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering And it worked. So it seems to be a wav2vec2-specific problem with multi GPU training.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>The easiest solution at the moment is to follow @stas00's PR at #11638 <|||||>This error most likely has to do with randomly skipping the layers in LayerDrop: one GPU skips a layer while another continues, and they get out of sync. Try to see if the error goes away if you disable its skipping logic and let all layers run. You can see how I did it: https://github.com/huggingface/transformers/blob/c8acf9219febf534232f01ecc253034e6d3b68c3/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L643-L667 I don't think LayerDrop, the way it's used, can work with more than one GPU.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>FYI, I have the same issue when setting mask_time_prob to 0<|||||>Hey @voidful - could you add a reproducible code snippet here? :-)
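Two workarounds follow from the discussion above, sketched with flag names available in current transformers and in the example script (treat them as things to try, not a guaranteed fix): pass `--layerdrop 0.0` to `run_common_voice.py` so every rank executes every layer, or let DDP tolerate skipped layers via the Trainer argument below.

```python
from transformers import TrainingArguments

# Sketch only: most arguments are omitted. `ddp_find_unused_parameters` maps to
# DDP's find_unused_parameters and lets ranks diverge on which layers actually ran.
training_args = TrainingArguments(
    output_dir="./outputs",
    ddp_find_unused_parameters=True,
)
```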
transformers
11,451
closed
mBART and DataCollatorForLanguageModeling: index -1 is out of bounds for dimension 1 with size N
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core - Python version: 3.6.8 - PyTorch version (GPU?): 1.7.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): mBART The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce I would like to finetune mBART with a parallel corpus (custom dataset). Steps to reproduce the behavior: 1. Load an MBartForConditionalGeneration and an MBartTokenizer with pre-trained weights. 2. DataCollatorForLanguageModeling as the data collator. 3. I use a Trainer and a custom torch Dataset class. I noticed the problem arises when combining mBART and DataCollatorForLanguageModeling. I found a similar issue without no specific solution: https://github.com/huggingface/transformers/issues/9417. Here is my error stack: ``` File "main.py", line 99, in <module> main() File "main.py", line 85, in main resume_from_checkpoint=LANG_MODEL_PATH + 'last_model' if os.path.exists(LANG_MODEL_PATH + 'last_model') else None File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/trainer.py", line 1120, in train tr_loss += self.training_step(model, inputs) File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/trainer.py", line 1524, in training_step loss = self.compute_loss(model, inputs) File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/trainer.py", line 1556, in compute_loss outputs = model(**inputs) File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/models/mbart/modeling_mbart.py", line 1287, in forward decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id) File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/models/mbart/modeling_mbart.py", line 74, in shift_tokens_right decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze() RuntimeError: index -1 is out of bounds for dimension 1 with size 309 33%|███████████████ | 1/3 [00:29<00:58, 29.07s/it] ```
04-26-2021 11:46:39
04-26-2021 11:46:39
Hi @AdrianNunez Could you post the for of the `input_ids` and `labels`. As explained in the [docs](https://huggingface.co/transformers/model_doc/mbart.html#training-of-mbart), mBART expects `input_ids` and `labels` in a certain format. `labels` are prepared with the format `ids [eos, tgt_lang_code]` and then the `shift_tokens_right` function prepares `decoder_input_ids` by shifting the `labels` to right so `decoder_input_ids` become `[tgt_lang_code] ids [eos]` so from the error, it seem that there is either `eos` or `tgt_lang_code` code missing in the labels. But if this is how you want to use it then you should provide the `deocder_input_ids` manually. Also `DataCollatorForLanguageModeling` is not really meant to be used with mBART, it's intended MLM and auto-regressive models like BERT and GPT, so it might not work with mBART which is expected. <|||||>Hi @patil-suraj, thank you for your answer. This is an input example: ``` {'input_ids': tensor([ 33424, 6, 95866, 216479, 104, 3934, 10, 5744, 41, 22, 6, 4, 10, 23182, 6, 5, 2, 250004, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'special_tokens_mask': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 'labels': tensor([ 3786, 11281, 293, 13173, 90929, 23, 6, 4, 293, 19190, 59486, 6, 5, 2, 250019, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])} ``` And decoding the input and labels: ``` ['Beat', '', 'rix', 'evolve', 'd', 'into', 'a', 'modern', 'que', 'en', '', ',', 'a', 'professional', '', '.', '</s>', 'en_XX', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', 
'<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>'] ['Ze', 'werd', 'een', 'moderne', 'koning', 'in', '', ',', 'een', 'vak', 'vrouw', '', '.', '</s>', 'nl_XX', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', 
'<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>'] ``` > Also DataCollatorForLanguageModeling is not really meant to be used with mBART, it's intended MLM and auto-regressive models like BERT and GPT, so it might not work with mBART which is expected. Thank you for the advice. Is there an specific data collator for the BART model family? Thank you in advance.<|||||>Thanks, I will take a look. > Thank you for the advice. Is there an specific data collator for the BART model family? If you want to train mBART for [translation](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) or [summrization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) you could take a look at these examples <|||||>> If you want to train mBART for translation or summrization you could take a look at these examples Thank you for the links and the help. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
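To illustrate the label format described above without going through a collator, here is a rough sketch of preparing a single translation pair for mBART fine-tuning (API names as of recent 4.x releases; the sentences are just examples): the tokenizer appends `[eos, src_lang_code]` to the inputs, while in target mode it produces labels ending in `[eos, tgt_lang_code]`, from which the model derives `decoder_input_ids` via `shift_tokens_right`.

```python
from transformers import MBartTokenizer, MBartForConditionalGeneration

tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="nl_XX"
)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

inputs = tokenizer("Beatrix evolved into a modern queen.", return_tensors="pt")
with tokenizer.as_target_tokenizer():
    labels = tokenizer("Ze werd een moderne koningin.", return_tensors="pt").input_ids

outputs = model(**inputs, labels=labels)  # decoder_input_ids are derived internally
print(outputs.loss)
```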
transformers
11,450
closed
[Black] Pin Version
Pin black until the repo is re-styled with black 21.4b0.
04-26-2021 11:39:31
04-26-2021 11:39:31
After discussion with @lhoestq & @sgugger, upgrading is the better option => so we'll merge https://github.com/huggingface/transformers/pull/11442<|||||>Shouldn't we close this?
transformers
11,449
closed
Clarify description of the is_split_into_words argument
# What does this PR do? Clarifies the description of the `is_split_into_words` argument which is used in the tokenizers. Closes #11333 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Issue: #11333 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? No additional tests were needed, I only clarified docs. ## Who can review? I initially discussed this with @LysandreJik in #11333. I think @sgugger can review it too! Thank you for taking the time to review this! NB the failed CI test seems unrelated.
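A quick illustration of the argument being documented (a minimal sketch, not taken from the PR itself): with `is_split_into_words=True` the input is treated as a list of words that still need subword tokenization, rather than as already tokenized ids.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# The text is already split into words, but NOT yet tokenized into subwords.
encoding = tokenizer(["Transformers", "handles", "pretokenized", "inputs"], is_split_into_words=True)
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))  # subword tokens with [CLS]/[SEP] added
```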
04-26-2021 10:33:28
04-26-2021 10:33:28
Thank you for the feedback!
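Editor's note: for readers landing on this record, a minimal usage sketch of the argument being documented; the checkpoint name `bert-base-uncased` is only an example.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# `is_split_into_words=True` tells the tokenizer that the input is already a
# sequence of words (e.g. from a token-classification dataset); each word may
# still be split further into sub-word tokens.
encoding = tokenizer(["Hello", "world", "!"], is_split_into_words=True)
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
```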
transformers
11,448
closed
Activating gradient checkpointing
Working on a script which uses `GPTNeoForCausalLM` as the model object. As I understand it, gradient checkpointing requires checkpointing every layer. How do I change `GPTNeoForCausalLM` to incorporate gradient checkpointing, given that it doesn't show the layers explicitly but rather uses ``` self.transformer = GPTNeoModel(config) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) ``` PS. working on [visual-grounding](https://github.com/EleutherAI/visual-grounding/tree/main) with which @stas00 has already been a huge help, and knows the issues.
04-26-2021 09:48:12
04-26-2021 09:48:12
In some places I think "activation checkpointing" is referred to as "gradient checkpointing". The former sounds more logical as it's activations that aren't being saved. So it's https://pytorch.org/docs/stable/checkpoint.html. It should be already there, as you can see the `GPTNeoModel` has it setup: https://github.com/huggingface/transformers/blob/bc2571e61c985ec82819cf01ad038342771c94d0/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L834<|||||>Ah. Thanks for the clarification.
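Editor's note: a minimal sketch of enabling this from user code, assuming a transformers version (like the one linked above) where GPT-Neo reads a `gradient_checkpointing` flag from its config; the checkpoint name is only an example.
```python
from transformers import GPTNeoConfig, GPTNeoForCausalLM

# Turn on activation/gradient checkpointing via the config; GPTNeoModel checks
# this flag inside its own forward pass, so nothing in GPTNeoForCausalLM itself
# needs to change. use_cache is disabled because caching and checkpointing
# don't mix during training.
config = GPTNeoConfig.from_pretrained(
    "EleutherAI/gpt-neo-1.3B", gradient_checkpointing=True, use_cache=False
)
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B", config=config)
```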
transformers
11,447
closed
Google Colab TypeError: expected str, bytes or os.PathLike object, not NoneType
- `transformers` version: 4.5.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) Models I am using are **RoBERTa** (xlm-roberta-large) and **BERT** (bert-base-multilingual-cased) ### The problem arises when using: In the beginning of April I started getting this error without any changes on my side. I just loaded my old Colab notebook (that worked well few months before that). Now I still getting this error and don't know what to do. ### The tasks I am working on is: Just playing with models ### Steps to reproduce the behavior: 1. Open Google Colab 2. Run code below 3. Enjoy Error Message ``` !pip3 install transformers import torch from transformers import pipeline, XLMTokenizer, XLMWithLMHeadModel model_bert = 'bert-base-multilingual-cased' model_roberta = 'xlm-roberta-large' tokenizer = XLMTokenizer.from_pretrained('xlm-roberta-large') model = XLMWithLMHeadModel.from_pretrained('xlm-roberta-large') ``` ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-09548189c94e> in <module>() 4 model_bert = 'bert-base-multilingual-cased' 5 model_roberta = 'xlm-roberta-large' ----> 6 tokenizer = XLMTokenizer.from_pretrained('xlm-roberta-large') 7 model = XLMWithLMHeadModel.from_pretrained('xlm-roberta-large') 2 frames /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1708 1709 return cls._from_pretrained( -> 1710 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs 1711 ) 1712 /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs) 1779 # Instantiate tokenizer. 1780 try: -> 1781 tokenizer = cls(*init_inputs, **init_kwargs) 1782 except OSError: 1783 raise OSError( /usr/local/lib/python3.7/dist-packages/transformers/models/xlm/tokenization_xlm.py in __init__(self, vocab_file, merges_file, unk_token, bos_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens, lang2id, id2lang, do_lowercase_and_remove_accent, **kwargs) 642 self.zh_word_tokenizer = None 643 --> 644 with open(vocab_file, encoding="utf-8") as vocab_handle: 645 self.encoder = json.load(vocab_handle) 646 self.decoder = {v: k for k, v in self.encoder.items()} TypeError: expected str, bytes or os.PathLike object, not NoneType ``` Also, I tried to change from `tokenizer = XLMTokenizer.from_pretrained('xlm-roberta-large')` to `tokenizer = XLMTokenizer.from_pretrained(model_roberta)` or using another model `tokenizer = XLMTokenizer.from_pretrained('bert-base-multilingual-cased')`but got same error
04-26-2021 09:43:01
04-26-2021 09:43:01
My issue is quite similar to #10756 <|||||>Hi! I believe you're using XLM models/tokenizers with XLM-R checkpoints. Have you tried using `XLMRobertaTokenizer` and `XLMRobertaLMHeadModel` instead? You're also trying to load a BERT checkpoint in an XLM tokenizer, this won't work. If you want to load any checkpoint without worrying about the tokenizer/model architecture, I would recommend you use the `Auto*` instead: ```py import torch from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead model_bert = 'bert-base-multilingual-cased' model_roberta = 'xlm-roberta-large' tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large') model = AutoModelWithLMHead.from_pretrained('xlm-roberta-large') ```<|||||>Thank you very much. Everything works well now! <|||||>> Hi! I believe you're using XLM models/tokenizers with XLM-R checkpoints. Have you tried using `XLMRobertaTokenizer` and `XLMRobertaLMHeadModel` instead? > > You're also trying to load a BERT checkpoint in an XLM tokenizer, this won't work. If you want to load any checkpoint without worrying about the tokenizer/model architecture, I would recommend you use the `Auto*` instead: > > ```python > import torch > from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead > model_bert = 'bert-base-multilingual-cased' > model_roberta = 'xlm-roberta-large' > tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large') > model = AutoModelWithLMHead.from_pretrained('xlm-roberta-large') > ``` thank you a lot
transformers
11,446
closed
[wav2vec] deepspeed eval bug in the case of >1 gpus
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <2,4> - Using distributed or parallel set-up in script?: <distributed> ### Who can help @stas00 @patrickvonplaten @patil-suraj ## Information I'm working on wav2vec2.0 using the following official script of huggingface. https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py I am trying to finetune huggingface model with multiple gpus using deepspeed. ``` deepspeed --num_gpus=1 run_common_voice.py --deepspeed ds_config.json --do_train --do_eval ``` works, but ``` deepspeed --num_gpus=2 run_common_voice.py --deepspeed ds_config.json --do_train --do_eval ``` stops working and freezes at the end of eval. The progress bar is 100% done but the eval result is not returned and it freezes. ## To reproduce This is how to reproduce! https://colab.research.google.com/drive/1VRCGcnhBlrMFYQ5aaNebucZuja-WB2I2?usp=sharing Steps to reproduce the behavior: 1. Install deepspeed 2. Add `with autocast():` after line 481 in run_common_voice.py 3. Set param: `--deepspeed ds_config.json --do_train --do_eval` 4. Run run_common_voice.py using deepspeed with 1> gpus ds_config has the following parameters. ```ds_config.json { "fp16": { "enabled": "true", "loss_scale": 0, "loss_scale_window": 1000, "hysteresis": 2, "min_loss_scale": 1, "opt_level": "O3" }, "steps_per_print": 100, "wall_clock_breakdown": "false" } ``` ## Expected behavior The finetuning eval should be executed without freezing.
04-26-2021 09:41:44
04-26-2021 09:41:44
deepspeed doesn't work with `autocast`, it has its own way of dealing with mixed precision, if you look in the `trainer.py` it's carefully bypassed. does the problem go away if you remove `autocast`?<|||||>@stas00 Thank you for your reply! When I deleted `autocast` and ran it, I got the error `RuntimeError (Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same)`. I ran it with autocast to eliminate this error. FYI, when I do not do_eval or use only 1 GPU, the code run fine with autocast and deepspeed. The full text of the error is below. ``` File "run_common_voice.py", line 512, in <module> main() File "run_common_voice.py", line 484, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1240, in train tr_loss += self.training_step(model, inputs) File "run_common_voice.py", line 232, in training_step loss = self.compute_loss(model, inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1667, in compute_loss outputs = model(**inputs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/deepspeed/runtime/engine.py", line 928, in forward loss = self.module(*inputs, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1050, in forward return_dict=return_dict, File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 828, in forward hidden_states = self.feature_extractor(input_values) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 253, in forward hidden_states = conv_layer(hidden_states) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 156, in forward hidden_states = self.conv(hidden_states) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 263, in forward return self._conv_forward(input, self.weight, self.bias) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 260, in _conv_forward self.padding, self.dilation, self.groups) RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same ``` You can check this error in the last cell of the following colab. https://colab.research.google.com/drive/1VRCGcnhBlrMFYQ5aaNebucZuja-WB2I2?usp=sharing<|||||>Thank you for making this reproducible, @tommy19970714 - I haven't worked yet with wav2vec, so I will have a look and get back to you.<|||||>OK, this is a new type of model that requires a special type of handling. 
The NLP models get `long` inputs which get converted to the same dtype as the embedding weights, which under deepspeed/fp16 are `float16`. Currently deepspeed does `model.half`. This model however receives inputs that are `float32` and it doesn't check whether the model weights are fp16 or not. Hence the error. So this is one way to fix it: ``` diff --git a/src/transformers/models/wav2vec2/modeling_wav2vec2.py b/src/transformers/models/wav2vec2/modeling_wav2vec2.py index 98123bdd3..639c2bc13 100755 --- a/src/transformers/models/wav2vec2/modeling_wav2vec2.py +++ b/src/transformers/models/wav2vec2/modeling_wav2vec2.py @@ -153,7 +153,7 @@ class Wav2Vec2LayerNormConvLayer(nn.Module): self.activation = ACT2FN[config.feat_extract_activation] def forward(self, hidden_states): - hidden_states = self.conv(hidden_states) + hidden_states = self.conv(hidden_states.to(dtype=self.conv.weight.dtype)) hidden_states = hidden_states.transpose(-2, -1) hidden_states = self.layer_norm(hidden_states) ``` The test I was using is: ``` CUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 \ examples/research_projects/wav2vec2/run_common_voice.py \ --model_name_or_path="facebook/wav2vec2-large-xlsr-53" --dataset_config_name="tr" \ --output_dir=./wav2vec2-large-xlsr-turkish-demo --overwrite_output_dir --num_train_epochs="5" \ --per_device_train_batch_size="16" --learning_rate="3e-4" --warmup_steps="500" \ --evaluation_strategy="steps" --save_steps="5" --eval_steps="5" --logging_steps="5" \ --save_total_limit="3" --freeze_feature_extractor --feat_proj_dropout="0.0" --layerdrop="0.1" \ --gradient_checkpointing --fp16 --group_by_length --do_train --do_eval --deepspeed \ tests/deepspeed/ds_config_zero2.json ``` Could probably move it to the top-level layer so it'd work in all cases, if this exact path isn't always taken. So this overcomes: ``` RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same ``` but now running into: ``` File "examples/research_projects/wav2vec2/run_common_voice.py", line 512, in <module> main() File "examples/research_projects/wav2vec2/run_common_voice.py", line 484, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 1240, in train tr_loss += self.training_step(model, inputs) File "examples/research_projects/wav2vec2/run_common_voice.py", line 232, in training_step loss = self.compute_loss(model, inputs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 1667, in compute_loss outputs = model(**inputs) File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/mnt/nvme1/code/github/00optimize/deepspeed/deepspeed/runtime/engine.py", line 942, in forward loss = self.module(*inputs, **kwargs) File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1076, in forward loss = F.ctc_loss( File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/functional.py", line 2436, in ctc_loss return torch.ctc_loss( RuntimeError: "ctc_loss_cuda" not implemented for 'Half' ``` so need to look more to see what to do there, probably need to switch to float32 just for that op. 
However, it appears that may be this model can't be trained/eval'ed in fp16/mixed precision? When I run: ``` CUDA_VISIBLE_DEVICES=0 python examples/research_projects/wav2vec2/run_common_voice.py \ --model_name_or_path="facebook/wav2vec2-large-xlsr-53" --dataset_config_name="tr" \ --output_dir=./wav2vec2-large-xlsr-turkish-demo --overwrite_output_dir --num_train_epochs="5" \ --per_device_train_batch_size="16" --learning_rate="3e-4" --warmup_steps="500" \ --evaluation_strategy="steps" --save_steps="5" --eval_steps="5" --logging_steps="5" \ --save_total_limit="3" --freeze_feature_extractor --feat_proj_dropout="0.0" --layerdrop="0.1" \ --gradient_checkpointing --fp16 --group_by_length --do_train --do_eval ``` I see: ``` {'loss': nan, 'learning_rate': 4.2e-06, 'epoch': 0.05} ``` We have multiple models that won't train under `fp16`-mixed precision, because they were pretrained in `bfloat16` which doesn't lend to `fp16` numerical range. Deepspeed devs are working on adding the fp32 mode (next release hopefully). https://github.com/microsoft/DeepSpeed/pull/1004 p.s. please don't mix `amp` with running modes that don't use `amp` (deepspeed is one of them) <|||||>Hi, @stas00 Thanks for your help! (I am working together with @tommy19970714 ) I saw your tweet about the new release of version 0.3.16. https://github.com/microsoft/DeepSpeed/releases/tag/v0.3.16 https://huggingface.co/transformers/master/main_classes/trainer.html#fp32-precision I set the `deepspeed.json` config to `auto`, referring to the article. ```JSON { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "hysteresis": 2, "min_loss_scale": 1, "opt_level": "O3" }, "steps_per_print": 100, "wall_clock_breakdown": "false" } ``` In addition to your suggestion, I made some changes to the model file to convert `log_probs` to float32: ```python diff --git a/src/transformers/models/wav2vec2/modeling_wav2vec2.py b/src/transformers/models/wav2vec2/modeling_wav2vec2.py index ba548dc3d..ce2ecdbe3 100755 --- a/src/transformers/models/wav2vec2/modeling_wav2vec2.py +++ b/src/transformers/models/wav2vec2/modeling_wav2vec2.py @@ -153,7 +153,7 @@ class Wav2Vec2LayerNormConvLayer(nn.Module): self.activation = ACT2FN[config.feat_extract_activation] def forward(self, hidden_states): - hidden_states = self.conv(hidden_states) + hidden_states = self.conv(hidden_states.to(dtype=self.conv.weight.dtype)) hidden_states = hidden_states.transpose(-2, -1) hidden_states = self.layer_norm(hidden_states) @@ -1071,10 +1071,15 @@ class Wav2Vec2ForCTC(Wav2Vec2PreTrainedModel): flattened_targets = labels.masked_select(labels_mask) log_probs = F.log_softmax(logits, dim=-1).transpose(0, 1) + # log_probs = log_probs.to(dtype=torch.float32), # doesn't work here with torch.backends.cudnn.flags(enabled=False): loss = F.ctc_loss( - log_probs, + log_probs.to(dtype=torch.float32), + # log_probs.to(dtype=torch.bfloat16), + # log_probs, flattened_targets, input_lengths, target_lengths, ``` Then it somehow worked! Does this seem to be a proper fix? I might not be fully understanding the type differences tho. Also, what else can I do to have it merged into the main branch? I am willing to contribute, but I am not sure if the code is good enough. I assume it is missing config handling. <|||||>Thank you for suggested adjustments, @qqhann For the proper solution we shouldn't mess with the model ;) The inputs `dtype` change is normally done inside the training loop, because it knows the context of the training. 
We just didn't need to do it until now, since as I mentioned earlier for NLP models we get the inputs adjusted to the right type through embedding lookup, so this is different. One of the important parts here is to add tests for each of these situations. What would be really useful is if you could help with creating a tiny wav2vec2 random model, to enable quick functional tests. Here are some examples of such scripts: - https://huggingface.co/stas/mt5-tiny-random/blob/main/mt5-make-tiny-model.py - https://huggingface.co/stas/t5-very-small-random/blob/main/t5-make-very-small-model.py In both cases it takes a normal model and reshapes it to a much smaller size. Usually the hard part is to figure out the non-model parts - dicts, tokenizers, etc. I don't know yet anything about wav2vec2 so it'd help if you had the know-how to create it. The idea behind a tiny model is that it runs just like a normal model, but its weights are random, it's very small ~5-10MB or even smaller, it loads fast, and of course it produces random results. This is perfect for functional testing. If you're not sure how to approach it, that's alright too. We will figure it out.<|||||>To update: @patrickvonplaten is kindly going to create a few tiny models and using his tiny `--dataset_name=patrickvonplaten/librispeech_asr_dummy` it should be possible to use `examples/research_projects/wav2vec2/run_asr.py` as the dev and test bench, so when this happens I should be able to complete this work. Until then your workaround is probably good enough if it's working for you.<|||||>You're welcome to follow my progress at fixing this issue at https://github.com/huggingface/transformers/pull/11638 ZeRO-2 works fully. ZeRO-3 still has one issue, but fp32 works. Do try and let me know if you run into any problems. <|||||>@stas00 Thanks for letting me know! I'll keep an eye on it!<|||||>Update: with deepspeed master both zero-2 and zero-3 now work https://github.com/huggingface/transformers/pull/11638 It's ready to be merged. Please give it a try.
transformers
11,445
closed
CLIP
# What does this PR do? This PR adds the [CLIP](https://github.com/openai/CLIP) model. CLIP is a multi-modal vision+language model which uses a transformer model for encoding both the images and the text. - The model here is designed such that both `CLIPTextModel` and `CLIPVisionModel` can be loaded independently, and composed together to get the `CLIPModel`. - Both `CLIPTextModel` and `CLIPVisionModel` use the shared encoder class `CLIPEncoder`. - The config classes are also kept separate, i.e. `CLIPTextConfig` and `CLIPVisionConfig`. This could be kept in one config class, but then we would have to add two arguments for each config value, i.e. `text_hidden_size` for the text model, `vision_hidden_size` for the vision model, etc. One issue here is that when we load an individual model, like `CLIPTextModel`, using the weights of the whole `CLIPModel`, the config ends up containing both the text and vision config dicts; this does not cause any issue but could be confusing to look at. One important thing to note here is that CLIP's tokenizer does have a pad token defined for it, but 0 is used as `pad_token_id` to pad the text, and the token associated with 0 is not a pad token. So here, to be able to do padding I've added `pad_token_id` as a `property` which returns 0. I would be happy to hear if there is some other way to achieve this. Also, I've added a processor class here but I'm not sure if we really need it for this model. We could easily use the extractor for the vision model and the tokenizer for the text model. Would love your review of the design @LysandreJik , @patrickvonplaten , @sgugger.
04-26-2021 09:06:55
04-26-2021 09:06:55
@sgugger > is it possible to add a fast version of the tokenizer? Yes, will add the fast version as well. > why are there two classes for the vision model and the text model? They have the exact same forward, so we should only have one class IMO. Added two versions so that one could just load `CLIPTextModel` or `CLIPVisionModle` directly from `CLIPModel`'s weights. If we just keep single modules then it's not possible to load the weights from the full model because then the keys won't match. The `CLIPModel` has these extra keys `text_model` and `vision_model`, hence the two extra modules with the same keys. This would allow one to use the vision model in some other downstream tasks like adding a liner layer on top or using it as an image encoder in some other settings. Not sure if users will actually want this, but this does not really add much complexity to the code IMO.<|||||>All green!! I've addressed most of the suggestions, notably - new processor API => as discussed with @patrickvonplaten and @LysandreJik processor's `__call__` now accepts both the text and/or images and returns a single encoding dict. `as_target_processor` is now removed. The API is as follows ```python3 model = CLIPModel.from_pretrained(checkpoint) inputs = CLIPProcessor(texts=..., images=..., some_other_kwargs) outputs = model(**inputs) ``` - the `encode_text` and `encode_image` methods are renamed to `get_text_features` and `get_image_features` - Added fast tokenizer. Ready for second review @LysandreJik @sgugger @patrickvonplaten <|||||>> All green!! > I've addressed most of the suggestions, notably > > * new processor API => as discussed with @patrickvonplaten and @LysandreJik processor's `__call__` now accepts both the text and/or images and returns a single encoding dict. `as_target_processor` is now removed. The API is as follows > > ```python > model = CLIPModel.from_pretrained(checkpoint) > inputs = CLIPProcessor(texts=..., images=..., some_other_kwargs) > outputs = model(**inputs) > ``` > > * the `encode_text` and `encode_image` methods are renamed to `get_text_features` and `get_image_features` > * Added fast tokenizer. > > Ready for second review @LysandreJik @sgugger @patrickvonplaten How to use processor in __getitem()__? I got an error"RuntimeError: stack expects each tensor to be equal size, but got [1, 11] at entry 0 and [1, 13] at entry 1" ,as follow: def __getitem__(self, idx): img_id = self.img_ids[idx] # randomly pick one caption from the image captions text = random.choice(self.img_id_to_captions[img_id]) img_filename = self.img_id_to_filename[img_id] img_path = op.join(self.img_dir, img_filename) img = Image.open(img_path) input = self.processor(text = text, images = img, return_tensors = "pt", padding = True) return input I thought processor might need other args, inherited from pretraintokenizerbase,such as padding.But I couldn't find it at processor's __call__ in doc.<|||||>Hi @lycfight could you please open an issue with a minimal code snippet so we could take a look. Thanks :) <|||||>> Hi @lycfight could you please open an issue with a minimal code snippet so we could take a look. Thanks :) of course
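Editor's note on the last question in this thread (not an official answer): the `stack expects each tensor to be equal size` error comes from per-sample tensors of different sequence lengths. Below is a sketch of one workaround, assuming the processor forwards tokenizer kwargs such as `padding`/`max_length`/`truncation` and that 77 is the text context length; the class and attribute names mirror the snippet quoted above and are hypothetical.
```python
import random
import os.path as op

from PIL import Image
from torch.utils.data import Dataset


class ClipCaptionDataset(Dataset):
    """Hypothetical dataset mirroring the snippet above; the processor and the
    image/caption lookup structures are assumed to be built elsewhere."""

    def __init__(self, processor, img_ids, img_dir, img_id_to_captions, img_id_to_filename):
        self.processor = processor
        self.img_ids = img_ids
        self.img_dir = img_dir
        self.img_id_to_captions = img_id_to_captions
        self.img_id_to_filename = img_id_to_filename

    def __len__(self):
        return len(self.img_ids)

    def __getitem__(self, idx):
        img_id = self.img_ids[idx]
        text = random.choice(self.img_id_to_captions[img_id])
        img = Image.open(op.join(self.img_dir, self.img_id_to_filename[img_id]))
        # Pad every caption to one fixed length so the default collate_fn can
        # stack the batch, then drop the extra batch dimension added by
        # return_tensors="pt".
        enc = self.processor(
            text=text, images=img, return_tensors="pt",
            padding="max_length", max_length=77, truncation=True,
        )
        return {k: v.squeeze(0) for k, v in enc.items()}
```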
transformers
11,444
closed
Variable Correction for Consistency in Distillation Example
As the error comes from the incosistency of variable meaning number of gpus in parser and its actual usage in the train.py script, 'gpus' and 'n_gpu' respectively, the correction makes the example work # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) #11441 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->@VictorSanh
04-26-2021 08:42:31
04-26-2021 08:42:31
transformers
11,443
closed
BERT model gets fairly random results
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0 - Platform: linux - Python version: 3.8 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik, @sgugger ## Information Model I am using BERT-base-uncased model with run_glue.py on MRPC dataset, there are substantial differences in the results each time I run the codes, sometimes it reaches 5% percent. Too much variation makes the results not reliable, and this is quite a big issue. Thanks for your help on this. This might be a bug in the trainer/or the model itself. 
first run: ``` [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> epoch = 3.0 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_average_metrics = 0.8300182784978398 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_mem_cpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_mem_cpu_peaked_delta = 2MB [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_mem_gpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_mem_gpu_peaked_delta = 264MB [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_accuracy = 0.799 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_combined_score = 0.83 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_f1 = 0.861 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_loss = 0.4643 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_runtime = 0:00:00.38 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_samples_per_second = 529.617 ``` second run: ``` [INFO|trainer_pt_utils.py:722] 2021-04-26 10:02:59,294 >> ***** test metrics ***** [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> epoch = 3.0 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_average_metrics = 0.8090236094437775 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_mem_cpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_mem_cpu_peaked_delta = 2MB [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_mem_gpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_mem_gpu_peaked_delta = 264MB [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> mrpc_eval_accuracy = 0.7745 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> mrpc_eval_combined_score = 0.809 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> mrpc_eval_f1 = 0.8435 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> mrpc_eval_loss = 0.4631 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,295 >> mrpc_eval_runtime = 0:00:00.35 [INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,295 >> mrpc_eval_samples_per_second = 567.515 ``` ## To reproduce Steps to reproduce the behavior: Please run run_glue.py default script on MRPC. ## Expected behavior The model needs to reproduce the same results each time it runs.
04-26-2021 08:15:47
04-26-2021 08:15:47
Please give us the whole command you are running as we can't reproduce without it. Are you properly setting the seed? Depending on the seed used, the results differ a lot on MRPC, since it's a tiny dataset. This is known and there have been [published papers](https://arxiv.org/pdf/2002.06305.pdf) on this.<|||||>Hi, thank you - the issue was resolved by moving the code to version 4.6.dev. Thanks
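Editor's note: a small sketch of how the seed dependence can be checked with the official example script; the script path, model name and output directories are placeholders.
```python
import subprocess

# Run the official run_glue.py example with several seeds and compare the
# spread of the MRPC metrics; variation of a few points is expected on such
# a small dataset.
for seed in (13, 21, 42, 87, 100):
    subprocess.run(
        [
            "python", "run_glue.py",
            "--model_name_or_path", "bert-base-uncased",
            "--task_name", "mrpc",
            "--do_train", "--do_eval",
            "--seed", str(seed),
            "--output_dir", f"mrpc-seed-{seed}",
        ],
        check=True,
    )
```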
transformers
11,442
closed
Upgrade Black to version 21.4b0
This PR reformats all files with Black's newest version 21.4b0.
04-26-2021 07:07:59
04-26-2021 07:07:59
Closing this PR in favor of https://github.com/huggingface/transformers/pull/11450
transformers
11,441
closed
Minor error on example distillation script
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Linux-4.18.0-147.el8.x86_64-x86_64-with-Ubuntu-16.04-xenial - Python version: 3.7.10 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: True ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> I think @VictorSanh might help since it's about a minor bug in distillation. ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) examples/research_projects/distillation The tasks I am working on is: * [] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) It's not GLUE/SQUaD but official BookCorpus and Wikipedia datasets from `datasets` ## To reproduce Steps to reproduce the behavior: 1. Convert concatenation of bookcorpus and Wikipedia text from `datasets` to `txt` file. 2. Separate it with `\n` 3. Run scripts following *A. Preparing the data* <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` Traceback (most recent call last): File "train.py", line 322, in <module> main() File "train.py", line 223, in main init_gpu_params(args) File "/volume/compression_and_distillation/transformers/examples/distillation/utils.py", line 55, in init_gpu_params if params.n_gpu <= 0: AttributeError: 'Namespace' object has no attribute 'n_gpu' Traceback (most recent call last): File "train.py", line 322, in <module> main() File "train.py", line 223, in main init_gpu_params(args) File "/volume/compression_and_distillation/transformers/examples/distillation/utils.py", line 55, in init_gpu_params if params.n_gpu <= 0: AttributeError: 'Namespace' object has no attribute 'n_gpu' Traceback (most recent call last): File "train.py", line 322, in <module> main() File "train.py", line 223, in main init_gpu_params(args) File "/volume/compression_and_distillation/transformers/examples/distillation/utils.py", line 55, in init_gpu_params if params.n_gpu <= 0: AttributeError: 'Namespace' object has no attribute 'n_gpu' Traceback (most recent call last): File "train.py", line 322, in <module> main() File "train.py", line 223, in main init_gpu_params(args) File "/volume/compression_and_distillation/transformers/examples/distillation/utils.py", line 55, in init_gpu_params if params.n_gpu <= 0: AttributeError: 'Namespace' object has no attribute 'n_gpu' Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/venv/distill/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module> main() File "/home/venv/distill/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main cmd=cmd) ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The error comes because of the inconsistency of variable name as `n_gpu` in the trainer.py script but `gpus` in parsing. It can easily be solved changing `gpus` when parsing to `n_gpu`.
04-26-2021 06:50:07
04-26-2021 06:50:07
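Editor's note: a minimal sketch of the fix described in this issue and in the related PR above - keeping the argparse destination aligned with the attribute the training utilities read (`n_gpu`); the flag name shown here is illustrative.
```python
import argparse

parser = argparse.ArgumentParser()
# utils.init_gpu_params() reads params.n_gpu, so the CLI option must populate
# that exact attribute; renaming the flag (or using dest="n_gpu") avoids the
# AttributeError reported above.
parser.add_argument("--n_gpu", type=int, default=1, help="Number of GPUs to use per node.")
params = parser.parse_args(["--n_gpu", "4"])
assert params.n_gpu == 4
```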
transformers
11,440
closed
Feedback whilst resuming
# 🚀 Feature request Add some sort of progress bar during resumption of training from a checkpoint ## Motivation I resumed training from a checkpoint mid-way through a 100-epoch training run. The progress bar sat on zero for quite some time, and I could see CPU activity but no GPUs were active. Eventually the progress bar jumped to 50% and training resumed - I assume it's doing some sort of initialisation. It would be nice if there were some indication that progress is being made, as it's not obvious what's occurring.
04-26-2021 06:39:54
04-26-2021 06:39:54
This has been fixed by #11324. If you use a source install, you will be able to use (or in this case see) this feature :-)
transformers
11,439
closed
[BigBird] enable BigBirdForQuestionAnswering to return pooler output
# What does this PR do? This PR will enable `BigBirdForQuestionAnswering` to return pooler output. This can be useful for the tasks involving predicting category along with answer eg: [Natural Questions dataset](https://huggingface.co/datasets/natural_questions) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-26-2021 05:15:08
04-26-2021 05:15:08
transformers
11,438
closed
[docs] fix invalid class name
This PR fixes the misnamed `TrainerArgument`. The CI failures are unrelated - this can be safely merged. @sgugger
04-26-2021 05:00:26
04-26-2021 05:00:26
transformers
11,437
closed
[Makefile] make sure to test against the local checkout
Currently some scripts in `Makefile` run against the pre-installed `transformers` rather than the checkout it's supposed to test. This PR fixes that by setting ` PYTHONPATH="src"`. I had to fix that as I was getting at the end of `make fixup`: ``` python utils/check_repo.py 2021-04-25 21:54:53.850434: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 Checking all models are properly tested. Traceback (most recent call last): File "utils/check_repo.py", line 481, in <module> check_repo_quality() File "utils/check_repo.py", line 473, in check_repo_quality check_all_models_are_tested() File "utils/check_repo.py", line 233, in check_all_models_are_tested modules = get_model_modules() File "utils/check_repo.py", line 147, in get_model_modules modeling_module = getattr(model_module, submodule) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/file_utils.py", line 1666, in __getattr__ value = self._get_module(name) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/albert/__init__.py", line 120, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/albert/modeling_tf_albert.py", line 43, in <module> from ...modeling_tf_utils import ( File "src/transformers/modeling_tf_utils.py", line 32, in <module> from .file_utils import ( ImportError: cannot import name 'PushToHubMixin' from 'transformers.file_utils' (/mnt/nvme1/code/huggingface/transformers-master/src/transformers/file_utils.py) ``` The errors are from the pre-installed `transformers` and not the clone I'm working on. The CI failures are unrelated - this can be safely merged. @sgugger
04-26-2021 04:59:56
04-26-2021 04:59:56
transformers
11,436
closed
Gradient explosion problem
I am fine-tuning BERT with apex enabled, and at the end I get the error "No module named 'amp_C'", and the gradients also explode. What is the cause, and how can I fix it? The error is as follows: Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods. Defaults for this optimization level are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic Processing user overrides (additional kwargs that are not None)... After processing overrides, optimization options are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",) Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
04-26-2021 00:53:48
04-26-2021 00:53:48
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
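Editor's note: the warning in the log above means apex was built without its C++/CUDA extensions, and a handful of "Gradient overflow. Skipping step" messages at the start of mixed-precision training is expected behaviour of dynamic loss scaling rather than a sign of divergence. A small sketch for checking whether the fused kernels are present (`amp_C` is the extension module named in the log):
```python
# If this import fails, apex was installed without the --cpp_ext/--cuda_ext
# build options and falls back to a slower pure-Python unscale path; rebuilding
# apex with those options restores the fused multi_tensor_applier kernels.
try:
    import amp_C  # noqa: F401
    print("apex C++/CUDA extensions are available")
except ImportError:
    print("apex C++/CUDA extensions are missing")
```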
transformers
11,435
closed
convert gpt2 from tensorflow to pytorch
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` = 4.5.1 - PyTorch version (GPU?) = = 1.8.1+cu101 command : !python3 /content/transformers/src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py \ --gpt2_checkpoint_path=/content/drive/MyDrive/tensorflowCheckpoints/model.ckpt-50000 \ --pytorch_dump_folder_path=/content/drive/MyDrive/convertpyorch/torch_model-500gpt2.bin \ --gpt2_config_file=/content/drive/MyDrive/tensorflowCheckpoints/config2.json Error : Traceback (most recent call last): File "/content/transformers/src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 68, in <module> convert_gpt2_checkpoint_to_pytorch(args.gpt2_checkpoint_path, args.gpt2_config_file, args.pytorch_dump_folder_path) File "/content/transformers/src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 39, in convert_gpt2_checkpoint_to_pytorch load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path) File "/usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 109, in load_tf_weights_in_gpt2 pointer = getattr(pointer, scope_names[0]) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 948, in __getattr__ type(self).__name__, name)) AttributeError: 'GPT2Model' object has no attribute '_step'
04-25-2021 23:04:46
04-25-2021 23:04:46
Hi @7AM7 I think this is because there is a `_step` variable in the TF checkpoint, which should be ignored when loading the weights. For this you should write your own conversion script. You could take and modify this function https://github.com/huggingface/transformers/blob/30f065890e77f2917895b175b9a1df503b89e202/src/transformers/models/gpt2/modeling_gpt2.py#L68 adding a check like this would solve it ```python for name, shape in init_vars: if "_step" not in name: ``` <|||||>I modified the function `load_tf_weights_in_gpt2` and ignored "_step", like `if name != "_step"` or `if "_step" not in name`, and I still get the same error. <|||||>In that case, you could check what extra variables are there in the `names` and then remove those from `names` and `arrays`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
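Editor's note: a self-contained sketch of the filtering idea from this thread, assuming TensorFlow is installed; which extra variable names need skipping depends on the specific checkpoint, and this is not the official conversion code.
```python
import tensorflow as tf


def load_filtered_tf_variables(tf_checkpoint_path):
    """Collect (name, array) pairs from a TF checkpoint while skipping
    bookkeeping variables such as `_step` that have no counterpart in the
    PyTorch GPT-2 model (adapted from the load_tf_weights_in_gpt2 pattern)."""
    names, arrays = [], []
    for name, shape in tf.train.list_variables(tf_checkpoint_path):
        if "_step" in name:
            continue  # skip step counters / other variables without a PyTorch counterpart
        array = tf.train.load_variable(tf_checkpoint_path, name)
        names.append(name)
        arrays.append(array.squeeze())
    return names, arrays
```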
transformers
11,434
closed
Updating checkpoint for GPT2ForSequenceClassification #11334
# What does this PR do? This PR fixes the checkpoint for GPT2ForSequenceClassification. It sets it from `microsoft/dialogrpt` to `microsoft/DialogRPT-updown` Fixes # (issue) The identifier `microsoft/dialogrpt` is incorrect. When used, the weights of the linear layer at top are differently initialized at each execution, which gives different prediction results for same inputs. The checkpoint `microsoft/DialogRPT-updown` fixes that issue since it offers a pretrained classification head. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? Yes - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Yes [https://github.com/huggingface/transformers/issues/11334](url) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? No ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Hello @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-25-2021 21:26:24
04-25-2021 21:26:24
transformers
11,433
closed
tensorflow version is not able to pick the trained model from local directory in an air gapped system
Hi, I have trained the TFBertForSequenceClassification model and I have to deploy the trained model on an air-gapped server. Code: from transformers import BertTokenizer, TFBertForSequenceClassification from transformers import InputExample, InputFeatures model1=TFBertForSequenceClassification.from_pretrained(local_path) tokenizer1=BertTokenizer.from_pretrained(local_path) ImportError: cannot import name 'TFBertForSequenceClassification' from 'transformers' (unknown location) The same code works if I am using the PyTorch version (BertForSequenceClassification).
04-25-2021 18:09:07
04-25-2021 18:09:07
Did you install TensorFlow in your environment? You might need a more recent TensorFlow version if so.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
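Editor's note: a quick sanity check that can be run on the air-gapped server to confirm whether the TensorFlow side of the library is usable at all.
```python
import transformers

# The TF* model classes only become importable when a compatible TensorFlow
# build is installed in the same environment as transformers.
print("transformers version:", transformers.__version__)
print("TensorFlow available:", transformers.is_tf_available())

if transformers.is_tf_available():
    from transformers import TFBertForSequenceClassification  # noqa: F401
    print("TFBertForSequenceClassification import OK")
```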
transformers
11,432
closed
Typo fixes
# What does this PR do? Fix some typos in docs, comments, logging/errors ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
04-25-2021 17:41:07
04-25-2021 17:41:07
transformers
11,431
closed
Accepts BatchEncoding in LengthGroupedSampler
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Expands `LengthGroupedSampler` to accept `BatchEncoding`-based `Dataset` by auto inference of lengths of them as well as `dict`-based `Dataset`. Because `BatchEncoding` can be seen as a special type of dictionary in Python, it is useful to be accepted by `LengthGroupedSampler` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-25-2021 13:33:02
04-25-2021 13:33:02
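For readers skimming this PR record, a rough sketch of the idea follows. This is not the actual diff; the helper name and the `"input_ids"` key are illustrative assumptions, but it shows how example lengths could be inferred uniformly from `dict`- or `BatchEncoding`-backed datasets before grouping by length.

```python
# Hypothetical sketch of length inference for a length-grouped sampler.
from transformers import BatchEncoding

def infer_lengths(dataset, length_key="input_ids"):
    lengths = []
    for example in dataset:
        # BatchEncoding behaves like a dict, so both cases share one branch.
        if isinstance(example, (dict, BatchEncoding)):
            lengths.append(len(example[length_key]))
        else:
            lengths.append(len(example))
    return lengths
```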
transformers
11,430
closed
Fix `sp_model_kwargs` param missing at unpickle in `XLMRobertaTokenizer`
fix for #11429
04-25-2021 13:14:05
04-25-2021 13:14:05
This PR is ready for merging from my point of view.
transformers
11,429
closed
`sp_model_kwargs` param missing at unpickle in `XLMRobertaTokenizer`
When `XLMRobertaTokenizer` is unpickled, `sp_model_kwargs` is not set. See: https://github.com/huggingface/transformers/blob/35cd8eed887891bee60194a95adc35b884f68f55/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L178 PS: I will provide a fix.
04-25-2021 13:08:10
04-25-2021 13:08:10
fix pr at #11430<|||||>Closed by #11430, thanks @PhilipMay!
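For context, the pickling pattern at stake looks roughly like the sketch below. The class and defaults here are made up and this is not the exact upstream fix; it only illustrates the `__setstate__` idea of restoring `sp_model_kwargs` with a fallback before the SentencePiece model is reloaded.

```python
# Illustrative SentencePiece-backed tokenizer that survives pickling.
import sentencepiece as spm

class SpTokenizerSketch:
    def __init__(self, vocab_file, sp_model_kwargs=None):
        self.vocab_file = vocab_file
        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)

    def __getstate__(self):
        state = self.__dict__.copy()
        state["sp_model"] = None  # the underlying C++ object is not picklable
        return state

    def __setstate__(self, d):
        self.__dict__ = d
        # Older pickles may not carry sp_model_kwargs, so restore a default.
        if not hasattr(self, "sp_model_kwargs") or self.sp_model_kwargs is None:
            self.sp_model_kwargs = {}
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(self.vocab_file)
```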
transformers
11,428
closed
RoBERTa: ValueError: The two structures don't have the same sequence length. Input structure has length 5, while shallow structure has length 4.
## Environment info `transformers` version: 4.5.1 Platform: Google Colab Python version: 3.7 PyTorch version (GPU?): NA Tensorflow version (GPU?): 2.4.1 Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ### Who can help: @LysandreJik for Roberta issue @sgugger for trainer (tf_trainer) issue Models:Roberta: @LysandreJik Library: /transformers/trainer_tf.py: @sgugger ## Information Model I am using RoBERTa (roberta-base) for SQuAD 2.0 Question Answering task fine tuning exercise: The tasks I am working on is: I am using functions from official transformer scripts only with an official SQUaD 2.0 dataset task: (give the name) [RoBERTa_transformer_fine_tune_pre_trained_model_tftpu.zip](https://github.com/huggingface/transformers/files/6371875/RoBERTa_transformer_fine_tune_pre_trained_model_tftpu.zip) ## To reproduce Steps to reproduce the behavior: Run the attached script shared with GPU on Google Colab and it will give error when it tries to run trainer.train() with SQuAD 2.0 TF dataset. Error message: InvalidArgumentError: TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was ({'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {'start_positions': tf.int64, 'end_positions': tf.int64, 'cls_index': tf.int64, 'p_mask': tf.int32, 'is_impossible': tf.int32}), but the yielded element was ({'input_ids': [0, 1779, 222, 12674, 1755, 386, 1959, 1406, 116, 2, 2, 12674, 12695, 272, 354, 6591, 10690, 1634, 12, 43732, 48229, 5605, 43621, 16948, 49066, 267, 35423, 10659, 282, 1090, 35423, 10278, 73, 19417, 12, 975, 2191, 12, 28357, 43, 36, 5400, 772, 204, 6, 14130, 43, 16, 41, 470, 3250, 6, 2214, 9408, 6, 638, 3436, 8, 3390, 4, 8912, 8, 1179, 11, 2499, 6, 1184, 6, 79, 3744, 11, 1337, 6970, 8, 7950, 9150, 25, 10, 920, 6, 8, 1458, 7, 9444, 11, 5, 628, 4525, 29, 25, 483, 3250, 9, 248, 947, 387, 1816, 12, 13839, 23313, 18, 7442, 4, 1554, 4628, 30, 69, 1150, 6, 4101, 16152, 10690, 1634, 6, 5, 333, 1059, 65, 9, 5, 232, 18, 275, 12, 11393, 1816, 1134, 9, 70, 86, 4, 2667, 25224, 794, 5, 800, 9, 12674, 12695, 18, 2453, 2642, 6, 34880, 9412, 11, 3437, 36, 35153, 238, 61, 2885, 69, 25, 10, 5540, 3025, 3612, 6, 2208, 292, 12727, 4229, 8, 3520, 5, 18919, 6003, 727, 346, 12, 1264, 7695, 22, 347, 36616, 11, 3437, 113, 8, 22, 30047, 5637, 845, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 897, in generator_py_func flattened_values = nest.flatten_up_to(output_types, values) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 396, in flatten_up_to assert_shallow_structure(shallow_tree, input_tree) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 324, in assert_shallow_structure check_types=check_types) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 311, in assert_shallow_structure % (len(input_tree), len(shallow_tree))) ValueError: The two structures don't have the same sequence length. Input structure has length 5, while shallow structure has length 4. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/script_ops.py", line 249, in __call__ ret = func(*args) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py", line 620, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 905, in generator_py_func sys.exc_info()[2]) File "/usr/local/lib/python3.7/dist-packages/six.py", line 702, in reraise raise value.with_traceback(tb) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 897, in generator_py_func flattened_values = nest.flatten_up_to(output_types, values) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 396, in flatten_up_to assert_shallow_structure(shallow_tree, input_tree) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 324, in assert_shallow_structure check_types=check_types) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 311, in assert_shallow_structure % (len(input_tree), len(shallow_tree))) TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was ({'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {'start_positions': tf.int64, 'end_positions': tf.int64, 'cls_index': tf.int64, 'p_mask': tf.int32, 'is_impossible': tf.int32}), but the yielded element was ({'input_ids': [0, 1779, 222, 12674, 1755, 386, 1959, 1406, 116, 2, 2, 12674, 12695, 272, 354, 6591, 10690, 1634, 12, 43732, 48229, 5605, 43621, 16948, 49066, 267, 35423, 10659, 282, 1090, 35423, 10278, 73, 19417, 12, 975, 2191, 12, 28357, 43, 36, 5400, 772, 204, 6, 14130, 43, 16, 41, 470, 3250, 6, 2214, 9408, 6, 638, 3436, 8, 3390, 4, 8912, 8, 1179, 11, 2499, 6, 1184, 6, 79, 3744, 11, 1337, 6970, 8, 7950, 9150, 25, 10, 920, 6, 8, 1458, 7, 9444, 11, 5, 628, 4525, 29, 25, 483, 3250, 9, 248, 947, 387, 1816, 12, 13839, 23313, 18, 7442, 4, 1554, 4628, 30, 69, 1150, 6, 4101, 16152, 10690, 1634, 6, 5, 333, 1059, 65, 9, 5, 232, 18, 275, 12, 11393, 1816, 1134, 9, 70, 86, 4, 2667, 25224, 794, 5, 800, 9, 12674, 12695, 18, 2453, 2642, 6, 34880, 9412, 11, 3437, 36, 35153, 238, 61, 2885, 69, 25, 10, 5540, 3025, 3612, 6, 2208, 292, 12727, 4229, 8, 3520, 5, 18919, 6003, 727, 346, 12, 1264, 7695, 22, 347, 36616, 11, 3437, 113, 8, 22, 30047, 5637, 845, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## Expected behavior * 'squad_convert_examples_to_features' function should have taken care of what all features to be passed to the model based on pre-trained tokenizer defined. * If not then what should be steps to remove/add features required for albert model training/fine-tuning tasks should be documented somewhere [As per my knowledge it is not available anywhere]
04-25-2021 12:06:10
04-25-2021 12:06:10
Can someone look into it? I am passing the same data structure as mentioned in the above error to trainer.train(). PFB. <_AssertCardinalityDataset shapes: ({input_ids: (None,), attention_mask: (None,), feature_index: (), qas_id: ()}, {start_positions: (), end_positions: (), cls_index: (), p_mask: (None,), is_impossible: ()}), types: ({input_ids: tf.int32, attention_mask: tf.int32, feature_index: tf.int64, qas_id: tf.string}, {start_positions: tf.int64, end_positions: tf.int64, cls_index: tf.int64, p_mask: tf.int32, is_impossible: tf.int32})><|||||>Hi @PremalMatalia, thank you for opening an issue. Pinging @Rocketknight1 as the TensorFlow developer. Please be aware that we're in the process of deprecating the `TFTrainer` and that we will not be maintaining it anymore as it doesn't offer features that cannot be handled by Keras directly. We're in the process of moving examples to Keras, and @Rocketknight1 has already started with the text classification example [here](https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_text_classification.py). QA is sure to follow.<|||||>Thanks @LysandreJik for your response. Good to know that TFTrainer is being deprecated so I can focus on some other ways to fine-tune. Any other references to follow at this moment for question answering on SQuAD? <|||||>Hi! I'll take a look, but the error is quite convoluted. Can you link me to any examples you're following for this? If our code examples aren't working we definitely want to fix that.<|||||>@Rocketknight1 ...I was following the run_tf_squad.py file below for fine-tuning. https://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/examples/question-answering/run_tf_squad.py <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
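Since the thread was never resolved before TFTrainer was deprecated, here is one purely illustrative way to sidestep the structure mismatch: rebuild the `tf.data` dataset with only the keys a RoBERTa QA head and its labels actually use, so the declared signature matches what the generator yields. The feature attribute names follow `SquadFeatures`, and `output_signature` needs TF >= 2.4; treat the whole snippet as an assumption-laden sketch rather than a supported recipe.

```python
# Sketch: build a tf.data dataset that only exposes RoBERTa-compatible keys.
import tensorflow as tf

def build_roberta_qa_dataset(features):
    # `features` is assumed to be a list of SquadFeatures objects.
    def gen():
        for f in features:
            yield (
                {"input_ids": f.input_ids, "attention_mask": f.attention_mask},
                {"start_positions": f.start_position, "end_positions": f.end_position},
            )

    return tf.data.Dataset.from_generator(
        gen,
        output_signature=(
            {
                "input_ids": tf.TensorSpec(shape=(None,), dtype=tf.int32),
                "attention_mask": tf.TensorSpec(shape=(None,), dtype=tf.int32),
            },
            {
                "start_positions": tf.TensorSpec(shape=(), dtype=tf.int64),
                "end_positions": tf.TensorSpec(shape=(), dtype=tf.int64),
            },
        ),
    )
```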
transformers
11,427
closed
Fix link to the TPU launcher script in the pytorch examples
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Hi @sgugger, @patil-suraj, The link to the TPU launcher script in the pytorch examples is broken. Thanks ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-25-2021 11:12:50
04-25-2021 11:12:50
transformers
11,426
closed
[Flax] Add Electra models
# What does this PR do?
- Implement the Flax version of the Electra model and classes for different downstream tasks:
- `FlaxElectraModel`
- `FlaxElectraForMaskedLM`
- `FlaxElectraForPreTraining`
- `FlaxElectraForMultipleChoice`
- `FlaxElectraForQuestionAnswering`
- `FlaxElectraForSequenceClassification`
- `FlaxElectraForTokenClassification`
Most of the code is taken from the FlaxBert code and the PyTorch Electra code, and some is adapted from the original PR by @chris-tng (credit where it's due since he started this in #9172). Running the tests (including the slow ones) works, and I have already tested it on my downstream task and it seems to be working. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Tagging @patrickvonplaten @sgugger @chris-tng, feel free to tag other people
04-25-2021 10:48:13
04-25-2021 10:48:13
I think I addressed most comments (the only thing missing is removing the `from_pt` when the flax checkpoint gets uploaded)! Thanks for the swift feedback<|||||>Hey @CoderPat, We've merged a last design extension for the Flax design today, [here](https://github.com/huggingface/transformers/commit/f748bd424213ca8e76e6ad9ffe2beece2ff2655e) -> could you merge master into your PR one last time and adapt the code to add those extensions (example docstring + model outputs + all_attentions + all_hidden_states) - super sorry for the merge conflict again, but this will be the last one!
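Once the classes described above are available, usage presumably mirrors the other Flax models; the snippet below is only a guess at what that looks like, and the checkpoint name plus `from_pt=True` are assumptions tied to the discussion about native Flax weights not being uploaded yet.

```python
# Tentative usage example for the Flax Electra classes added in this PR.
from transformers import ElectraTokenizerFast, FlaxElectraModel

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraModel.from_pretrained("google/electra-small-discriminator", from_pt=True)

inputs = tokenizer("Flax Electra says hello", return_tensors="np")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```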
transformers
11,425
closed
ALBERT: The following keyword arguments are not supported by this model: ['cls_index', 'p_mask', 'is_impossible'].
## Environment info - `transformers` version: 4.5.1 - Platform: Google Colab - Python version: 3.7 - PyTorch version (GPU?): NA - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help: @LysandreJik for albert issue @sgugger for trainer (tf_trainer) issue Models:/transformers/models/albert/modeling_tf_albert.py: @LysandreJik Library: /transformers/trainer_tf.py: @sgugger ## Information Model I am using albert (albert-base-v2) for SQuAD 2.0 Question Answering task fine tuning exercise: The tasks I am working on is: - I am using functions from official transformer scripts only with an official SQUaD 2.0 dataset task: (give the name) https://colab.research.google.com/drive/13_ZEQJa_SNMTUh1OkOL1UfkWwObpGY2i?usp=sharing [transformer_fine_tune_pre_trained_model_tftpu.zip](https://github.com/huggingface/transformers/files/6371487/transformer_fine_tune_pre_trained_model_tftpu.zip) ## To reproduce Steps to reproduce the behavior: 1. Run the above script shared with GPU on Google Colab and it will give error when it tries to run trainer.train() with SQuAD 2.0 TF dataset. ## Error message: ValueError: in user code: /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:697 distributed_training_steps * self.args.strategy.run(self.apply_gradients, inputs) /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:639 apply_gradients * gradients = self.training_step(features, labels, nb_instances_in_global_batch) /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:622 training_step * per_example_loss, _ = self.run_model(features, labels, True) /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:742 run_model * outputs = self.model(features, training=training, **labels)[:2] /usr/local/lib/python3.7/dist-packages/transformers/models/albert/modeling_tf_albert.py:1341 call * inputs = input_processing( /usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py:351 input_processing * raise ValueError( ValueError: The following keyword arguments are not supported by this model: ['cls_index', 'p_mask', 'is_impossible']. ## Expected behavior - 'squad_convert_examples_to_features' function should have taken care of what all features to be passed to the model based on pre-trained tokenizer defined. - If not then what should be steps to remove/add features required for albert model training/fine-tuning tasks should be documented somewhere [As per my knowledge it is not available anywhere]
04-25-2021 08:09:51
04-25-2021 08:09:51
The function prepares the dataset for the standard models as well the XLNet model that requires more arguments. You will need to drop those columns for an Albert models. We are in the process of reworking the TensorFlow examples, so there should be one clearer example for QA soon!<|||||>Thanks @sgugger ... I tried to modify `squad_convert_examples_to_features` as in attached file to return without these columns but encountered anther error as below: [transformer_fine_tune_pre_trained_model_tftpu (1).zip](https://github.com/huggingface/transformers/files/6379976/transformer_fine_tune_pre_trained_model_tftpu.1.zip) ##ERROR: ValueError Traceback (most recent call last) <ipython-input-59-d85abec8ae26> in <module>() 1 # Training 2 if training_args.do_train: ----> 3 trainer.train() 4 trainer.save_model() 5 tokenizer.save_pretrained(training_args.output_dir) 10 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 975 except Exception as e: # pylint:disable=broad-except 976 if hasattr(e, "ag_error_metadata"): --> 977 raise e.ag_error_metadata.to_exception(e) 978 else: 979 raise ValueError: in user code: /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:697 distributed_training_steps * self.args.strategy.run(self.apply_gradients, inputs) /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:641 apply_gradients * self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables))) /usr/local/lib/python3.7/dist-packages/transformers/optimization_tf.py:232 apply_gradients * return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:604 apply_gradients ** self._create_all_weights(var_list) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:783 _create_all_weights self._create_slots(var_list) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/adam.py:127 _create_slots self.add_slot(var, 'm') /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:844 add_slot .format(strategy, var)) ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7f12026a45d0>), which is different from the scope used for the original variable (<tf.Variable 'tf_albert_for_question_answering/albert/embeddings/word_embeddings/weight:0' shape=(30000, 128) dtype=float32, numpy= array([[ 0.01270407, 0.05987824, -0.06812993, ..., -0.01226719, 0.00817283, 0.00785217], [ 0.00733008, -0.00101211, -0.01069043, ..., -0.00968418, -0.0400394 , -0.04233308], [-0.02059615, 0.007892 , 0.02363562, ..., 0.01533034, -0.00429517, -0.01246009], ..., [ 0.0135935 , 0.00349383, 0.01223597, ..., -0.05456466, 0.09235671, -0.05717891], [-0.00492554, -0.05208753, -0.00323149, ..., 0.03003517, 0.0196551 , 0.06015572], [ 0.03892251, -0.024089 , -0.01364627, ..., 0.04010094, 0.05124779, -0.03588157]], dtype=float32)>). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope<|||||>Can someone please help..? I am stuck here<|||||>I have managed to remove extra tokens from the dataset but then TFTrainer.train() got stuck in infinite loop without any errors or logs. Below is the link with latest code. Please suggest. 
https://colab.research.google.com/drive/17Rx2rkiqag6YAz_FnU9HyHYtqHJgpNs0?usp=sharing<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
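As a reference for the column-dropping workaround suggested above, something along these lines would remove the XLNet-only label keys before training an ALBERT QA model. The helper name is made up, and `train_dataset` is assumed to be the `tf.data` dataset returned by `squad_convert_examples_to_features(..., return_dataset="tf")`.

```python
# Sketch: drop the XLNet-specific label keys before feeding an ALBERT QA model.
XLNET_ONLY_KEYS = ("cls_index", "p_mask", "is_impossible")

def strip_xlnet_keys(features, labels):
    labels = {k: v for k, v in labels.items() if k not in XLNET_ONLY_KEYS}
    return features, labels

# Assumed usage on the tf.data dataset built by squad_convert_examples_to_features:
# train_dataset = train_dataset.map(strip_xlnet_keys)
```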
transformers
11,424
closed
Simple questions about EncoderDecoderModel
First, thank you for the great work! 1. Does that tie function work for sharing the pretrained weights of the encoder's embedding with the decoder's embedding weights? https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L185 If I want to use different tokenizers for the encoder and decoder inputs, does the tie function have a way to skip sharing the embeddings, such as a strict option?
04-25-2021 08:09:45
04-25-2021 08:09:45
Hi @qute012 The `tie_weights` method ties all the weights of the encoder and decoder, including the embeddings. For this to work, the encoder and decoder need to be the same model (same class), i.e. either BERT2BERT or ROBERTA2ROBERTA, and of the same size. > If I want to use different tokenizers for the encoder and decoder inputs, does the tie function have a way to skip sharing the embeddings, such as a strict option? No, it does not skip sharing the embeddings in that case because, as I wrote above, it expects both encoder and decoder to be the same model and so implicitly assumes that the tokenizer will also be the same. But if that's what you want to do, you could manually untie the embeddings or just re-initialize both of them so they won't be shared/tied.<|||||>Thanks for the reply @patil-suraj. For example, is it right that the encoder's embedding weights can be adjusted by the decoder's input? Then, if I want to untie them, should I remove or comment out the code below manually? https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L183 Do you have any plan to add a parameter for choosing whether to tie in the EncoderDecoderModel class? I know bert2bert gives better performance than randomly initialized decoder embedding weights, but it requires extending the class for new experiments when the encoder and decoder use different vocabularies. If that's okay with you, I will open a PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
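To make the two setups discussed above concrete, a warm-starting sketch could look like the snippet below; whether weight tying applies depends on using the same model class and vocabulary on both sides, and the checkpoint names here are just placeholders.

```python
# Sketch of shared vs. independent encoder/decoder initialization.
from transformers import EncoderDecoderModel

# Tied/shared parameters (BERT2BERT-style, same class and vocabulary on both sides):
shared = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased", tie_encoder_decoder=True
)

# Separate parameters, e.g. when encoder and decoder should keep different embeddings:
untied = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
```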
transformers
11,423
closed
IBert: What would be the possible reason `IntLayerNorm` does not decrease the loss?
### Who can help @kssteven418 ## Information IBert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## Problem Hello. I'm trying to use `IntLayerNorm` in my model. The model without the layer trains properly, but if I add the layer as follows, the model suddenly stops learning (the loss does not change). What could be the reason?
```python
def forward(self, x, k: int = None):
    x, scaling_factor = self.pre_linear_act(x)  # QuantAct
    x, scaling_factor = self.linear(x, scaling_factor)  # QuantLinear
    x, scaling_factor = self.post_linear_act(x, scaling_factor)  # QuantAct
    # normalize
    if self.normalize:
        x, scaling_factor = self.layer_norm(x, scaling_factor)  # IntLayerNorm
        x, scaling_factor = self.post_layernorm_act(x, scaling_factor)  # QuantAct
```
## Update I observed that the output tensor of `self.layer_norm` is all zeros.
04-25-2021 05:01:59
04-25-2021 05:01:59
It is likely that the IntLayerNorm layers are not warmed up. IntLayerNorm layer has to adjust its internal parameter (`self.shift`) during the quantization-aware training process. It is the one that prevents overflow (i.e. keeps the internal activation values to be less than 2**32) and is initialized with zero. Here is the relevant part in the code: https://github.com/huggingface/transformers/blob/master/src/transformers/models/ibert/quant_modules.py#L508 Therefore, if you skip the quantization-aware training process and immediately use the model for inference, those layers may produce some unexpected outcomes. Could this be your case?<|||||>Yeah I'm not using quantization-aware training so that'll be the reason. Thanks for the answer!
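For anyone hitting the same all-zeros output, the resolution above boils down to running some quantization-aware training before relying on the quantized layers. A generic warm-up loop might look like the sketch below, where `model`, `loader`, and `loss_fn` stand in for the user's own objects.

```python
# Generic QAT warm-up sketch so layers such as IntLayerNorm can calibrate.
import torch

def qat_warm_up(model, loader, loss_fn, steps=100, lr=1e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for step, (inputs, targets) in enumerate(loader):
        if step >= steps:
            break
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
    return model
```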
transformers
11,422
closed
Transformers 4.1.1 & Tensorflow 2.0, AttributeError: module 'tensorflow_core.keras.activations' has no attribute 'swish'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Mac - Python version: 3.6 - PyTorch version (GPU?): No - Tensorflow version (GPU?): 2.0.0 No GPU - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information @Rocketknight1 @sgugger I had to use `tensorflow2.0` for some reasons. I checked all released transformers versions and found that `before 4.1.1`, tensorflow>=2.0 is required, and after 4.1.1, tensorflow>=2.3 (in `setup.py`), so I installed `4.1.1`. When I run
```python
from transformers import AutoTokenizer, AutoModel
# ... more code
```
I get
```
AttributeError: module 'tensorflow_core.keras.activations' has no attribute 'swish'
```
So I checked the `keras` documentation of tf2.0 (https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/activations), `there is indeed no swish function`, but it does exist in `transformers/activations_tf.py` (lines 64~74) in transformers `4.1.1`:
```python
ACT2FN = {
    "gelu": tf.keras.layers.Activation(gelu),
    "relu": tf.keras.activations.relu,
    "swish": tf.keras.activations.swish,
    "silu": tf.keras.activations.swish,
    "gelu_new": tf.keras.layers.Activation(gelu_new),
    "mish": tf.keras.layers.Activation(mish),
    "tanh": tf.keras.activations.tanh,
    "gelu_fast": tf.keras.layers.Activation(gelu_fast),
}
```
## Expected behavior When I modify `activations_tf.py` as below, it seems to be OK...
```python
ACT2FN = {
    "gelu": tf.keras.layers.Activation(gelu),
    "relu": tf.keras.activations.relu,
    # "swish": tf.keras.activations.swish,
    # "silu": tf.keras.activations.swish,
    "gelu_new": tf.keras.layers.Activation(gelu_new),
    "mish": tf.keras.layers.Activation(mish),
    "tanh": tf.keras.activations.tanh,
    "gelu_fast": tf.keras.layers.Activation(gelu_fast),
}
```
or define `swish` and `silu` in `activations_tf.py` like
```python
def swish():
    xxxx

def silu():
    xxxx

ACT2FN = {
    "gelu": tf.keras.layers.Activation(gelu),
    "relu": tf.keras.activations.relu,
    "swish": tf.keras.layers.Activation(swish),
    "silu": tf.keras.layers.Activation(swish),
    "gelu_new": tf.keras.layers.Activation(gelu_new),
    "mish": tf.keras.layers.Activation(mish),
    "tanh": tf.keras.activations.tanh,
    "gelu_fast": tf.keras.layers.Activation(gelu_fast),
}
```
I don’t know if there is such a bug. @Rocketknight1 @sgugger. <!-- A clear and concise description of what you would expect to happen. -->
04-25-2021 03:56:29
04-25-2021 03:56:29
The setup was updated super late: we already required tensorflow >= 2.3 for a while when we finally went to it. I don't know which version of Transformers supports tensorflow 2.0 but I would guess it's 3.0 or even below.<|||||>Okay, I hope the documentation can be clearer. Every time I read `README.md`, it says `This repository is tested on Python 3.6+, PyTorch 1.0.0+ (PyTorch 1.3.1+ for examples) and TensorFlow 2.0` or similar words, and I really think it means the specific version 2.0 of tensorflow. However, it does not (tensorflow>=2.3 in `setup.py`). It would be better if the version information were clearer, I guess. Hope `transformers` gets better and better.<|||||>Yes, this part of the README has not been updated in a while (the PyTorch version is also wrong). Will adjust!<|||||>> Yes, this part of the README has not been updated in a while (the PyTorch version is also wrong). Will adjust! Thanks 🤗.
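If staying on TensorFlow 2.0 is unavoidable, the manual definition the issue author sketches could be written as below; `swish(x) = x * sigmoid(x)` is the standard formula, and the patch dict is only an illustration of the idea, not an officially supported workaround.

```python
# Hand-rolled swish for TF versions that lack tf.keras.activations.swish.
import tensorflow as tf

def swish(x):
    return x * tf.math.sigmoid(x)

ACT2FN_PATCH = {
    "swish": tf.keras.layers.Activation(swish),
    "silu": tf.keras.layers.Activation(swish),
}

print(swish(tf.constant([1.0, -1.0])))
```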
transformers
11,421
closed
Race condition when using --save_total_limit, --load_best_model_at_end and deepspeed zero2+cpu_offload
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes, AWS p4d.24xlarge - Using distributed or parallel set-up in script?: yes, deepspeed ### Who can help Library: - deepspeed: @stas00 - trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): roberta-large The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce I'm fine-tuning using run_mlm.py. A race condition seems to exist when: 1. you limit the number of checkpoints with `--save_total_limit` 2. you enable `--load_best_model_at_end --metric_for_best_model eval_loss` 3. you use multigpu training with deepspeed zero2 + cpu_offload 4. when the best model happens to be at the head of the list returned by Trainer._sorted_checkpoints() This corner case happens because the checkpoint being deleted is the most recent one due to the swapping logic in `Trainer._sorted_checkpoints()` at https://github.com/huggingface/transformers/blob/bf2e0cf70b68e0d46cdf15a4ece1f5c0a03de084/src/transformers/trainer.py#L1818-L1821 When (by chance) the `best_model_index == 0`, the swapping logic will cause the most recent checkpoint to go to the head of the list. When `Trainer._rotate_checkpoints()` is then called, it starts deleting from the head and consequently deletes the most recent checkpoint. (Aside: this is actually probably another bug in itself -- you would never be able to resume training from the most recent checkpoint.) 
However, at this point, deepspeed has not finished writing its own global_checkpoint to the current checkpoint directory, causing the following error to be thrown: ``` INFO|trainer.py:1648] 2021-04-25 00:08:06,377 >> Saving model checkpoint to /mnt/experiments/roberta-large-mlm/checkpoint-23000 [INFO|configuration_utils.py:329] 2021-04-25 00:08:06,378 >> Configuration saved in /mnt/experiments/roberta-large-mlm/checkpoint-23000/config.json [INFO|modeling_utils.py:831] 2021-04-25 00:08:09,054 >> Model weights saved in /mnt/experiments/roberta-large-mlm/checkpoint-23000/pytorch_model.bin [INFO|tokenization_utils_base.py:1901] 2021-04-25 00:08:09,055 >> tokenizer config file saved in /mnt/experiments/roberta-large-mlm/checkpoint-23000/tokenizer_config.json [INFO|tokenization_utils_base.py:1907] 2021-04-25 00:08:09,055 >> Special tokens file saved in /mnt/experiments/roberta-large-mlm/checkpoint-23000/special_tokens_map.json [2021-04-25 00:08:09,211] [INFO] [logging.py:60:log_dist] [Rank 0] Saving model checkpoint: /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/mp_rank_00_model_states.pt [2021-04-25 00:08:13,004] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,004] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_0_mp_rank_00_optim_states.pt [INFO|trainer.py:1715] 2021-04-25 00:08:13,012 >> Deleting older checkpoint [/mnt/experiments/roberta-large-mlm/checkpoint-23000] due to args.save_total_limit [2021-04-25 00:08:13,015] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,016] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_5_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,035] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,036] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_4_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,148] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,148] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_1_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,192] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,193] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_7_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,193] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,194] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_2_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,219] [INFO] 
[engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,220] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_6_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,330] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,331] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_3_mp_rank_00_optim_states.pt Traceback (most recent call last): File "run_mlm.py", line 535, in <module> main() File "run_mlm.py", line 482, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/cklin/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1172, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/home/cklin/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1269, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/home/cklin/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1346, in _save_checkpoint self._rotate_checkpoints(use_mtime=True, output_dir=run_dir) File "/home/cklin/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1716, in _rotate_checkpoints shutil.rmtree(checkpoint) File "/usr/lib/python3.6/shutil.py", line 490, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/usr/lib/python3.6/shutil.py", line 488, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/mnt/experiments/roberta-large-mlm/checkpoint-23000' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Instead of swapping logic in the lines referenced above, `Trainer._sort_checkpoints()` might instead do ``` checkpoints_sorted.append(checkpoints_sorted[best_model_index]) checkpoints_sorted.remove(checkpoints_sorted[best_model_index]) ``` i.e., just move the best model to the end of the list. I believe this will guarantee that the checkpoints (excluding the best model) will be deleted earliest first. <!-- A clear and concise description of what you would expect to happen. -->
04-25-2021 00:38:17
04-25-2021 00:38:17
I believe that is a correct workaround. Would you like to make a PR with it?<|||||>Sure, happy to.<|||||>@chitkwan, are you still inspired to make a PR to fix this? Thank you!<|||||>Oh this has been fixed in #11748 I believe. Sorry I did not reference it in this issue.<|||||>ah yes! @chitkwan, could you please validate that the `master` branch with the fix works for you and close this issue if it is so? Thank you!<|||||>Sorry -- this fell off my todo list but thank you for the fix. The original race condition I reported may not be easy to reproduce but I'll give it a go and report back. <|||||>I reran my failure condition and it no longer fails, so I think this can be closed. Thanks!
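A standalone illustration of the reordering proposed in the issue (made-up function name, plain strings instead of real checkpoint paths): moving the best checkpoint to the end of the oldest-first list means rotation never deletes the most recent checkpoint.

```python
# Sketch: keep the best checkpoint at the tail so rotation deletes oldest first.
def order_for_rotation(checkpoints_sorted, best_model_index):
    ordered = list(checkpoints_sorted)
    best = ordered.pop(best_model_index)
    ordered.append(best)
    return ordered

# Corner case from the report: the best checkpoint is also the oldest one.
print(order_for_rotation(["ckpt-1000", "ckpt-2000", "ckpt-3000"], 0))
# -> ['ckpt-2000', 'ckpt-3000', 'ckpt-1000']; the newest ckpt-3000 is no longer first to go.
```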
transformers
11,420
closed
[Question] Implementing character based tokenizer
Hi team, what's the recommended approach for implementing a character based tokenizer? Thanks
04-24-2021 21:53:18
04-24-2021 21:53:18
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
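Since the question above never got an answer in-thread, here is one minimal, unofficial starting point: a plain character-level vocabulary built from a corpus, kept deliberately independent of any library API so it only illustrates the idea.

```python
# Minimal character-level tokenizer sketch (not a transformers API).
class CharTokenizer:
    def __init__(self, corpus, unk_token="<unk>"):
        chars = sorted(set("".join(corpus)))
        self.vocab = {unk_token: 0, **{c: i + 1 for i, c in enumerate(chars)}}
        self.ids_to_tokens = {i: t for t, i in self.vocab.items()}
        self.unk_id = self.vocab[unk_token]

    def encode(self, text):
        return [self.vocab.get(c, self.unk_id) for c in text]

    def decode(self, ids):
        return "".join(self.ids_to_tokens.get(i, "") for i in ids)

tok = CharTokenizer(["hello world"])
print(tok.encode("hello"))               # [4, 3, 5, 5, 6] for this corpus
print(tok.decode(tok.encode("hello")))   # "hello"
```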