| column | type | values / range |
|---|---|---|
| repo | string | 1 distinct value |
| number | int64 | 1 to 25.3k |
| state | string | 2 distinct values |
| title | string | 1 to 487 chars |
| body | string | 0 to 234k chars |
| created_at | string | 19 chars |
| closed_at | string | 19 chars |
| comments | string | 0 to 293k chars |
transformers
9,410
closed
`pip install -e .[dev]` in Python 3.9.1+ fails because `jaxlib==0.1.55` cannot be found
## Environment info - `transformers` version: 4.2.0dev0 (the error is during the installation) - Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10 - Python version: 3.9.1 (the error occurs) -> 3.8.0 (the error does not occur) - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): 2.4.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help documentation: @sgugger ## Information This is the report of a bug that I encountered during the [dev] version of `transformers`. I try to create a conda environment to install `transformer [dev]` by `pip install -e .[dev]`, but failed due to the `jaxlib` version. ## To reproduce Git clone the forked `transformers` and update it to be `This branch is even with huggingface:master.` ``` sh $ git clone [email protected]:forest1988/transformers.git forest1988_transformers $ cd forest1988_transformers/ $ git remote add upstream https://github.com/huggingface/transformers.git $ git pull upstream main $ git pull upstream master $ git push origin master ``` Create a new conda env. ``` sh $ conda create -n transformers-for-contribute $ conda activate transformers-for-contribute ``` Then, try to install `transformers [dev]` by `pip install -e .[dev]`. ``` sh (transformers-for-contribute) ****@**** $ conda install pip Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: ****/.pyenv/versions/anaconda3-2020.07/envs/transformers-for-contribute added / updated specs: - pip The following packages will be downloaded: package | build ---------------------------|----------------- ca-certificates-2020.12.8 | h06a4308_0 121 KB certifi-2020.12.5 | py39h06a4308_0 140 KB openssl-1.1.1i | h27cfd23_0 2.5 MB pip-20.3.3 | py39h06a4308_0 1.8 MB python-3.9.1 | hdb3f193_2 18.1 MB setuptools-51.0.0 | py39h06a4308_2 726 KB wheel-0.36.2 | pyhd3eb1b0_0 33 KB ------------------------------------------------------------ Total: 23.4 MB The following NEW packages will be INSTALLED: _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main ca-certificates pkgs/main/linux-64::ca-certificates-2020.12.8-h06a4308_0 certifi pkgs/main/linux-64::certifi-2020.12.5-py39h06a4308_0 ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.33.1-h53a641e_7 libedit pkgs/main/linux-64::libedit-3.1.20191231-h14c3975_1 libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2 libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0 libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0 ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_1 openssl pkgs/main/linux-64::openssl-1.1.1i-h27cfd23_0 pip pkgs/main/linux-64::pip-20.3.3-py39h06a4308_0 python pkgs/main/linux-64::python-3.9.1-hdb3f193_2 readline pkgs/main/linux-64::readline-8.0-h7b6447c_0 setuptools pkgs/main/linux-64::setuptools-51.0.0-py39h06a4308_2 sqlite pkgs/main/linux-64::sqlite-3.33.0-h62c20be_0 tk pkgs/main/linux-64::tk-8.6.10-hbc83047_0 tzdata pkgs/main/noarch::tzdata-2020d-h14c3975_0 wheel pkgs/main/noarch::wheel-0.36.2-pyhd3eb1b0_0 xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0 zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3 Proceed ([y]/n)? 
y Downloading and Extracting Packages pip-20.3.3 | 1.8 MB | ################################################################################################################################################################# | 100% ca-certificates-2020 | 121 KB | ################################################################################################################################################################# | 100% python-3.9.1 | 18.1 MB | ################################################################################################################################################################# | 100% certifi-2020.12.5 | 140 KB | ################################################################################################################################################################# | 100% setuptools-51.0.0 | 726 KB | ################################################################################################################################################################# | 100% wheel-0.36.2 | 33 KB | ################################################################################################################################################################# | 100% openssl-1.1.1i | 2.5 MB | ################################################################################################################################################################# | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done (transformers-for-contribute) ****@**** $ pwd ****/workspace/Clone/forest1988_transformers (transformers-for-contribute) ****@**** $ pip install -e ".[dev]" Obtaining file:///****/workspace/Clone/forest1988_transformers Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done ERROR: Could not find a version that satisfies the requirement jaxlib==0.1.55; extra == "dev" (from transformers[dev]) ERROR: No matching distribution found for jaxlib==0.1.55; extra == "dev" (transformers-for-contribute) ****@**** $ pip install jaxlib==0.1.55 ERROR: Could not find a version that satisfies the requirement jaxlib==0.1.55 ERROR: No matching distribution found for jaxlib==0.1.55 ``` When I downgraded the python to 3.8 by `conda install python==3.8`, then `pip install -e ".[dev]"` works. I tried other versions of python installed via conda: - `conda install python==3.7` : OK - `conda install python==3.9` : the same error occurs ## Expected behavior Depending on the version of python we are using, we may find that the version of `jaxlib` specified in [`setup.py`](https://github.com/huggingface/transformers/blob/master/setup.py) is missing, and it causes `pip install -e .[dev]` failure. For the `transformers [dev]`, is it better not to use python 3.9+? (I apologize if I missed the explanation) If I change `jaxlib==0.1.55` to `jaxlib>=0.1.55` in `setup.py`, will it cause problems elsewhere?
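For illustration only, the question above would amount to relaxing the pin in the Flax-related extras of `setup.py`. The snippet below is a hypothetical sketch, not the actual `setup.py`, and whether a `>=` pin is safe depends on jax/jaxlib compatibility that the maintainers would need to confirm:
```python
# Hypothetical excerpt in the style of setup.py's extras_require; the names and
# version floors are placeholders for illustration, not the real dependency list.
extras_require = {
    "flax": [
        "jax>=0.2.0",      # assumed companion floor, for illustration only
        "jaxlib>=0.1.55",  # was: jaxlib==0.1.55, which has no wheel for Python 3.9
    ],
}
```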
01-05-2021 02:00:39
01-05-2021 02:00:39
I retried to install `transformers [dev]` with Python 3.9.1. The latest `git tag` in the cloned repository is `v4.2.1`. I assumed that the same error would occur, but in this time it failed in installing `tensorflow`. ``` sh ****@**** $ conda create -n transformers-py39-dev Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: ****/.pyenv/versions/anaconda3-2020.07/envs/transformers-py39-dev Proceed ([y]/n)? y Preparing transaction: done Verifying transaction: done Executing transaction: done # # To activate this environment, use # # $ conda activate transformers-py39-dev # # To deactivate an active environment, use # # $ conda deactivate ****@**** $ conda activate transformers-py39-dev (transformers-py39-dev) ****@**** $ conda install pip Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: ****/.pyenv/versions/anaconda3-2020.07/envs/transformers-py39-dev added / updated specs: - pip The following packages will be downloaded: package | build ---------------------------|----------------- setuptools-51.1.2 | py39h06a4308_4 743 KB ------------------------------------------------------------ Total: 743 KB The following NEW packages will be INSTALLED: _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main ca-certificates pkgs/main/linux-64::ca-certificates-2020.12.8-h06a4308_0 certifi pkgs/main/linux-64::certifi-2020.12.5-py39h06a4308_0 ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.33.1-h53a641e_7 libedit pkgs/main/linux-64::libedit-3.1.20191231-h14c3975_1 libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2 libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0 libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0 ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_1 openssl pkgs/main/linux-64::openssl-1.1.1i-h27cfd23_0 pip pkgs/main/linux-64::pip-20.3.3-py39h06a4308_0 python pkgs/main/linux-64::python-3.9.1-hdb3f193_2 readline pkgs/main/linux-64::readline-8.0-h7b6447c_0 setuptools pkgs/main/linux-64::setuptools-51.1.2-py39h06a4308_4 sqlite pkgs/main/linux-64::sqlite-3.33.0-h62c20be_0 tk pkgs/main/linux-64::tk-8.6.10-hbc83047_0 tzdata pkgs/main/noarch::tzdata-2020d-h14c3975_0 wheel pkgs/main/noarch::wheel-0.36.2-pyhd3eb1b0_0 xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0 zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3 ``` ``` sh (transformers-py39-dev) ****@**** $ pwd ****/workspace/Clone/transformers (transformers-py39-dev) ****@**** $ pip install -e ".[dev]" Obtaining file:///****/workspace/Clone/transformers Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done ERROR: Could not find a version that satisfies the requirement tensorflow>=2.3; extra == "dev" (from transformers[dev]) ERROR: No matching distribution found for tensorflow>=2.3; extra == "dev" ``` Is it possible that you have decided not to support Python 3.9+ at this time because of the compatibility with the libraries `transformers` depends on? I apologize if there is any misunderstanding. <|||||>You can use transformers without TensorFlow or FLAX installed, there is nothing in the code of transformers that is incompatible with Python 3.9. It looks like you want TensorFlow support for Python 3.9, which you should ask on the TensorFlow GitHub.<|||||>@sgugger Thank you for your comment. Excuse me for making you confused. It seems that there was a lack of information in my explanation. 
In this case, my aim is not to use transformers with TensorFlow or FLAX. What I'd like to do is install `transformers [dev]` so I can open PRs in the future, so I'm a bit confused about whether I can install it with Python 3.9+. I'm not familiar with installing `[dev]` versions of software, so I opened this issue to ask whether we can install the `[dev]` version of transformers with Python 3.9+ and open PRs with it. I can use Python <= 3.8, so this question is not urgent. I apologize for the confusion.<|||||>You will be able to open PRs without installing `transformers [dev]`, it just means you won't be able to run all the tests locally. `pip install transformers [torch, sentencepiece, tokenizers, testing, quality, ja, docs, sklearn, modelcreation]` might work to install all the dependencies except TensorFlow and Flax (I just took everything that is in dev and removed TensorFlow and Flax to create this command), but no guarantee. If you're not an advanced user, I would recommend sticking with Python 3.6 to 3.8 while waiting for TensorFlow and Flax to support Python 3.9, as installing things with it might have some challenges :-)<|||||>Hi @sgugger, Thank you for telling me how to install it! When I tried to open a PR before, the auto-formatting of the code didn't work properly (I think it was when I tried to open a PR in `datasets`, not in `transformers`), and I assumed that I had to use the `[dev]` version whenever I wanted to open a PR. Now I think that issue was caused by my not having installed the proper versions of `testing`, `quality`, and `docs` at the time. I would like to become an advanced user eventually, but not now, so I will use Python 3.6 to 3.8 for now. Thanks again!<|||||>Yep, getting the same error. On a fresh Python 3.9 conda env: `ERROR: No matching distribution found for jaxlib==0.1.55; extra == "dev"`<|||||>To fix it, I moved back to Python 3.8.8, and then `pip install -e ".[dev]"` worked fine.
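A minimal sketch of the reduced install suggested above, i.e. the `[dev]` extras minus TensorFlow and Flax. The extras names are copied from the comment and may drift between transformers versions, so this is illustrative rather than guaranteed to resolve on Python 3.9:
```python
import subprocess
import sys

# Editable install of transformers with the dev extras except TensorFlow and Flax.
# Run from the root of a transformers checkout; the extras list is taken from the comment above.
extras = "torch,sentencepiece,tokenizers,testing,quality,ja,docs,sklearn,modelcreation"
subprocess.check_call([sys.executable, "-m", "pip", "install", "-e", f".[{extras}]"])
```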
transformers
9,409
closed
[trainer] group fp16 args together
This PR proposes a purely cosmetic change that puts all the fp16 args together, so they are easier to manage and read. @sgugger
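Purely for illustration, a hypothetical `TrainingArguments`-style dataclass showing what "grouped together" means here; the field names below are examples, not the full argument list:
```python
from dataclasses import dataclass, field


@dataclass
class ExampleTrainingArguments:
    # Unrelated arguments live elsewhere in the class...
    output_dir: str = field(default="output", metadata={"help": "Where to write checkpoints."})

    # ...while the fp16-related arguments sit next to each other:
    fp16: bool = field(default=False, metadata={"help": "Whether to use mixed-precision training."})
    fp16_opt_level: str = field(default="O1", metadata={"help": "Apex AMP optimization level."})
```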
01-05-2021 01:14:38
01-05-2021 01:14:38
Thanks!
transformers
9,408
closed
[autoformatters] wrapping destroying items/lists
# 🚀 Feature request Would it be possible to make the auto-wrappers respect items/lists? e.g. I ended up with:
```
""" number of training steps is either 1. args.max_steps if --max_steps > 1 2. else derive from dataset if we can get its size """
```
Not only is it broken, it's unreadable. The original was:
```
"""
number of training steps is either
1. args.max_steps if --max_steps > 1
2. else derive from dataset if we can get its size
"""
```
Ideally it should not remove the newlines before bullets (`*`/`-`) and numbered items (`1.`). I also am not sure why there is a need to merge lines when the writer meant them to be shorter. I get shortening long lines, but why can't short lines be left alone, which would be the case in this example? It looks like the only way I can enforce readable content is to inject paragraphs. Thank you! @sgugger
01-04-2021 23:42:32
01-04-2021 23:42:32
The styling script does indeed break your list in this instance, which is kind of a bug that is a feature instead. Let me explain. If you use sphinx to convert this docstring to HTML, here is the result that it will produce: ![image](https://user-images.githubusercontent.com/35901082/103658226-ef827800-4f38-11eb-9388-7a311b30e00d.png) So the styler is really only showing you in advance that there is going to be a problem with your list when everything is put in the same paragraph. To avoid the breaking (and to properly render your list in the docs), you have to add a new empty line before the list:
```
"""
number of training steps is either

1. args.max_steps if --max_steps > 1
2. else derive from dataset if we can get its size
"""
```
I know it's a bit annoying for docstrings that are just there as internal documentation and not really designed to be shown in the main documentation, but the script can't guess which docstrings to check and which not...

> I also am not sure why there is a need to merge lines when the writer meant them to be shorter.

Again, this will be shown as one paragraph in the actual documentation. If you want to keep lines separated, they need to have an extra new line in-between.<|||||>Thank you for explaining that it is Sphinx that is lacking here. Could the autoformatter detect such situations and fix them so that the list stays a list by inserting a new line, rather than unwrapping the whole thing? If such a parser would be too complicated, we could make it easier by requiring a stricter format. Usually, in English a proper list is preceded by a colon, as in:
```
Here is what you do:
    1. ....
    2. ....
```
So `r':\s*[\r\n]+\s+(\d+\.|[\-\*] )'` would match 3 types of lists.
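A small sketch of the proposed heuristic, assuming a Python `re`-based check (the styling script's internals may differ): treat a wrapped block as a list worth preserving only when it is introduced by a colon and the following indented line starts with `1.`, `-`, or `*`.
```python
import re

# The pattern proposed above: a colon, a line break, indentation, then a list marker.
LIST_AFTER_COLON = re.compile(r":\s*[\r\n]+\s+(\d+\.|[\-\*] )")

docstring = """number of training steps is either:
    1. args.max_steps if --max_steps > 1
    2. else derive from dataset if we can get its size
"""
# True -> the styler would leave these lines unwrapped instead of merging them.
print(bool(LIST_AFTER_COLON.search(docstring)))
```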
transformers
9,407
closed
Allow example to use a revision and work with private models
# What does this PR do? This PR adds the ability, in the example scripts, to:
- pick a particular revision for a model checkpoint
- use private models when the user is logged in

I just did `run_glue` as a proof of concept and will duplicate this to all examples and the new example template if it suits everyone.
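Roughly, the two new script arguments end up being forwarded to `from_pretrained`; a minimal sketch (the checkpoint name and revision below are placeholders, and `use_auth_token=True` assumes a token cached by `huggingface-cli login`):
```python
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

model_name = "my-org/my-private-model"  # hypothetical private checkpoint
revision = "main"                       # could also be a branch, tag, or commit hash

# Both options are passed straight through to from_pretrained.
config = AutoConfig.from_pretrained(model_name, revision=revision, use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision, use_auth_token=True)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, revision=revision, use_auth_token=True, config=config
)
```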
01-04-2021 22:33:28
01-04-2021 22:33:28
+1 on the `model_revision` part. On `use_auth_token`, I was thinking we could try to implement auto-use of the token _iff_ the model is not public (i.e. send the token for non-existent models and private models, as the server doesn't distinguish those two cases when unauthorized – you get a 404 in both cases; said differently, if you don't have access to a model you shouldn't be able to tell whether it's an existing private model). This will require some implementation changes in file_utils though, so it might take a bit of time. If you think it's helpful to expose this PR's manual option first, I'm ok with that.<|||||>@LysandreJik Yep, totally right! I won't personally get around to adding the feature in file_utils/huggingface_hub in the next 2-3 weeks though, so it may be worth merging it like this in the meantime :)<|||||>I think it's important to provide the option right now to let users play with their private models in those scripts. We can have that flag become `None` later on and default to the right thing once the implementation in `file_utils` permits it, then remove it entirely a bit later.<|||||>sounds good<|||||>Sounds good!
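A rough sketch of the "send the token only when it is actually needed" idea discussed above: try anonymously first and retry with the cached token on a 404. The endpoint URL and token handling here are assumptions for illustration only, not the eventual `file_utils` implementation.
```python
from typing import Optional

import requests

HUB_MODEL_INFO = "https://huggingface.co/api/models/{model_id}"  # assumed endpoint


def can_access(model_id: str, token: Optional[str]) -> bool:
    """Check model visibility, falling back to the auth token only on a 404."""
    url = HUB_MODEL_INFO.format(model_id=model_id)
    resp = requests.get(url)
    if resp.status_code == 404 and token is not None:
        # A 404 is returned both for missing and for private models, so retry with the token.
        resp = requests.get(url, headers={"authorization": f"Bearer {token}"})
    return resp.ok
```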
transformers
9,406
closed
Unable to train xlnet with tensorflow
## Environment info - `transformers` version: '2.0.0' - Platform: jupyter notebook - Python version: 3.7.6 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.1.0 GPU - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @sgugger, @TevenLeScao, @jplu ## Information Model I am using (Bert, XLNet ...): XLNet The problem arises when using: my own modified scripts: (give details below) The tasks I am working on is: my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` # I get my input, output from a dataframe. It's just a series of text and a series of # integers representing classes. x = df['description'] y_label = pd.Categorical(df['target']) y_cat = y_label.categories y = y_label.codes n_label = len(y_cat) # I use the tokenizer. Then convert it to a numpy array xlnet_tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") train_tokenized_inputs = [xlnet_tokenizer.encode(text) for text in x.values.tolist()] # It needs to be at least 1 and no more than 2000 train_max_length = max(1,min(np.array([len(inp) for inp in train_tokenized_inputs]).max(), 2000)) train_padded_inputs = (tf.keras.preprocessing.sequence.pad_sequences(train_tokenized_inputs, maxlen=train_max_length, value=0, padding='post', truncating='post',dtype='int32')) # I use the xlnet model clf = TFAutoModelForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=n_label) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) clf.compile(optimizer='adam',loss=loss) clf.fit(x=train_padded_inputs, y=y, batch_size=32, epochs=1, verbose=1, callbacks=None, validation_split=0.2, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False,) ``` The error message is: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-68-c147be84f56e> in <module> 15 max_queue_size=10, 16 workers=1, ---> 17 use_multiprocessing=False,) /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs) 817 max_queue_size=max_queue_size, 818 workers=workers, --> 819 use_multiprocessing=use_multiprocessing) 820 821 def evaluate(self, /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs) 233 max_queue_size=max_queue_size, 234 workers=workers, --> 235 use_multiprocessing=use_multiprocessing) 236 237 total_samples = _get_total_number_of_samples(training_data_adapter) /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing) 550 batch_size=batch_size, 551 check_steps=False, 
--> 552 steps=steps_per_epoch) 553 (x, y, sample_weights, 554 val_x, val_y, /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset) 2344 # First, we build the model on the fly if necessary. 2345 if not self.inputs: -> 2346 all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y) 2347 is_build_called = True 2348 else: /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _build_model_with_inputs(self, inputs, targets) 2570 else: 2571 cast_inputs = inputs -> 2572 self._set_inputs(cast_inputs) 2573 return processed_inputs, targets, is_dict_inputs 2574 /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _set_inputs(self, inputs, outputs, training) 2657 kwargs['training'] = training 2658 try: -> 2659 outputs = self(inputs, **kwargs) 2660 except NotImplementedError: 2661 # This Model or a submodel is dynamic and hasn't overridden /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs) 771 not base_layer_utils.is_in_eager_or_tf_function()): 772 with auto_control_deps.AutomaticControlDependencies() as acd: --> 773 outputs = call_fn(cast_inputs, *args, **kwargs) 774 # Wrap Tensors in `outputs` in `tf.identity` to avoid 775 # circular dependencies. /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs) 235 except Exception as e: # pylint:disable=broad-except 236 if hasattr(e, 'ag_error_metadata'): --> 237 raise e.ag_error_metadata.to_exception(e) 238 else: 239 raise TypeError: in converted code: /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py:916 call * output = self.sequence_summary(output) /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:773 __call__ outputs = call_fn(cast_inputs, *args, **kwargs) /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:459 call * output = self.first_dropout(output) /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py:416 converted_call return py_builtins.overload_of(f)(*args) TypeError: 'NoneType' object is not callable ``` In addition, I tried to use TFTrainer in case I could solve my problem with it. `from transformers import TFTrainer` Gets this error message ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-51-aece35bcf827> in <module> ----> 1 from transformers import TFTrainer ImportError: cannot import name 'TFTrainer' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py) ``` ## Expected behavior I expect the code to run and the model to be fine-tuned on my dataset. I expect that I shouldn't need the TFTrainer as the explanation on huggingface.co says the model is a standard tensorflow 2 layer. But I expect that I should be able to import it.
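For context, a minimal sketch (not the reporter's code, and the thread below converges on the same point) of the input format the TF models generally expect, assuming a recent transformers/TF 2.x setup: tokenize to a dict of arrays, pair it with the labels in a `tf.data.Dataset`, and let Keras handle batching. The checkpoint name and data are placeholders.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
texts = ["first ticket description", "second ticket description"]  # placeholder data
labels = [0, 1]

# Dict of input_ids / attention_mask / token_type_ids arrays, paired with the labels.
enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="np")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

model = TFAutoModelForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=1)
```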
01-04-2021 21:26:47
01-04-2021 21:26:47
Hello! Can you try with master instead of the old `2.0.0` release? In order to know if the problem is still here or not.<|||||>By "with master", do you mean installed from source? git clone https://github.com/huggingface/transformers.git cd transformers pip install -e .<|||||>After installing from source transformers 4.2.0.dev0, I have this error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-2-1503c944af5c> in <module> ----> 1 from transformers import AutoTokenizer, TFAutoModel ~/transformers/src/transformers/__init__.py in <module> 38 39 # Data ---> 40 from .data import ( 41 DataProcessor, 42 InputExample, ~/transformers/src/transformers/data/__init__.py in <module> 18 19 from .metrics import glue_compute_metrics, xnli_compute_metrics ---> 20 from .processors import ( 21 DataProcessor, 22 InputExample, ~/transformers/src/transformers/data/processors/__init__.py in <module> 18 19 from .glue import glue_convert_examples_to_features, glue_output_modes, glue_processors, glue_tasks_num_labels ---> 20 from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features 21 from .utils import DataProcessor, InputExample, InputFeatures, SingleSentenceClassificationProcessor 22 from .xnli import xnli_output_modes, xnli_processors, xnli_tasks_num_labels ~/transformers/src/transformers/data/processors/squad.py in <module> 22 23 from ...file_utils import is_tf_available, is_torch_available ---> 24 from ...models.bert.tokenization_bert import whitespace_tokenize 25 from ...tokenization_utils_base import BatchEncoding, PreTrainedTokenizerBase, TruncationStrategy 26 from ...utils import logging ~/transformers/src/transformers/models/bert/__init__.py in <module> 43 44 if is_tf_available(): ---> 45 from .modeling_tf_bert import ( 46 TF_BERT_PRETRAINED_MODEL_ARCHIVE_LIST, 47 TFBertEmbeddings, ~/transformers/src/transformers/models/bert/modeling_tf_bert.py in <module> 21 import tensorflow as tf 22 ---> 23 from ...activations_tf import get_tf_activation 24 from ...file_utils import ( 25 MULTIPLE_CHOICE_DUMMY_INPUTS, ~/transformers/src/transformers/activations_tf.py in <module> 66 "gelu": tf.keras.layers.Activation(gelu), 67 "relu": tf.keras.activations.relu, ---> 68 "swish": tf.keras.activations.swish, 69 "silu": tf.keras.activations.swish, 70 "gelu_new": tf.keras.layers.Activation(gelu_new), AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish' ``` It's due to the line of code: from transformers import AutoTokenizer, TFAutoModel<|||||>The next release of transformers (from source) now requires TF >= 2.3<|||||>It seems to work now, but I have a lot of warnings. Should I be worried about any of them? ``` WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb1e4e82280>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb1e4e82280>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb1e4e82280>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss. [... the same `WARNING:tensorflow:Gradients do not exist for variables [...] when minimizing the loss.` warning is repeated several more times, along with another copy of the note about `output_attentions`, `output_hidden_states` and `use_cache` ...] ```<|||||>They look ok for me!<|||||>I'm fine-tuning the model on my dataset and the accuracy is 0.02 after one epoch and it didn't really change during training. Also, it takes 7 hours per epoch. I'm wondering if the low accuracy might be due to the things mentioned in the warnings.
`WARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', …, 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.` If there are no gradients, it cannot learn. Do you know if it's really correct that all those layers have no gradient? I would expect the layers to have gradients. ``` WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fa069898210>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fa069898210>> and will run it as-is. ``` This warning seems to say that it is a bug worth reporting to the TensorFlow team. Could it be the cause of the slow training? `The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).` Do those parameters have to be set? I have not set them or tried to modify them; I'm simply using the .fit method that all TensorFlow 2 models have. <|||||>Also, do we need to use training=True somewhere?
It's mentioned to use it when calling the model directly, but I'm calling .fit() rather than using the model's call, so I don't have this option as far as I know. I'm asking because the training doesn't seem to be working and I'm wondering if it could be the problem.<|||||>Unfortunately, no issues for me with `TFXLNetForSequenceClassification`: I just tested over the MRPC dataset and got around 0.93 accuracy on training and 0.83 accuracy on validation. The version I used is the master branch from source. The issue might come from the way you are training the model. Are you using the same script that you shared in your first post?<|||||>Yes, but some parameters are different. For example, I needed a batch size of 1 to fit in memory; even 2 crashes. Also, I'm using callbacks, but I don't think they could stop the model from learning.
```python
# Save the model after each epoch.
ModelCheckpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=self.params['save_model_weight_filepath'] + '_{epoch:02d}.hdf5',
    monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False,
    mode='auto', save_freq='epoch'
)
# Stop when val loss stops decreasing.
EarlyStopping_callback = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=self.params['min_delta'], patience=self.params['patience'],
    verbose=0, mode='auto', baseline=None, restore_best_weights=True
)

history = self.clf.fit(x=padded_inputs, y=y, batch_size=1, epochs=40, verbose=1,
                       validation_split=0.2, max_queue_size=10, workers=-1,
                       use_multiprocessing=True,
                       callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])
```
After epoch 2, the accuracies and losses are loss: 6.7701 - accuracy: 0.0197 - val_loss: 11.5031 - val_accuracy: 0.0025. Epoch 3 is still in progress with loss: 6.7662 - accuracy: 0.0204. It doesn't seem to learn at all. Also, I have this warning `WARNING:tensorflow:Callbacks method `on_test_batch_end` is slow compared to the batch time (batch time: 0.0089s vs `on_test_batch_end` time: 0.6593s). Check your callbacks.` but none of my callbacks are used on batch_end, they are used on epoch ends, so infrequently, and shouldn't affect the time too much.<|||||>Ok, from what I see in your script, the reason your model doesn't learn anything is that the labels are never seen by the model, which is expected given the way you set up your dataset. The models in the lib have to be fed in a specific way: the data have to be a `Tuple(x, y)` where `x` can be either a list or a dict of tf.Tensor or np.ndarray, same for `y`. And then feed your model with: ```python history = model.fit( train_dataset, epochs=3, ) ``` You can see how to do this in our examples or on our datasets website https://huggingface.co/docs/datasets/torch_tensorflow.html to know how to format your dataset.<|||||>I tried that (using a tuple with x and y; my x and y were already numpy arrays) and I got an error.
``` ~/ticket-analysis-releasev3/ticket-analysis/src/model/xlnet/xlnet.py in fit(self, df) 181 workers=self.params['workers'], 182 use_multiprocessing=self.params['use_multiprocessing'], --> 183 callbacks=[ModelCheckpoint_callback, EarlyStopping_callback]) 184 self.history_df = pd.DataFrame({'epochs':history.epoch, 'loss': history.history['loss'], 185 'validation_loss': history.history['val_loss'], 'accuracy': history.history['accuracy'], ~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs) 106 def _method_wrapper(self, *args, **kwargs): 107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access --> 108 return method(self, *args, **kwargs) 109 110 # Running inside `run_distribute_coordinator` already. ~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1096 batch_size=batch_size): 1097 callbacks.on_train_batch_begin(step) -> 1098 tmp_logs = train_function(iterator) 1099 if data_handler.should_sync: 1100 context.async_wait() ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 778 else: 779 compiler = "nonXla" --> 780 result = self._call(*args, **kwds) 781 782 new_tracing_count = self._get_tracing_count() ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 821 # This is the first call of __call__, so we have to initialize. 822 initializers = [] --> 823 self._initialize(args, kwds, add_initializers_to=initializers) 824 finally: 825 # At this point we know that the initialization is complete (or less ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 695 self._concrete_stateful_fn = ( 696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 697 *args, **kwds)) 698 699 def invalid_creator_scope(*unused_args, **unused_kwds): ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 2853 args, kwargs = None, None 2854 with self._lock: -> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs) 2856 return graph_function 2857 ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3211 3212 self._function_cache.missed.add(call_context_key) -> 3213 graph_function = self._create_graph_function(args, kwargs) 3214 self._function_cache.primary[cache_key] = graph_function 3215 return graph_function, args, kwargs ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3073 arg_names=arg_names, 3074 override_flat_arg_shapes=override_flat_arg_shapes, -> 3075 capture_by_value=self._capture_by_value), 3076 self._function_attributes, 3077 function_spec=self.function_spec, ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, 
capture_by_value, override_flat_arg_shapes) 984 _, original_func = tf_decorator.unwrap(python_func) 985 --> 986 func_outputs = python_func(*func_args, **func_kwargs) 987 988 # invariant: `func_outputs` contains only Tensors, CompositeTensors, ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 598 # __wrapped__ allows AutoGraph to swap in a converted function. We give 599 # the function a weak reference to itself to avoid a reference cycle. --> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds) 601 weak_wrapped_fn = weakref.ref(wrapped_fn) 602 ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 971 except Exception as e: # pylint:disable=broad-except 972 if hasattr(e, "ag_error_metadata"): --> 973 raise e.ag_error_metadata.to_exception(e) 974 else: 975 raise ValueError: in user code: /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function * return step_function(self, iterator) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:796 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica return fn(*args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:789 run_step ** outputs = model.train_step(data) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:757 train_step self.trainable_variables) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:2737 _minimize trainable_variables)) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:562 _aggregate_gradients filtered_grads_and_vars = _filter_grads(grads_and_vars) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1271 _filter_grads ([v.name for _, v in grads_and_vars],)) ValueError: No gradients provided for any variable: ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/word_embedding/weight:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/layer_norm/beta:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/o:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/ff/layer_norm/beta:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._5/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_r_bias:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/ff/layer_1/bias:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._10/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/sequence_summary/summary/kernel:0', 'tfxl_net_for_sequence_classification/sequence_summary/summary/bias:0', 'tfxl_net_for_sequence_classification/logits_proj/kernel:0', 'tfxl_net_for_sequence_classification/logits_proj/bias:0']. ```<|||||>I also tried with dataset, but I got this error ``` Some layers from the model checkpoint at xlnet-base-cased were not used when initializing TFXLNetForSequenceClassification: ['lm_loss'] - This IS expected if you are initializing TFXLNetForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFXLNetForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some layers of TFXLNetForSequenceClassification were not initialized from the model checkpoint at xlnet-base-cased and are newly initialized: ['sequence_summary', 'logits_proj'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Epoch 1/40 WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f21748f1210>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f21748f1210>> and will run it as-is. Please report this to the TensorFlow team. 
When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-db016531efec> in <module> 1 # Try to use tensorflow dataset 2 st = time.time() ----> 3 model.fit(df) 4 print('time', time.time()-st) ~/ticket-analysis-releasev3/ticket-analysis/src/model/xlnet/xlnet.py in fit(self, df) 187 workers=self.params['workers'], 188 use_multiprocessing=self.params['use_multiprocessing'], --> 189 callbacks=[ModelCheckpoint_callback, EarlyStopping_callback]) 190 self.history_df = pd.DataFrame({'epochs':history.epoch, 'loss': history.history['loss'], 191 'validation_loss': history.history['val_loss'], 'accuracy': history.history['accuracy'], ~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs) 106 def _method_wrapper(self, *args, **kwargs): 107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access --> 108 return method(self, *args, **kwargs) 109 110 # Running inside `run_distribute_coordinator` already. ~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1096 batch_size=batch_size): 1097 callbacks.on_train_batch_begin(step) -> 1098 tmp_logs = train_function(iterator) 1099 if data_handler.should_sync: 1100 context.async_wait() ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 778 else: 779 compiler = "nonXla" --> 780 result = self._call(*args, **kwds) 781 782 new_tracing_count = self._get_tracing_count() ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 821 # This is the first call of __call__, so we have to initialize. 
822 initializers = [] --> 823 self._initialize(args, kwds, add_initializers_to=initializers) 824 finally: 825 # At this point we know that the initialization is complete (or less ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 695 self._concrete_stateful_fn = ( 696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 697 *args, **kwds)) 698 699 def invalid_creator_scope(*unused_args, **unused_kwds): ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 2853 args, kwargs = None, None 2854 with self._lock: -> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs) 2856 return graph_function 2857 ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3211 3212 self._function_cache.missed.add(call_context_key) -> 3213 graph_function = self._create_graph_function(args, kwargs) 3214 self._function_cache.primary[cache_key] = graph_function 3215 return graph_function, args, kwargs ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3073 arg_names=arg_names, 3074 override_flat_arg_shapes=override_flat_arg_shapes, -> 3075 capture_by_value=self._capture_by_value), 3076 self._function_attributes, 3077 function_spec=self.function_spec, ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 984 _, original_func = tf_decorator.unwrap(python_func) 985 --> 986 func_outputs = python_func(*func_args, **func_kwargs) 987 988 # invariant: `func_outputs` contains only Tensors, CompositeTensors, ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 598 # __wrapped__ allows AutoGraph to swap in a converted function. We give 599 # the function a weak reference to itself to avoid a reference cycle. 
--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds) 601 weak_wrapped_fn = weakref.ref(wrapped_fn) 602 ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 971 except Exception as e: # pylint:disable=broad-except 972 if hasattr(e, "ag_error_metadata"): --> 973 raise e.ag_error_metadata.to_exception(e) 974 else: 975 raise ValueError: in user code: /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function * return step_function(self, iterator) /home/jovyan/transformers/src/transformers/models/xlnet/modeling_tf_xlnet.py:1452 call * transformer_outputs = self.transformer( /home/jovyan/transformers/src/transformers/models/xlnet/modeling_tf_xlnet.py:625 call * inputs["input_ids"] = tf.transpose(inputs["input_ids"], perm=(1, 0)) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper ** return target(*args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2107 transpose_v2 return transpose(a=a, perm=perm, name=name, conjugate=conjugate) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2188 transpose return transpose_fn(a, perm, name=name) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py:11535 transpose "Transpose", x=x, perm=perm, name=name) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper attrs=attr_protos, op_def=op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py:593 _create_op_internal compute_device) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:3485 _create_op_internal op_def=op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1975 __init__ control_input_ops, op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1815 _create_c_op raise ValueError(str(e)) ValueError: Dimension must be 3 but is 2 for '{{node tfxl_net_for_sequence_classification/transformer/transpose}} = Transpose[T=DT_INT32, Tperm=DT_INT32](IteratorGetNext, tfxl_net_for_sequence_classification/transformer/transpose/perm)' with input shapes: [1,11929,2000], [2]. ``` For dataset, I used this code ``` train_dataset = (padded_inputs, y) train_dataset, val_dataset = train_test_split(train_dataset, test_size=0.02) train_dataset = tf.data.Dataset.from_tensors(train_dataset) val_dataset = tf.data.Dataset.from_tensors(val_dataset) # Fit model self.clf = TFAutoModelForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=self.n_label) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) self.clf.compile(optimizer='adam',loss=loss,metrics=['accuracy']) history = self.clf.fit( train_dataset, #x=padded_inputs, y=y, validation_data = val_dataset, batch_size=self.params['batchsize'], epochs=self.params['epochs'], verbose=1, #validation_split=self.params['validation_split'], max_queue_size=self.params['max_queue_size'], workers=self.params['workers'], use_multiprocessing=self.params['use_multiprocessing'], callbacks=[ModelCheckpoint_callback, EarlyStopping_callback]) ```<|||||>This is still not ok. 
Here is an example for MRPC that you can take inspiration from:
```
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset('glue', 'mrpc', split='train')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True)
dataset.set_format(type='numpy', columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
features = {x: dataset[x] for x in ['input_ids', 'token_type_ids', 'attention_mask']}
tfdataset = tf.data.Dataset.from_tensor_slices((features, dataset["label"])).batch(1)
```
And your data must have the shape:
```
<BatchDataset shapes: ({input_ids: (None, 512), token_type_ids: (None, 512), attention_mask: (None, 512)}, (None,)), types: ({input_ids: tf.int64, token_type_ids: tf.int64, attention_mask: tf.int64}, tf.int32)>
```
It is a tuple whose first element is a dict of tensors (built from numpy arrays) and whose second element is the label ids.<|||||>What kind of format or object is the dataset obtained from `dataset = load_dataset('glue', 'mrpc', split='train')`? I'm not loading a public dataset but using my own, so I can't take this part from the code. Do you know how I can generate it from an input numpy array and a label numpy array?

Also, I don't think XLNet uses `token_type_ids` and `attention_mask`, so should I use
```
dataset.set_format(type='numpy', columns=['input_ids', 'label'])
features = {x: dataset[x] for x in ['input_ids']}
```
<|||||>The example I gave is just to show you how your dataset should look. And yes, XLNet can take both `attention_mask` and `token_type_ids` arguments. The steps are simple:
1. Tokenize your dataset
2. Create a tf.data.Dataset and format it so it looks like what I showed you: `({"input_ids": [[ex1],[ex2],...], "attention_mask": [[ex1],[ex2],...], "token_type_ids": [[ex1],[ex2],...]}, [label_id_ex1, label_id_ex2, ...])`
3. Run your training with `model.fit(training_dataset, epochs=3)`
And that's it. <|||||>Okay, I'm using
```
x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.02)

tokenized_inputs = [xlnet_tokenizer.encode(text) for text in x_train.values.tolist()]
max_length = max(1, min(np.array([len(inp) for inp in tokenized_inputs]).max(), self.params['MAX_LENGTH']))
padded_inputs = (tf.keras.preprocessing.sequence.pad_sequences(tokenized_inputs, maxlen=max_length, value=0,
                 padding='post', truncating='post', dtype='int32'))
train_dataset = tf.data.Dataset.from_tensor_slices(({"input_ids": padded_inputs}, y_train)).batch(1)

val_tokenized_inputs = [xlnet_tokenizer.encode(text) for text in x_val.values.tolist()]
val_padded_inputs = (tf.keras.preprocessing.sequence.pad_sequences(val_tokenized_inputs, maxlen=max_length, value=0,
                     padding='post', truncating='post', dtype='int32'))
val_dataset = tf.data.Dataset.from_tensor_slices(({"input_ids": x_val}, y_val)).batch(1)

print('dataset', train_dataset, val_dataset)

history = self.clf.fit(
    train_dataset,
    validation_data=val_dataset,
    batch_size=self.params['batchsize'],
    epochs=self.params['epochs'],
    verbose=1,
    max_queue_size=self.params['max_queue_size'],
    workers=self.params['workers'],
    use_multiprocessing=self.params['use_multiprocessing'],
    callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])
```
The format is
```
<BatchDataset shapes: ({input_ids: (None, 2000)}, (None,)), types: ({input_ids: tf.int32}, tf.int16)>
```
Should it work?
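For reference, here is a minimal end-to-end sketch of the three steps described above, applied to in-memory data instead of a `datasets` object. This is an editorial sketch rather than a reply from the thread: `texts` and `int_labels` are placeholder names for your own raw strings and integer class ids.
```
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Placeholder data -- replace with your own texts and integer label ids.
texts = ["first example text", "second example text"]
int_labels = np.array([0, 1])

# 1. Tokenize the whole list at once; padding/truncation gives rectangular arrays.
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
encodings = tokenizer(texts, padding=True, truncation=True, max_length=128)
features = {name: np.array(values) for name, values in encodings.items()}

# 2. Build the (features_dict, labels) dataset: one slice per example.
train_dataset = tf.data.Dataset.from_tensor_slices((features, int_labels)).batch(2)

# 3. Compile and train.
model = TFAutoModelForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
model.fit(train_dataset, epochs=1)
```
The key point is that every array in `features` has shape `(num_examples, sequence_length)`, so one slice of the dataset corresponds to exactly one example.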
Also, for the prediction, is it correct to use pred_dataset = tf.data.Dataset.from_tensor_slices(({"input_ids":x_train})).batch(1) since the model should not use the y at prediction time. I'm getting this error right now ``` --------------------------------------------------------------------------- InternalError Traceback (most recent call last) <ipython-input-6-db016531efec> in <module> 1 # Try to use tensorflow dataset 2 st = time.time() ----> 3 model.fit(df) 4 print('time', time.time()-st) ~/ticket-analysis-releasev3/ticket-analysis/src/model/xlnet/xlnet.py in fit(self, df) 168 169 # Fit model --> 170 self.clf = TFAutoModelForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=self.n_label) 171 loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) 172 self.clf.compile(optimizer='adam',loss=loss,metrics=['accuracy']) /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 384 return TFBertForSequenceClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs) 385 elif 'xlnet' in pretrained_model_name_or_path: --> 386 return TFXLNetForSequenceClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs) 387 elif 'xlm' in pretrained_model_name_or_path: 388 return TFXLMForSequenceClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs) /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 266 267 inputs = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]) --> 268 ret = model(inputs, training=False) # build the network with dummy inputs 269 270 assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file) ~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 983 984 with ops.enable_auto_cast_variables(self._compute_dtype_object): --> 985 outputs = call_fn(inputs, *args, **kwargs) 986 987 if self._activity_regularizer: /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py in call(self, inputs, **kwargs) 911 912 def call(self, inputs, **kwargs): --> 913 transformer_outputs = self.transformer(inputs, **kwargs) 914 output = transformer_outputs[0] 915 ~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 983 984 with ops.enable_auto_cast_variables(self._compute_dtype_object): --> 985 outputs = call_fn(inputs, *args, **kwargs) 986 987 if self._activity_regularizer: /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py in call(self, inputs, attention_mask, mems, perm_mask, target_mapping, token_type_ids, input_mask, head_mask, training) 607 608 ##### Positional encoding --> 609 pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz, dtype=dtype_float) 610 pos_emb = self.dropout(pos_emb, training=training) 611 /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py in relative_positional_encoding(self, qlen, klen, bsz, dtype) 490 if self.clamp_len > 0: 491 fwd_pos_seq = tf.clip_by_value(fwd_pos_seq, -clamp_len, clamp_len) --> 492 pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 493 494 return pos_emb /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py in positional_embedding(pos_seq, inv_freq, bsz) 437 @staticmethod 438 def 
positional_embedding(pos_seq, inv_freq, bsz=None): --> 439 sinusoid_inp = tf.einsum('i,d->id', pos_seq, inv_freq) 440 pos_emb = tf.concat([tf.sin(sinusoid_inp), tf.cos(sinusoid_inp)], axis=-1) 441 pos_emb = pos_emb[:, None, :] ~/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs) 199 """Call target, and fall back on dispatchers if there is a TypeError.""" 200 try: --> 201 return target(*args, **kwargs) 202 except (TypeError, ValueError): 203 # Note: convert_to_eager_tensor currently raises a ValueError, not a ~/.local/lib/python3.7/site-packages/tensorflow/python/ops/special_math_ops.py in einsum(equation, *inputs, **kwargs) 682 - number of inputs or their shapes are inconsistent with `equation`. 683 """ --> 684 return _einsum_v2(equation, *inputs, **kwargs) 685 686 ~/.local/lib/python3.7/site-packages/tensorflow/python/ops/special_math_ops.py in _einsum_v2(equation, *inputs, **kwargs) 1111 if ellipsis_label: 1112 resolved_equation = resolved_equation.replace(ellipsis_label, '...') -> 1113 return gen_linalg_ops.einsum(inputs, resolved_equation) 1114 1115 # Send fully specified shapes to opt_einsum, since it cannot handle unknown ~/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_linalg_ops.py in einsum(inputs, equation, name) 1086 return _result 1087 except _core._NotOkStatusException as e: -> 1088 _ops.raise_from_not_ok_status(e, name) 1089 except _core._FallbackException: 1090 pass ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name) 6841 message = e.message + (" name: " + name if name is not None else "") 6842 # pylint: disable=protected-access -> 6843 six.raise_from(core._status_to_exception(e.code, message), None) 6844 # pylint: enable=protected-access 6845 /opt/conda/lib/python3.7/site-packages/six.py in raise_from(value, from_value) InternalError: Blas xGEMM launch failed : a.shape=[1,1,10], b.shape=[1,1,384], m=10, n=384, k=1 [Op:Einsum] ```
<|||||>This error means that your GPU doesn't have enough RAM to run an einsum operation. But yes, your dataset looks better. Still, you are not using the tokenizer properly; use it like this:
```
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
tokenizer("hello")
```
to get tokenized data that looks like:
```
{'input_ids': [24717, 4, 3], 'token_type_ids': [0, 0, 2], 'attention_mask': [1, 1, 1]}
```
<|||||>I'm getting
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-4dd755fb81bd> in <module>
      1 from transformers import XLNetTokenizer
      2 tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
----> 3 tokenizer("hello")

TypeError: 'XLNetTokenizer' object is not callable
```
<|||||>Which version of transformers are you using?<|||||>It went back to '2.0.0'. I don't know why, but I'm trying to get 4.2.0 again.<|||||>Please stick to the 4.2.0 release :)<|||||>Do we need int64, or is int32 enough? I'm memory limited, so anything that allows me to use less memory would help.<|||||>int32 is enough.
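As a small illustration of the int32 point (an editorial sketch; the texts and labels below are placeholders, not from the thread), the tokenizer output can be cast down before the dataset is built:
```
import numpy as np
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")

# Placeholder inputs -- replace with your own texts and integer label ids.
texts = ["first ticket text", "second ticket text"]
labels = np.asarray([0, 1], dtype=np.int32)

# Tokenize the batch, then cast the id/mask arrays to int32 so the input tensors
# take half the memory of int64 arrays.
encodings = tokenizer(texts, padding=True, truncation=True, max_length=500)
features = {name: np.asarray(values, dtype=np.int32) for name, values in encodings.items()}
```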
<|||||>It is running again, but still not training. I'm using
```
x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=self.params['validation_split'])

# train
tokenized_inputs = xlnet_tokenizer(x_train.values.tolist(), padding=True, max_length=self.params['MAX_LENGTH'], truncation=True)
numpy_inputs = {x: np.array(tokenized_inputs[x]) for x in tokenized_inputs.keys()}
train_dataset = tf.data.Dataset.from_tensor_slices((numpy_inputs, y_train)).batch(1)

# val
tokenized_inputs = xlnet_tokenizer(x_val.values.tolist(), padding=True, max_length=self.params['MAX_LENGTH'], truncation=True)
numpy_inputs = {x: np.array(tokenized_inputs[x]) for x in tokenized_inputs.keys()}
val_dataset = tf.data.Dataset.from_tensor_slices((numpy_inputs, y_val)).batch(1)

print(train_dataset, val_dataset)

history = self.clf.fit(
    train_dataset,
    validation_data=val_dataset,
    batch_size=self.params['batchsize'],
    epochs=self.params['epochs'],
    verbose=1,
    max_queue_size=self.params['max_queue_size'],
    workers=self.params['workers'],
    use_multiprocessing=self.params['use_multiprocessing'],
    callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])
```
The shapes are
```
<BatchDataset shapes: ({input_ids: (None, 500), token_type_ids: (None, 500), attention_mask: (None, 500)}, (None,)), types: ({input_ids: tf.int64, token_type_ids: tf.int64, attention_mask: tf.int64}, tf.int16)>
<BatchDataset shapes: ({input_ids: (None, 500), token_type_ids: (None, 500), attention_mask: (None, 500)}, (None,)), types: ({input_ids: tf.int64, token_type_ids: tf.int64, attention_mask: tf.int64}, tf.int16)>
```
The results are:

epochs | loss | validation_loss | accuracy | validation_accuracy
-- | -- | -- | -- | --
0 | 6.980901 | 7.149147 | 0.015823 | 0.047779
1 | 7.054768 | 7.217787 | 0.015194 | 0.047779
2 | 7.099029 | 7.474302 | 0.014880 | 0.047779
3 | 7.145690 | 7.359528 | 0.015509 | 0.047779
4 | 7.183013 | 7.395905 | 0.013937 | 0.005448
5 | 7.210382 | 7.452353 | 0.016137 | 0.047779

<|||||>Ok, now your data looks correct. Can you try with just:
```
self.clf.fit(train_dataset, epochs=self.params['epochs'])
```
If it is still not working, try other models such as BERT and see if the problem persists.<|||||>It's not working with that, and it's not working with 'bert-base-uncased' either. This is the output for 'bert-base-uncased'.
```
Downloading: 100% 433/433 [00:02<00:00, 199B/s]
Downloading: 100% 232k/232k [00:00<00:00, 1.25MB/s]
Downloading: 100% 466k/466k [00:00<00:00, 1.32MB/s]
dataset <BatchDataset shapes: ({input_ids: (None, 500), token_type_ids: (None, 500), attention_mask: (None, 500)}, (None,)), types: ({input_ids: tf.int32, token_type_ids: tf.int32, attention_mask: tf.int32}, tf.int16)> <BatchDataset shapes: ({input_ids: (None, 500), token_type_ids: (None, 500), attention_mask: (None, 500)}, (None,)), types: ({input_ids: tf.int32, token_type_ids: tf.int32, attention_mask: tf.int32}, tf.int16)>
Downloading: 100% 536M/536M [00:11<00:00, 45.4MB/s]
All model checkpoint layers were used when initializing TFBertForSequenceClassification.
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch 1/40
WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f0c5c70d210>> and will run it as-is. Please report this to the TensorFlow team.
When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f0c5c70d210>> and will run it as-is. Please report this to the TensorFlow team.
When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
9543/9543 [==============================] - 1784s 187ms/step - loss: 6.9231 - accuracy: 0.0181
Epoch 2/40
9543/9543 [==============================] - 1451s 152ms/step - loss: 7.0684 - accuracy: 0.0153
Epoch 3/40
9543/9543 [==============================] - 1157s 121ms/step - loss: 7.1112 - accuracy: 0.0171
Epoch 4/40
9543/9543 [==============================] - 1160s 122ms/step - loss: 7.1445 - accuracy: 0.0170
Epoch 5/40
9543/9543 [==============================] - 1160s 122ms/step - loss: 7.1869 - accuracy: 0.0159
```
<|||||>Is there anything else that could cause the model not to learn?<|||||>The problem does not seem to come from the models; I would guess it comes from the way you build your data before creating the tf.data.Dataset, or from the data themselves (sometimes there is simply nothing to learn from the data; it happens). But I cannot be sure of anything without being able to reproduce the same behavior on my side, sorry.
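One cheap way to follow up on that suggestion (an editorial sketch, not a reply from the thread) is to pull a single batch out of the pipeline and check that the model receives what you expect. `train_dataset` is assumed here to be the batched `(features_dict, labels)` dataset built earlier:
```
import tensorflow as tf

# train_dataset: the batched (features_dict, labels) tf.data.Dataset built above (assumed).
for features, labels in train_dataset.take(1):
    for name, tensor in features.items():
        # Expect (batch_size, sequence_length) integer tensors for input_ids etc.
        print(name, tensor.shape, tensor.dtype)
    # Expect (batch_size,) integer class ids in the range [0, num_labels).
    print("labels", labels.shape, labels.dtype, labels.numpy())
```
If the label values fall outside `[0, num_labels)` or the shapes carry an unexpected extra dimension, the loss cannot behave sensibly even though training runs.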
<|||||>That's strange, because I can get 32% accuracy on validation data using a baseline model that finds the closest text and predicts its label. So it should be possible to learn from the data.<|||||>Unfortunately, I cannot really help without being able to reproduce the issue, I'm sorry :(<|||||>I'm trying with TFTrainer now
```
# Preprocess
train_dataset, val_dataset = self._preprocess(df, fit=True, retrain=False)

# fit model
training_args = TFTrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=1,   # batch size per device during training
    per_device_eval_batch_size=1,    # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)

with training_args.strategy.scope():
    self.clf = TFAutoModelForSequenceClassification.from_pretrained(self.params['transformer_model'], num_labels=self.n_label)

self.trainer = TFTrainer(
    model=self.clf,                  # the instantiated 🤗 Transformers model to be trained
    args=training_args,              # training arguments, defined above
    train_dataset=train_dataset,     # training dataset
    eval_dataset=val_dataset         # evaluation dataset
)

self.trainer.train()
```
I got this error:
```
TypeError                                 Traceback (most recent call last)
<ipython-input-7-6e1fecee0ced> in <module>
      1 st = time.time()
----> 2 model.fit(df)
      3 print('time', time.time()-st)

~/ticket-analysis-releasev3/ticket-analysis/src/model/xlnet/transformers_trainer.py in fit(self, df)
    176         )
    177
--> 178         self.trainer.train()
    179
    180     def predict(self, df=None):

~/transformers/src/transformers/trainer_tf.py in train(self)
    455         Train method to train the model.
    456         """
--> 457         train_ds = self.get_train_tfdataset()
    458
    459         if self.args.debug:

~/transformers/src/transformers/trainer_tf.py in get_train_tfdataset(self)
    136
    137         self.total_train_batch_size = self.args.train_batch_size * self.args.gradient_accumulation_steps
--> 138         self.num_train_examples = self.train_dataset.cardinality(self.train_dataset).numpy()
    139
    140         if self.num_train_examples < 0:

TypeError: cardinality() takes 1 positional argument but 2 were given
```
It seems that train_dataset is not correct. train_dataset and val_dataset are made as before
```
x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=self.params['validation_split'])

# train
tokenized_inputs = xlnet_tokenizer(x_train.values.tolist(), padding=True, max_length=self.params['MAX_LENGTH'], truncation=True)
numpy_inputs = {x: np.array(tokenized_inputs[x]) for x in tokenized_inputs.keys()}
train_dataset = tf.data.Dataset.from_tensor_slices((numpy_inputs, y_train)).batch(1)

# val
tokenized_inputs = xlnet_tokenizer(x_val.values.tolist(), padding=True, max_length=self.params['MAX_LENGTH'], truncation=True)
numpy_inputs = {x: np.array(tokenized_inputs[x]) for x in tokenized_inputs.keys()}
val_dataset = tf.data.Dataset.from_tensor_slices((numpy_inputs, y_val)).batch(1)
```
and have shape
```
dataset <BatchDataset shapes: ({input_ids: (None, 329), token_type_ids: (None, 329), attention_mask: (None, 329)}, (None,)), types: ({input_ids: tf.int32, token_type_ids: tf.int32, attention_mask: tf.int32}, tf.int16)>
```
<|||||>Can you use transformers==4.2.0 and test?<|||||>I'm using 4.2.0dev0 obtained from `pip install -e .` in the GitHub repo.
Should I do pip install transformers==4.2.0 instead?<|||||>I'm getting this error with transformers==4.2.0 ``` INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1') All model checkpoint layers were used when initializing TFBertForSequenceClassification. Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. WARNING:tensorflow:From /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Iterator.get_next_as_optional()` instead. WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fd43ea31d00>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fd43ea31d00>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. 
INFO:tensorflow:Error reported to Coordinator: in user code: /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:633 apply_gradients * gradients = self.training_step(features, labels, nb_instances_in_global_batch) /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:616 training_step * per_example_loss, _ = self.run_model(features, labels, True) /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:719 run_model * outputs = self.model(features, labels=labels, training=training)[:2] /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:1421 call * outputs = self.bert( /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:654 call * embedding_output = self.embeddings( /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:192 call * return self._embedding(input_ids, position_ids, token_type_ids, inputs_embeds, training=training) /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:221 _embedding * embeddings = self.LayerNorm(embeddings) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:985 __call__ ** outputs = call_fn(inputs, *args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:1205 call scale, offset = _broadcast(self.gamma), _broadcast(self.beta) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:1192 _broadcast return array_ops.reshape(v, broadcast_shape) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:195 reshape result = gen_array_ops.reshape(tensor, shape, name) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py:8234 reshape "Reshape", tensor=tensor, shape=shape, name=name) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper attrs=attr_protos, op_def=op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py:593 _create_op_internal compute_device) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:3485 _create_op_internal op_def=op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1975 __init__ control_input_ops, op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1815 _create_c_op raise ValueError(str(e)) ValueError: Cannot reshape a tensor with 768 elements to shape [1,1,329,1] (329 elements) for '{{node tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape/ReadVariableOp, tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape/shape)' with input shapes: [768], [4] and with input tensors computed as partial shapes: input[1] = [1,1,329,1]. 
Traceback (most recent call last): File "/home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/training/coordinator.py", line 297, in stop_on_exception yield File "/home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/distribute/mirrored_run.py", line 323, in run self.main_result = self.main_fn(*self.main_args, **self.main_kwargs) File "/home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py", line 258, in wrapper raise e.ag_error_metadata.to_exception(e) ValueError: in user code: /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:633 apply_gradients * gradients = self.training_step(features, labels, nb_instances_in_global_batch) /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:616 training_step * per_example_loss, _ = self.run_model(features, labels, True) /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:719 run_model * outputs = self.model(features, labels=labels, training=training)[:2] /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:1421 call * outputs = self.bert( /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:654 call * embedding_output = self.embeddings( /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:192 call * return self._embedding(input_ids, position_ids, token_type_ids, inputs_embeds, training=training) /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:221 _embedding * embeddings = self.LayerNorm(embeddings) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:985 __call__ ** outputs = call_fn(inputs, *args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:1205 call scale, offset = _broadcast(self.gamma), _broadcast(self.beta) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:1192 _broadcast return array_ops.reshape(v, broadcast_shape) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:195 reshape result = gen_array_ops.reshape(tensor, shape, name) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py:8234 reshape "Reshape", tensor=tensor, shape=shape, name=name) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper attrs=attr_protos, op_def=op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py:593 _create_op_internal compute_device) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:3485 _create_op_internal op_def=op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1975 __init__ control_input_ops, op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1815 _create_c_op raise ValueError(str(e)) ValueError: Cannot reshape a tensor with 768 elements to shape [1,1,329,1] (329 elements) for '{{node tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape/ReadVariableOp, tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape/shape)' with input shapes: [768], [4] and 
with input tensors computed as partial shapes: input[1] = [1,1,329,1]. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-7-6e1fecee0ced> in <module> 1 st = time.time() ----> 2 model.fit(df) 3 print('time', time.time()-st) ~/ticket-analysis-releasev3/ticket-analysis/src/model/xlnet/transformers_trainer.py in fit(self, df) 176 ) 177 --> 178 self.trainer.train() 179 180 def predict(self, df=None): /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py in train(self) 547 continue 548 --> 549 self.distributed_training_steps(batch) 550 551 self.global_step = iterations.numpy() ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 778 else: 779 compiler = "nonXla" --> 780 result = self._call(*args, **kwds) 781 782 new_tracing_count = self._get_tracing_count() ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 821 # This is the first call of __call__, so we have to initialize. 822 initializers = [] --> 823 self._initialize(args, kwds, add_initializers_to=initializers) 824 finally: 825 # At this point we know that the initialization is complete (or less ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 695 self._concrete_stateful_fn = ( 696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 697 *args, **kwds)) 698 699 def invalid_creator_scope(*unused_args, **unused_kwds): ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 2853 args, kwargs = None, None 2854 with self._lock: -> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs) 2856 return graph_function 2857 ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3211 3212 self._function_cache.missed.add(call_context_key) -> 3213 graph_function = self._create_graph_function(args, kwargs) 3214 self._function_cache.primary[cache_key] = graph_function 3215 return graph_function, args, kwargs ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3073 arg_names=arg_names, 3074 override_flat_arg_shapes=override_flat_arg_shapes, -> 3075 capture_by_value=self._capture_by_value), 3076 self._function_attributes, 3077 function_spec=self.function_spec, ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 984 _, original_func = tf_decorator.unwrap(python_func) 985 --> 986 func_outputs = python_func(*func_args, **func_kwargs) 987 988 # invariant: `func_outputs` contains only Tensors, CompositeTensors, ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 598 # __wrapped__ allows AutoGraph to swap in a converted function. We give 599 # the function a weak reference to itself to avoid a reference cycle. 
--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds) 601 weak_wrapped_fn = weakref.ref(wrapped_fn) 602 ~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in bound_method_wrapper(*args, **kwargs) 3733 # However, the replacer is still responsible for attaching self properly. 3734 # TODO(mdan): Is it possible to do it here instead? -> 3735 return wrapped_fn(*args, **kwargs) 3736 weak_bound_method_wrapper = weakref.ref(bound_method_wrapper) 3737 ~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 971 except Exception as e: # pylint:disable=broad-except 972 if hasattr(e, "ag_error_metadata"): --> 973 raise e.ag_error_metadata.to_exception(e) 974 else: 975 raise ValueError: in user code: /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:672 distributed_training_steps * self.args.strategy.run(self.apply_gradients, inputs) /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:633 apply_gradients * gradients = self.training_step(features, labels, nb_instances_in_global_batch) /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:616 training_step * per_example_loss, _ = self.run_model(features, labels, True) /opt/conda/lib/python3.7/site-packages/transformers/trainer_tf.py:719 run_model * outputs = self.model(features, labels=labels, training=training)[:2] /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:1421 call * outputs = self.bert( /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:654 call * embedding_output = self.embeddings( /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:192 call * return self._embedding(input_ids, position_ids, token_type_ids, inputs_embeds, training=training) /opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:221 _embedding * embeddings = self.LayerNorm(embeddings) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:985 __call__ ** outputs = call_fn(inputs, *args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:1205 call scale, offset = _broadcast(self.gamma), _broadcast(self.beta) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:1192 _broadcast return array_ops.reshape(v, broadcast_shape) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:195 reshape result = gen_array_ops.reshape(tensor, shape, name) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py:8234 reshape "Reshape", tensor=tensor, shape=shape, name=name) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper attrs=attr_protos, op_def=op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py:593 _create_op_internal compute_device) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:3485 _create_op_internal op_def=op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1975 __init__ control_input_ops, op_def) /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1815 _create_c_op raise ValueError(str(e)) ValueError: Cannot reshape a 
tensor with 768 elements to shape [1,1,329,1] (329 elements) for '{{node tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape/ReadVariableOp, tf_bert_for_sequence_classification/bert/embeddings/LayerNorm/Reshape/shape)' with input shapes: [768], [4] and with input tensors computed as partial shapes: input[1] = [1,1,329,1]. ```<|||||>Hello! This seems to be a different issue for a different model than XLNet, can you give a bit more context please? Such as how to properly reproduce the error you get? Thanks :)<|||||>Sorry, I have been very busy with other things so I put that on pause. I tried 'bert-base-uncased' rather than XLNet last time and used TFTrainer rather than .fit from TensorFlow because the model didn't seem to learn anything using only TensorFlow. Do you know if TFTrainer can help? And if so, how do I use it correctly?<|||||>You have multiple examples in the `examples` folder https://github.com/huggingface/transformers/tree/master/examples<|||||>Do I need to add positional encoding? Or is that done automatically by the tokenizer or the model? Are there different positional encodings I could use?<|||||>You are not forced to provide positional ids; the model creates them for you by default. If you want to use your own, you have to provide them to the model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
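For the positional-ids question above, here is a minimal sketch of passing explicit `position_ids` to a BERT model (the checkpoint name and input sentence are only placeholders, not taken from this thread); by default the model builds exactly these ids internally:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Position ids are created automatically.", return_tensors="pt")
seq_len = inputs["input_ids"].shape[1]

# Explicit position ids (0, 1, ..., seq_len - 1), equivalent to the model's default behaviour
position_ids = torch.arange(seq_len, dtype=torch.long).unsqueeze(0)

outputs = model(**inputs, position_ids=position_ids)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```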
transformers
9,405
closed
Retrieval Collapse when fine-tuning RAG
## Environment info - `transformers` version: latest production version - Platform: - Python version: 3.8 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help RAG: @patrickvonplaten, @lhoestq ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Fine-tune RAG on FEVER dataset 2. Notice that the same documents are retrieved every time We are trying to fine-tune RAG on the FEVER dataset. We follow the steps in the paper (we think) to a 'T' (RAG plus BART classifier head). However, when we fine-tune, retrieval "collapses" (a term used in the paper) so that all queries retrieve the same irrelevant documents. As a sanity check, we fine-tuned with a frozen retriever, and achieved similar results (72%) to what the paper achieves with frozen retriever. Thus, it appears that perhaps there is a bug in HF's implementation of the retriever (and its gradients) that is causing this. Alternatively, perhaps there is an obvious mistake in our config of the retriever. Do you have any insights into this? Thanks!
01-04-2021 21:14:38
01-04-2021 21:14:38
Maybe @patrickvonplaten or @lhoestq can chime in here.<|||||>It may very well be that this feature is not implemented. @ola13, @lhoestq do you have more insight here maybe?<|||||>The gradient does propagate to the question encoder. What configuration did you use ?<|||||>We used the default configuration and 'compressed' DPR. We are indeed seeing that the gradient is propagating to the question encoder; before training starts, a retriever retrieves related articles, but after training, the retriever universally retrieves the same few documents regardless of the query. That shows that the gradients are propagating, but not well.<|||||>Following up on this. Any thoughts? We are curious whether there is an issue with the implementation of DPR training in HF<|||||>Hi @JamesDeAntonis! Retrieval collapse is a problem we encountered in some setups and not necessarily caused by a bug in the retriever - basically what it means is that the passages retrieved at the beginning of the training are not useful enough so the models learns to ignore them. We experienced collapse when training a RAG-Sequence model for FEVER, but we were successful with RAG-Token and RAG-Classifier. An option to move forward here could be: - try training a RAG-Token generative model (it'd be generating the labels) - share the classification code, maybe there's some issue there? Are you performing marginalization on top of the BART classification head logprobs?<|||||>Thanks for the response! Good to hear that you were able to train successfully. When you trained, did you use the two-label or three-label dataset? (we are currently using the three-label) I'm curious whether the inconclusive samples are contributing to the collapse. * We are using the final hidden state of RAG-token as input into a classification head, and the model properly trains with the generator and classifier heads unfrozen (just the retriever is frozen in this case). This gets to 72% accuracy, same as the paper. I think this implies that the generator and classifier head are configured properly. 
* We are indeed marginalizing on top of the BART classification head logprobs<|||||>Here is our classification head, mostly taken from HF: ```{python} class BartClassificationHead(Module): """Head for sentence-level classification tasks.""" def __init__( self, input_dim: int, inner_dim: int, num_classes: int, pooler_dropout: float, **config_kwargs ): super().__init__(**config_kwargs) self.dense = Linear(input_dim, inner_dim) self.dropout = Dropout(p=pooler_dropout) self.out_proj = Linear(inner_dim, num_classes) def forward(self, hidden_states: torch.Tensor): hidden_states = self.dropout(hidden_states) hidden_states = self.dense(hidden_states) hidden_states = torch.tanh(hidden_states) hidden_states = self.dropout(hidden_states) hidden_states = self.out_proj(hidden_states) return hidden_states ```<|||||>..and here is the high-level model code: ```{python} # input_ids shape = (batch_size, 512) outputs = super().forward(input_ids=input_ids, attention_mask=attention_mask, **rag_kwargs) ### the following code is inspired by BartForSequenceClassification forward method # best practice for bart classification is to use the last hidden state # hidden.shape=(batch_size * n_documents, 300, 1024) hidden = outputs.generator_dec_hidden_states[-1] # last hidden state; #print (hidden) # eos_mask.shape = (batch_size * n_documents, 300) eos_mask = outputs.context_input_ids.eq(self.rag.generator.config.eos_token_id) if len(torch.unique(eos_mask.sum(1))) > 1: raise ValueError("All examples must have the same number of <eos> tokens.") # pass along the hidden state at the eos token # (batch_size * n_documents, 1024) sentence_representation = hidden[eos_mask, :].view(hidden.size(0), -1, hidden.size(-1))[:, -1, :] # (batch_size * n_documents, 1, 3) document_level_logits = self.classification_head(sentence_representation) # finally, marginalize across all the retrieved documents # (batch_size, 1, 3) logits = self.marginalize(document_level_logits, outputs.doc_scores) # (batch_size, 3) logits = logits.squeeze(1) ```<|||||>We were able to train RAG-Token and RAG-Classifier successfully both on 2-way and the 3-way variant of FEVER. One important thing to note though is that those were on our internal `fairseq` implementation. > try training a `RagToken` generative model (it'd be generating the labels) What I meant when suggesting to use `RagToken` would be to use it as-is, without a classification head - it might seem counterintuitive but the generative model is actually able to learn to generate the labels. As for the classification implementation - what you're proposing is quite different from our implementation in `fairseq`. What happens currently in your implementation is that you marginalize twice - once inside the forward pass on `RagToken`, and then again after applying your classification head. What we do instead is the following: 1) take the generator hidden states (not marginalized) 2) apply BART-like classification head on top of that 3) marginalize So basically - you don't want to just add a `BartClassificationHead` on top of `RagToken` hidden states. You want to implement something similar to [`BartForSequenceClassification`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1254-L1346) - a `RagForSequenceClassification` of sorts, doing what I outlined above - if you're interested in implementing that I think it'd be a great contribution to the repo, cc @patrickvonplaten :) Let me know if this makes sense! <|||||>Thanks for the advice! 
I have a few questions (1) what makes you think we marginalize twice? `do_marginalize=False` in `RagToken` by default. (2) What's the difference between a `BartClassificationHead` on top of `Bart`, and `BartClassificationHead` on top of `RagToken`? Isn't the generator of `RagToken` simply `Bart` already, so the final hidden state of `RagToken` == final hidden state of `Bart`? (3) Did you use 'adam' as your optimizer<|||||>And yes, I would be delighted to contribute it to the repo :)<|||||>Hi there! I'm a colleague of @JamesDeAntonis and I just wanted to chime in and clarify that we are using RagTokenForGeneration only as a neat wrapper to have a `self.rag` variable if we ever needed it. During training we take `outputs.generator_dec_hidden_states[-1]` as posted in the code above. Then we proceed with the 3 steps you listed above, essentially ending up with a SequenceClassification head from the rag generator hidden state outputs. We don't interact with the generation aspect at all, as you correctly identified.<|||||>> (1) what makes you think we marginalize twice? do_marginalize=False in RagToken by default. Hey James and @suriyakode, I didn't realize that was the default configuration, in such case indeed your implementation should be equivalent to what I was suggesting. And yes, we did use Adam as optimizer. In such case I don't see anything obvious unfortunately. What accuracy do you get with the collapse?<|||||>Bummer :/ With frozen retriever, we achieved 72% accuracy, then unfroze the retriever and the accuracy fell to 68% after collapse To be clear, you used `RagToken`, aka the current `RagTokenForGeneration` object that was implemented in HF? My original thought was that something could be wrong with the gradients or something in this specific implementation<|||||>@JamesDeAntonis all of my FEVER experiments were done on `fairseq`, but I have been able to replicate RAG paper results training HF `RagToken` models on Natural Questions, which gives me some level of confidence in the implementation.<|||||>Ok, thanks. What was your learning rate?<|||||>It was 1e-05 for training the classifier.<|||||>I also noticed an issue with the finetuning script. I ended up printing `doc_scores` while using the RAG finetuning scripts and saw that there was no gradient. Is there no gradient passed to the question encoder from the generator?<|||||>The `doc_scores` is supposed to have gradients. And the gradients are propagated to the weights of the RAG question encoder.<|||||>I cloned the transformers repo and pip installed transformers using the repo cloned to verify. I printed `doc_scores` in `RagModel` and got the following for what `doc_scores` was: ![Screen Shot 2021-01-29 at 3 20 34 PM](https://user-images.githubusercontent.com/18504534/106323530-69eaa300-622c-11eb-831c-19a2c5ba7f87.png) I don't see gradients in the tensor. <|||||>Is there a fix to the `doc_scores` gradient?<|||||>It looks like there's no grad_fn on your `doc_scores`. Are the weights of the question encoder updated during finetuning ?<|||||>Nope the weights aren't updated.<|||||>Though should there even be a `grad_fn` when it's running "Validation sanity check" (as pictured above)?<|||||>Good catch @dblakely indeed during validation it makes sense to not have any gradient<|||||>Thanks for the clarification! I continued printing after and got a `grad_fn` in the tensor.<|||||>@ola13 is there any code that we can see regarding the internal `fairseq` implementation of RAG and the training you did with it? 
I don't think there's any RAG in the public `fairseq` repo, but would be useful for me to be able to compare the two implementations<|||||>@ola13 I have a couple of questions on this (1) how many docs did you retrieve? (2) regarding the `RagForFeverClassification` idea, what is meant by "[we] first regenerate the claim" in section C of the RAG paper's appendix? Does that mean that we should only provide tokens from the claim as `decoder_input_ids` in the generator? Does it mean we should run multiple passes of the generator? Curious what the correct interpretation is<|||||>Hi @JamesDeAntonis - sorry just noticed your previous commend, the `fairseq` implementation is not available publicly. Regarding your latest questions: 1) we used 5 docs at training time and tried evaluation with up to 50 docs 2) This idea is adapted from BART - https://arxiv.org/pdf/1910.13461.pdf - section 3.3 or Figure 3.a - in case of RAG, we don't copy all contextualized input, just the claim tokens. Since BART model will copy all input to `decoder_input_ids` if you just leave it at `None` ([here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1135-L1140)) you can try adding extra logic to only pass claim tokens to the BART decoder with RAG (that may mean explicitly setting `decoder_input_ids` like you mention). This does not require multiple runs of the generator.<|||||>@JamesDeAntonis if you remember, could you please let me know what was the value of starting value of training loss, when using RAG -token or sequence models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@JamesDeAntonis Hello, do you have address this problem, or any thoughts? I think I have encounter a similar problem, so I would like to know how you deal with it afterwards?
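For readers following the classification discussion in this thread, here is a minimal, self-contained sketch of the document-level marginalization being described (the function name and toy shapes are illustrative only; this is not the `transformers` or `fairseq` code):

```python
import torch
import torch.nn.functional as F

def marginalize_class_logits(doc_class_logits, doc_scores):
    """Combine per-document class logits with retrieval scores, RAG-style.

    doc_class_logits: (batch_size * n_docs, num_classes) -- classification head output per retrieved doc
    doc_scores:       (batch_size, n_docs)               -- retriever scores (must keep their grad_fn)
    returns:          (batch_size, num_classes) log-probabilities marginalized over documents
    """
    batch_size, n_docs = doc_scores.shape
    doc_logprobs = F.log_softmax(doc_scores, dim=-1)          # log p(z|x)
    class_logprobs = F.log_softmax(doc_class_logits, dim=-1)  # log p(y|x,z)
    class_logprobs = class_logprobs.view(batch_size, n_docs, -1)
    # marginalize over the retrieved documents in log space
    return torch.logsumexp(class_logprobs + doc_logprobs.unsqueeze(-1), dim=1)

# toy usage: 2 examples, 5 retrieved documents, 3 FEVER labels
logits = torch.randn(2 * 5, 3, requires_grad=True)
scores = torch.randn(2, 5, requires_grad=True)
print(marginalize_class_logits(logits, scores).shape)  # torch.Size([2, 3])
```

Working in log space with `logsumexp` keeps the marginalization numerically stable and lets the gradient flow back through `doc_scores`, and from there into the question encoder.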
transformers
9,404
closed
Add head_mask/decoder_head_mask for BART
Description: This PR adds `head_mask` and `decoder_head_mask` to the BART PyTorch implementation, following the BERT implementation. Motivation: According to HuggingFace's website, "There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”)." This PR makes it possible to mask attention heads in encoder and decoder models exactly as for BERT, and thus creates an opportunity to study the importance of attention heads in encoder-decoder BERT-like models. Reviewer: @patrickvonplaten
01-04-2021 20:24:25
01-04-2021 20:24:25
Dear @patrickvonplaten and the rest of HuggingFace group. I implemented the concept of `head_mask` from BERT into BART so that the internal of decoder-encoder-like models can be studied as well. However, as this is my very first attempt to contribute to such a large-scale open-source project, I have been a bit struggling to pass the tests. Would you be, please, able to guide me what everything needs to be done in this case in order to achieve a valid pull request? Thank you very much for all your time in advance. I really do appreciate it.<|||||>Hi @stancld - thanks a lot for pinging me! I'm happy to help you here :-) I think you're PR is a nice addition. Sadly, we did many changes to Bart recently (see https://github.com/huggingface/transformers/pull/9343) so that you'll probably have to rebase your PR to the current version of master. <|||||>After that I'm happy to get the tests passing together!<|||||>Hi @patrickvonplaten, the model should be rebased according to the commit #9343 at this moment. :) I'll be more than happy to finish this PR with you. Thanks a lot in advance :) <|||||>@stancld, please do let me know if you're stuck and need help or if your PR is ready for review, just ping me here :-)<|||||>Hi @patrickvonplaten, I would like to bring an update after the weekend off. First of all, I would like to apologise for a bit of messy PR, as I was initially struggling with on my local (I'll do better next time). Regarding this PR: To pass all the tests, `head_mask` and `decoder_head_mask` is now implemented for the following PyTorch BART-based models: - **BART**, - **MBart**, - **Blenderbot**, - **BlenderbotSmall**, - **Marian**, - **Pegasus**. Besides, I think some additional tests for head_mask for these models might be desired to implement, but I leave this decision up to you. In any case, please, let me know what it needs to do to complete this PR. <|||||>@patrickvonplaten I think this PR is ready for review. I've currently resolved one conflict arose last night after a commit to `master` and now I've been tracking changes on my local and everything still seems to be working.<|||||>Hey @stancld, This is a super nice PR. It's very clean and that without any help - awesome! I think there are 3 things we should change/add: 1) I think we should change the order of the forward args of all `...Model` and `...ForConditionalGeneration` as explained above. This a) means that there is no breaking change in the way Bart is used with torchscript and it's the better option IMO as well since the first 4 args should always be `input_ids, attention_mask, decoder_input_ids, decoder_attention_mask` for EncDec models 2) Let's try to remove all "hard-coded" model names in the common tests. I've commented above. We don't really need to test torchscript with head_mask and for the signature it'd be better to change it according to 1) 3) It would be awesome if you could a `if model.config.is_encoder_decoder` part to the `test_headmasking` test in `test_modeling_common.py` that tests headmasking correctly for Seq2Seq models. To enable this test for all Bart-like models you'll have to set `test_head_masking` to True in `BartModelTest` and others. One thing we'll have to adapt in the test is we should change the line: ``` attentions = outputs[-1] ``` to ```python attentions = outputs.attetions ``` for the `model.config.is_encoder_decoder is False` case and to ```python encoder_attentions = outputs.encoder_attentions decoder_attentions = outputs.decoder_attentions ``` for the other case. 
I can also help you with 3) in case you're stuck. Really impressed by how clean the PR is! Think there is not much left to do. 1) and 2) are very easy changes and 3) will require a bit more time, but should be fine as well.<|||||>Hey @patrickvonplaten, thanks a lot for your thorough feedback. I believe to come back later today with a new commit fixing the listed issues :)<|||||>Hey @patrickvonplaten, this PR is again ready for review after making some changes according to your notes above. The one problem at this moment is that BART-like models do not satisfy one condition in `test_headmasking`: ``` self.assertNotEqual(attentions[1][..., 0, :, :].flatten().sum().item(), 0.0). ``` I am not sure whether the formula for masking attention heads (in BART-like models) is implemented correctly. Now, if `head_mask` in the test case is specified as ``` head_mask = torch.ones( self.model_tester.num_hidden_layers, self.model_tester.num_attention_heads, device=torch_device, ) head_mask[0, 0] = 0 head_mask[-1, :-1] = 0 ``` then `outputs.encoder_attentions[1][..., :, :, :]` or `outputs.decoder_attentions[1][..., :, :, :]` equals tensor of `0.0` for all examples over all heads but the last one. This is not the case, however, for **non**-encoder-decoder models with `attentions[1][..., :, :, :]`. Do you have any idea where the problem can be? Anyway, I hope we will solve this issue and merge this PR. :) <|||||>I made some mistakes during updating my branch, which resulted in the problem with tracking files not edited actually by myself. I find this quite inconvenient and I have failed to repair this issue so far. Therefore, I've created a new (clean) branch, which might be found here https://github.com/stancld/transformers/tree/head_mask_for_bart_new. If you, @patrickvonplaten, were okay with that, I would close this PR (after resolving those rather minor issues raised in our discussion above) and create a new one from the new branch referenced above to make everything nice and clean before an eventual merge. <|||||>@stancld absolutely! Feel free to close this PR and open a new one :-) This happens to me all the time as well <|||||>We can just link this closed PR to the new PR to have a reference to the discussion we had<|||||>@patrickvonplaten - Great, you can find a newly open PR at #9569 :)
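For readers of this thread, a short usage sketch of the feature discussed above (assuming a `transformers` version that already ships BART `head_mask`/`decoder_head_mask` support via the follow-up PR; the checkpoint name is only an example):

```python
import torch
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartModel.from_pretrained("facebook/bart-base")

inputs = tokenizer("Head masking lets you study individual attention heads.", return_tensors="pt")

# One mask entry per layer and head: 1.0 keeps a head active, 0.0 silences it for this forward pass
head_mask = torch.ones(model.config.encoder_layers, model.config.encoder_attention_heads)
head_mask[0, 0] = 0.0  # switch off head 0 in the first encoder layer
decoder_head_mask = torch.ones(model.config.decoder_layers, model.config.decoder_attention_heads)

outputs = model(**inputs, head_mask=head_mask, decoder_head_mask=decoder_head_mask)
print(outputs.last_hidden_state.shape)
```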
transformers
9,403
closed
added head_mask/decoder_head_mask for BART
Description: This PR adds `head_mask` and `decoder_head_mask` to the BART PyTorch implementation, following the BERT implementation. Motivation: According to HuggingFace's website, "There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”)." This PR makes it possible to mask attention heads in encoder and decoder models exactly as for BERT, and thus creates an opportunity to study the importance of attention heads in encoder-decoder BERT-like models. Reviewer: @patrickvonplaten
01-04-2021 20:15:57
01-04-2021 20:15:57
transformers
9,402
closed
Bump notebook from 6.1.4 to 6.1.5 in /examples/research_projects/lxmert
Bumps [notebook](https://github.com/jupyter/jupyterhub) from 6.1.4 to 6.1.5. <details> <summary>Commits</summary> <ul> <li>See full diff in <a href="https://github.com/jupyter/jupyterhub/commits">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=notebook&package-manager=pip&previous-version=6.1.4&new-version=6.1.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
01-04-2021 14:59:44
01-04-2021 14:59:44
Thanks dependabot!
transformers
9,401
closed
Put back LXMert example
# What does this PR do? During the example reorganization, LXMert seems to have slipped into the cracks and got accidentally deleted. This PR puts it back. Fixes #9309
01-04-2021 14:28:17
01-04-2021 14:28:17
Anyway, I suspect LXMERT is currently not supported for customized/personal datasets (nlp + images; the image features are harder to prepare). See the issues about feature extraction in [Lxmert](https://github.com/airsplay/lxmert#faster-r-cnn-feature-extraction), for example [issue#79](https://github.com/airsplay/lxmert/issues/79) and [issue#86](https://github.com/airsplay/lxmert/issues/86).
transformers
9,400
closed
Generate Function - Manual decoder_input_ids Error (Bart, Pegasus)
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Google Colab - Python version: 3.6.9 ### Who can help @patrickvonplaten ## To reproduce Link to the forum discussion: [https://discuss.huggingface.co/t/rewriting-generate-function-for-manual-decoder-input/3034/3](https://discuss.huggingface.co/t/rewriting-generate-function-for-manual-decoder-input/3034/3) Steps to reproduce the behavior: ```python !pip install transformers==4.1.1 !pip install sentencepiece from transformers import BartTokenizer, BartForConditionalGeneration tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = BartForConditionalGeneration.from_pretrained('facebook/bart-base') # OR ''' from transformers import PegasusTokenizer, PegasusForConditionalGeneration tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large') model = PegasusForConditionalGeneration.from_pretrained('google/pegasus-large') ''' text = "this is a sample text" input_ids = tokenizer(text, return_tensors="pt").input_ids decoder_input_ids = tokenizer("<s> Anatomy is", return_tensors="pt", add_special_tokens=False).input_ids output = model.generate(input_ids, decoder_input_ids=decoder_input_ids, num_beams=4, num_return_sequences=4) print("With decoder_input_ids num_beams=4", tokenizer.batch_decode(output, skip_special_tokens=True)) output = model.generate(input_ids, num_beams=4, num_return_sequences=4) print("Without decoder_input_ids num_beams=4", tokenizer.batch_decode(output, skip_special_tokens=True)) ``` Error: ``` TypeError Traceback (most recent call last) <ipython-input-38-271e60997201> in <module>() 2 decoder_input_ids = tokenizer("<s> Anatomy is", return_tensors="pt", add_special_tokens=False).input_ids 3 ----> 4 output = model.generate(input_ids, decoder_input_ids=decoder_input_ids, num_beams=4, num_return_sequences=4) 5 6 print("With decoder_input_ids num_beams=4", tokenizer.batch_decode(output, skip_special_tokens=True)) 2 frames /usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 24 def decorate_context(*args, **kwargs): 25 with self.__class__(): ---> 26 return func(*args, **kwargs) 27 return cast(F, decorate_context) 28 /usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, **model_kwargs) 610 pad_token_id=pad_token_id, 611 eos_token_id=eos_token_id, --> 612 **model_kwargs, 613 ) 614 /usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py in beam_search(self, input_ids, beam_scorer, logits_processor, max_length, pad_token_id, eos_token_id, **model_kwargs) 1041 1042 while cur_len < max_length: -> 1043 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) 1044 1045 outputs = self(**model_inputs, return_dict=True) TypeError: prepare_inputs_for_generation() got multiple values for argument 'decoder_input_ids' ```
01-04-2021 13:34:05
01-04-2021 13:34:05
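On more recent `transformers` releases, the same call pattern is expected to work: `generate()` accepts `decoder_input_ids` for encoder-decoder models and continues generation from that prefix. A sketch mirroring the reproduction above (not verified against any specific version):

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

input_ids = tokenizer("this is a sample text", return_tensors="pt").input_ids
# Seed the decoder with a prefix; generation continues from these tokens
decoder_input_ids = tokenizer("<s> Anatomy is", return_tensors="pt", add_special_tokens=False).input_ids

output = model.generate(input_ids, decoder_input_ids=decoder_input_ids, num_beams=4, num_return_sequences=4)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```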
transformers
9,399
closed
How to use Longformer for summarization
Hi - do you have sample code for how to use Longformer for summarization tasks?
01-04-2021 13:27:21
01-04-2021 13:27:21
Perhaps this will help: https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16<|||||>`LongformerEncoderDecoder` will be added to the lib once #9278 is merged! It can be used for summarization or any other seq2seq task.<|||||>@patil-suraj, @christianversloot I tried this model [longformer2roberta](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16). It is actually giving better summary than Pegasus (reddit-tifu). I will be waiting for LongformerEncoderDecoder to be added to the library. I just have one question - how many input tokens longformer2roberta model supports? I believe it's 2048. Could you please confirm,<|||||>`longformer2roberta` should support 4096 tokens. And LED is now on master!<|||||>@patil-suraj Awesome! I am trying LED but getting below error. Could you please take a look? ``` from transformers import LEDForConditionalGeneration, LEDTokenizer model_name = 'allenai/led-base-16384' tokenizer = LEDTokenizer.from_pretrained(model_name) model = LEDTokenizer.from_pretrained(model_name) batch = tokenizer.prepare_seq2seq_batch(article, truncation=True, padding='longest', return_tensors="pt").to(torch_device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-33-d4827d7b5770> in <module>() 5 model = LEDTokenizer.from_pretrained(model_name) 6 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors="pt").to(torch_device) ----> 7 translated = model.generate(**batch) 8 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) AttributeError: 'LEDTokenizer' object has no attribute 'generate'<|||||>```python model = LEDTokenizer.from_pretrained(model_name) ``` here you are assigning tokenizer as the model, it should be ```python model = LEDForConditionalGeneration.from_pretrained(model_name) ```<|||||>@patil-suraj Thank you! My bad. I have corrected the typo. Below code seems to be working fine. ``` from transformers import LEDForConditionalGeneration, LEDTokenizer model_name = 'allenai/led-base-16384' tokenizer = LEDTokenizer.from_pretrained(model_name) model = LEDForConditionalGeneration.from_pretrained(model_name) input_ids = tokenizer(src_text, return_tensors="pt").input_ids output_ids = model.generate(input_ids) output = tokenizer.decode(output_ids[0], skip_special_tokens=True) ``` I am getting very short summary of just 20 tokens from the above code. So I was looking for the default values for below parameters for LED model. I could not find it in [config.json](https://huggingface.co/allenai/led-base-16384/blob/main/config.json) file. For the **longformer2roberta** model I found these values in [config.json](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16/blob/main/config.json). Could you please let me know where can I find these values for LED model. - num_beams - no_repeat_ngram_size, - early_stopping, - length_penalty, - min_length, - max_length<|||||>@patil-suraj Any inputs/suggestions here?<|||||>Hi, I have a question about the `LEDForConditionalGeneration` forward args. The `decoder_input_ids` has a comment that `decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) – Provide for translation and summarization training. 
By default, the model will create this tensor by shifting the input_ids to the right, following the paper.`. From the forward method in `LEDForConditionalGeneration`, I can see that when `decoder_input_ids` is not assigned in the forward call of the `LEDForConditionalGeneration` object, the `decoder_input_ids` will be generated by [shifting the `labels` value one token to the right in the forward method](https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337). So my question is: if I want to explicitly pass the `decoder_input_ids` to the forward method, do I need to explicitly shift it one token, as the [code](https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337) shows, before the forward pass? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
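Regarding the unanswered question above about passing `decoder_input_ids` explicitly: a small sketch of the usual convention (the helper below is written out by hand so the snippet stays self-contained; it mirrors, but is not, the library's internal `shift_tokens_right`):

```python
import torch

def shift_tokens_right(labels, pad_token_id, decoder_start_token_id):
    """Build decoder inputs from labels: prepend the start token and drop the last label token."""
    decoder_input_ids = labels.new_zeros(labels.shape)
    decoder_input_ids[:, 1:] = labels[:, :-1].clone()
    decoder_input_ids[:, 0] = decoder_start_token_id
    # labels padded with -100 (ignored by the loss) must become real pad tokens on the input side
    decoder_input_ids.masked_fill_(decoder_input_ids == -100, pad_token_id)
    return decoder_input_ids

labels = torch.tensor([[42, 43, 44, -100]])
print(shift_tokens_right(labels, pad_token_id=1, decoder_start_token_id=0))
# tensor([[ 0, 42, 43, 44]])
```

So if you pass `decoder_input_ids` yourself, they should already be the shifted sequence; if you only pass `labels`, the model performs this shift internally, as the linked code shows.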
transformers
9,398
closed
trainer.predict() returns different values from model.logits
Hi dear authors! When I was using my **fine-tuned bert model** to do the sequence classification task, I found the values returned by `trainer.predict(test_dataset)` were very different from what I got from `model(**test_encodings)`. I did not find messages describing what the `predictions` actually are in the documents, so I'm not seeing what `trainer.predict()` returns. Could you please help me explain a bit more? Here are some of my codes - predicts with `model(**test_encodings)` ```python def _predict_with_np(text_a, text_b, tokenizer, model): scores = [0, 1] encoded_input = tokenizer((text_a, text_b), truncation=True, padding=True, return_tensors="pt") output = model(**encoded_input) logit = output.logits[0] softmax_score = F.softmax(logit,dim=-1) score = scores[torch.argmax(softmax_score)] return score, logit, softmax_score def predict_with_np(params, printing=False, NUM=0): texts = load_data() print("===> Loading fine-tuned model and tokenizer...") model = BertForSequenceClassification.from_pretrained(TUNED_MODEL) tokenizer = BertTokenizer.from_pretrained(TUNED_TOKENIZER) print("===> Classifying...") for i, text in enumerate(texts): if NUM > 0 and i > NUM: break text_a, text_b = text text_a = text_a.strip() text_b = text_b.strip() score, logit, softmax_score = _predict_with_np(text_a, text_b, tokenizer, model) ``` - predicts with `trainer.predict()` ```python def predict_with_hf(params): test_dataset = load_pt_data() print("===> Loading Model and Training Arguments...") model = BertForSequenceClassification.from_pretrained(TUNED_MODEL) training_args = TrainingArguments( run_name=params.run_name, disable_tqdm=True, fp16=params.fp16, gradient_accumulation_steps=params.gradient_accumulation_steps, do_train=False, do_eval=False, do_predict=True, output_dir=params.output_dir, ) print("===> Predicting...") trainer = Trainer( model=model, args=training_args, eval_dataset=test_dataset ) results = {} logger.info("*** Predict ***") result = trainer.predict(test_dataset) output_pred_file = os.path.join(training_args.output_dir, "pred_results.txt") with open(output_pred_file, "w") as writer: logger.info("***** Pred results *****") for pred in result.predictions: logger.info(" predictions = %s", pred) writer.write("predictions = %s\n" % pred) ``` With these two versions, I got outputs like this: ```json text_a: "Don't worry. I'll take care of it." text_b: "Why so long?" score: 0 logits: [0.3749077320098877, -0.15262120962142944] softmax: [0.6289066076278687, 0.37109342217445374] predictions: [-0.04395686 0.29134133] ``` I've read several lines of code inside `src/trainer.py` so I guess predictions are supposed to be logits. But actually, they are away different from the logits I have here. Am I calculating things in the wrong way, or are the predictions designed to be something else? Thanks for reading my long questions!
01-04-2021 13:13:36
01-04-2021 13:13:36
Hi, I have the same problem different results using `model()` vs `trainer.predict()`.<|||||>> Hi, > I have the same problem different results using `model()` vs `trainer.predict()`. Thanks for replying. I assumed this feature was made for other usages so I ended up using `model()`. Anyway, I would like to seek an answer for sure. Reading the new issue's template I guess @sgugger could help us here. I'm sorry to disturb you, could you please give some details on `Trainer.predict()` here?<|||||>I solved it by returning to 4.0.1, here both methods return the same results. But I still got a problem, before saving the model (so just at the end of the finetuning) with `TrainingArguments(..., load_best_model_at_end=True)` the `trainer.predict()` still differs from `model()`. But after reloading the model with `from_pretrained` with transformers==4.0.1 both methods are equal. So I guess the `trainer.predict()` does really load the best model at the end of the training.<|||||>I'm unsure of what the problem is since the code you indicate is not reproducible (what is `model`, `load_pt_data()` etc.). On my side, using an installation form source on current master, here is what I get. First I instantiate a model and tokenizer and preprocess some data (with padding to be able to batch): ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased") texts = ["Hello there!", "This is another text"] tokenized_texts = tokenizer(texts, padding=True) ``` Then I create a Dataset class to be able to feed my `tokenized_texts` to `Trainer`: ``` class SimpleDataset: def __init__(self, tokenized_texts): self.tokenized_texts = tokenized_texts def __len__(self): return len(self.tokenized_texts["input_ids"]) def __getitem__(self, idx): return {k: v[idx] for k, v in self.tokenized_texts.items()} test_dataset = SimpleDataset(tokenized_texts) ``` Then predicting through `Trainer` like this: ``` trainer = Trainer(model=model) predictions = trainer.predict(test_dataset) predictions.predictions ``` returns this: ``` array([[-0.68212456, 0.07081275], [-0.59134895, 0.16735002]], dtype=float32) ``` and predicting directly with the model: ``` import torch model.eval() pt_inputs = {k: torch.tensor(v).to(trainer.args.device) for k, v in tokenized_texts.items()} with torch.no_grad(): output = model(**pt_inputs) output.logits.cpu().numpy() ``` gives me the exact same result. Make sure that you preprocess your inputs the same way in both instances, and when using the model directly, that it is in evaluation mode.<|||||>> pt_inputs = {k: torch.tensor(v).to(trainer.args.device) for k, v in tokenized_texts.items()} > with torch.no_grad(): > output = model(**pt_inputs) > output.logits.cpu().numpy() Hi, thanks for your answers! I think the reason I'm having different results is I did not use `model.eval()` but I only had <1000 lines of test data to predict. Thank you so much! :)<|||||>> I'm unsure of what the problem is since the code you indicate is not reproducible (what is `model`, `load_pt_data()` etc.). On my side, using an installation form source on current master, here is what I get. 
First I instantiate a model and tokenizer and preprocess some data (with padding to be able to batch): > > ``` > from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer > > tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") > model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased") > texts = ["Hello there!", "This is another text"] > tokenized_texts = tokenizer(texts, padding=True) > ``` > > Then I create a Dataset class to be able to feed my `tokenized_texts` to `Trainer`: > > ``` > class SimpleDataset: > def __init__(self, tokenized_texts): > self.tokenized_texts = tokenized_texts > > def __len__(self): > return len(self.tokenized_texts["input_ids"]) > > def __getitem__(self, idx): > return {k: v[idx] for k, v in self.tokenized_texts.items()} > > test_dataset = SimpleDataset(tokenized_texts) > ``` > > Then predicting through `Trainer` like this: > > ``` > trainer = Trainer(model=model) > predictions = trainer.predict(test_dataset) > predictions.predictions > ``` > > returns this: > > ``` > array([[-0.68212456, 0.07081275], > [-0.59134895, 0.16735002]], dtype=float32) > ``` > > and predicting directly with the model: > > ``` > import torch > > model.eval() > pt_inputs = {k: torch.tensor(v).to(trainer.args.device) for k, v in tokenized_texts.items()} > with torch.no_grad(): > output = model(**pt_inputs) > output.logits.cpu().numpy() > ``` > > gives me the exact same result. > > Make sure that you preprocess your inputs the same way in both instances, and when using the model directly, that it is in evaluation mode. I have a more question that how can I load the model without using "from_pretrained" ![image](https://user-images.githubusercontent.com/36092323/114656624-9d5fa880-9d18-11eb-9eb0-c33f3d175974.png) Because I have some custom for the the model, nn.Model, it does not inherent from "PreTrainedModel", so I can't load it using "from_pretrained"
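For the last question in this thread (loading a model that is a plain `nn.Module` rather than a `PreTrainedModel`), a minimal sketch using standard PyTorch checkpointing (the class name, checkpoint name, and file name are made up for illustration):

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class CustomClassifier(nn.Module):
    def __init__(self, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-cased")
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.head(hidden[:, 0])  # classify from the [CLS] position

model = CustomClassifier()
torch.save(model.state_dict(), "custom_model.pt")        # save after training

reloaded = CustomClassifier()                             # rebuild the architecture...
reloaded.load_state_dict(torch.load("custom_model.pt"))   # ...then restore the weights
reloaded.eval()
```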
transformers
9,397
closed
CUDA runtime error during benchmarking
Running `transformers/examples/benchmarking/run_benchmark.py` with any type of model, with multi-processing gives the following error: ``` 1 / 1 THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCGeneral.cpp line=54 error=3 : initialization error cuda runtime error (3) : initialization error at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCGeneral.cpp:54 cuda runtime error (3) : initialization error at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCGeneral.cpp:54 Traceback (most recent call last): File "run_benchmark.py", line 47, in <module> main() File "run_benchmark.py", line 43, in main benchmark.run() File "/home/dock/.conda/envs/torch/lib/python3.7/site-packages/transformers/benchmark/benchmark_utils.py", line 709, in run memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length) ValueError: too many values to unpack (expected 2) ``` It looks like `self.inference_memory` function is returning the string `N/A`. Everything works fine when `no_multi_processing` option is selected. ## Environment info - `transformers` version: 4.1.1 - Platform: Ubuntu 18.04.1 LTS - Python version: 3.7.9 - PyTorch version (GPU?): 1.2.0 with GPU support - Tensorflow version (GPU?): None - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: I guess so ### Who can help @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): GPT2, DistilGPT2 The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below)
01-04-2021 11:19:37
01-04-2021 11:19:37
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
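For reference, a minimal sketch of running the same benchmark programmatically (argument values are illustrative; `PyTorchBenchmark`/`PyTorchBenchmarkArguments` are the documented entry points, while the exact spelling of the option that disables multiprocessing -- `no_multi_processing` in the report above -- may differ between versions):

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# If the multi-processing crash above occurs, disabling multiprocessing (the
# `no_multi_processing` option mentioned in the report) is the known workaround.
args = PyTorchBenchmarkArguments(
    models=["distilgpt2"],
    batch_sizes=[1],
    sequence_lengths=[64],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
print(results)
```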
transformers
9,396
closed
run_glue.py with XLNet model on CoLA dataset reaches 0 accuracy
## Environment info - `transformers` version: 4.1.1 - Platform: Linux - Python version: 3.6.10 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @TevenLeScao ## Information Model I am using: XLNet The problem arises when using: * The official example scripts of `run_glue.py` The tasks I am working on is: * an official GLUE: CoLA ## To reproduce Steps to reproduce the behavior: I am using the "run_glue" cmd as described here: https://github.com/huggingface/transformers/tree/master/examples/text-classification `python run_glue.py --task_name cola --model_name_or_path xlnet-base-cased --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --do_predict --overwrite_output_dir` That's the results I get: ``` [p]$ cat res_cola_xlnet.txt eval_loss = 0.612945020198822 eval_matthews_correlation = 0.0 epoch = 3.0 ``` ## Expected behavior Results > 0
01-04-2021 10:05:38
01-04-2021 10:05:38
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>Hi, did you fix this issue? I also got this problem when using BART-large.
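As context for the result above: an `eval_matthews_correlation` of exactly 0.0 usually means the classifier has collapsed to predicting a single class for every example, which scikit-learn's metric scores as 0 by definition:

```python
from sklearn.metrics import matthews_corrcoef

labels = [0, 1, 1, 0, 1, 0, 1, 1]
collapsed_preds = [1] * len(labels)   # model always predicts "acceptable"
print(matthews_corrcoef(labels, collapsed_preds))  # 0.0
```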
transformers
9,395
closed
wrong output for Bert-larged-uncased
I am running BERT with the PyTorch version:

```python
from transformers import BertConfig, BertTokenizer, BertModel

config_class, model_class, tokenizer_class = (BertConfig, BertModel, BertTokenizer)
transformer_config = config_class.from_pretrained(pretrained_model + "/bert_config.json")
tokenizer = tokenizer_class.from_pretrained(pretrained_model, do_lower_case=True)
transformer_model = model_class.from_pretrained(pretrained_model, config=transformer_config)

last_hidden_states, pooling_output = transformer_model(input_ids_tensor, attention_mask_tensor, segment_ids_tensor)
```

The first output of `transformer_model` -- `last_hidden_states` -- should be a tensor, but the result is 'last_hidden_state', which means it is a string object. What's wrong with it?
01-04-2021 06:59:01
01-04-2021 06:59:01
Hi @Twsschx In newer versions of transformers, PyTorch models have outputs that are instances of subclasses of `ModelOutput`. To access the output as a tuple, you can use slicing, or to get a particular tensor, just provide its key to the output class, e.g. ```python3 last_hidden_states, pooling_output = transformer_model(input_ids_tensor, attention_mask_tensor, segment_ids_tensor)[:] # slice ``` or ```python3 output = transformer_model(input_ids_tensor, attention_mask_tensor, segment_ids_tensor) last_hidden_state = output["last_hidden_state"] pooling_output = output["pooler_output"] ``` And if you want the model to output a tuple like in previous versions, then pass `return_dict=False` to `forward` ```python3 last_hidden_states, pooling_output = transformer_model(**inputs, return_dict=False) ``` You can find more about output classes in this [doc](https://huggingface.co/transformers/main_classes/output.html).<|||||>Thanks. It helps me a lot!<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,394
closed
Simplify marian distillation script
Simplify the Marian distillation script by adding a suggested MAX_LEN and using finetune.py directly.
01-03-2021 18:17:55
01-03-2021 18:17:55
transformers
9,393
closed
`run_glue.py` fails when using my own dataset of regression task
## Environment info - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help albert, bert, GPT2, XLM: @LysandreJik examples/token-classification: @stefan-it (Excuse me if I'm asking someone who is not in charge. I couldn't find `examples/text-classification` in the list.) ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: It seems that an error occurs when I use `run_glue.py` with my own dataset of regression task. ``` sh CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --model_name_or_path bert-base-cased \ --train_file ****.csv \ --validation_file ****.csv \ --do_train \ --do_eval \ --max_seq_length 64 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs **** \ --logging_steps **** \ --save_steps **** \ --save_total_limit **** \ --output_dir ****/v4.1.1/**** ``` An example of the train/valid CSV file is as below: ``` csv id,label,sentence1 __id_as_string__,3.0,__string__ ``` Sorry for the lack of details. I use this heavily masked notation to take into account the licensing of the dataset. You can see that the columns contain `label` and `sentence1`, and the value of `label` is `float`. I confirmed that `is_regression` is `True` in this case. The error message says: ``` sh Traceback (most recent call last): File "run_glue.py", line 419, in <module> main() File "run_glue.py", line 293, in main label_to_id = {v: i for i, v in enumerate(label_list)} UnboundLocalError: local variable 'label_list' referenced before assignment ``` It seems that the case `data_args.task_name is None` and `is_regression is True` has not been considered in the example. Excuse me if I misunderstand something. https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py#L277 ``` if ( model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id and data_args.task_name is not None and is_regression ): # Some have all caps in their config, some don't. label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()} if list(sorted(label_name_to_id.keys())) == list(sorted(label_list)): label_to_id = {i: label_name_to_id[label_list[i]] for i in range(num_labels)} else: logger.warn( "Your model seems to have been trained with labels, but they don't match the dataset: ", f"model labels: {list(sorted(label_name_to_id.keys()))}, dataset labels: {list(sorted(label_list))}." "\nIgnoring the model labels as a result.", ) elif data_args.task_name is None: label_to_id = {v: i for i, v in enumerate(label_list)} ``` When I modified the last two lines as below, I could go to the next step. May I ask you that is it the correct way to avoid the error? ``` elif data_args.task_name is None: # No definition for 'data_args.task_name is None' and 'is_regression is True'? if not is_regression: label_to_id = {v: i for i, v in enumerate(label_list)} ``` ## Expected behavior `run_glue.py` can be used for our own dataset of regression task.
01-03-2021 15:28:19
01-03-2021 15:28:19
This is the correct fix indeed (though we can group this with the previous test with `elif data_args.task_name is None and not is_regression`)! Thanks for flagging this, do you want to open a PR with the fix you found?<|||||>@sgugger Thank you for checking this issue and giving the comment. I'd love to open a PR. I'm sorry but could you please wait for a while? I think I can open it by the end of the week.<|||||>Thanks for the PR!
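A self-contained sketch of the corrected branching (the helper function below is hypothetical, written only to illustrate the fix discussed above, not the literal `run_glue.py` code):

```python
def build_label_to_id(label_list, is_regression, task_name=None):
    """Regression tasks have float targets and no label vocabulary, so no mapping is built."""
    if task_name is None and not is_regression:
        return {v: i for i, v in enumerate(label_list)}
    return None

print(build_label_to_id(["neg", "pos"], is_regression=False))  # {'neg': 0, 'pos': 1}
print(build_label_to_id(None, is_regression=True))             # None
```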
transformers
9,392
closed
Model inputs and outputs are ``None`` when converting fine-tuned gpt2 to Tensorflow?
Hi, I've fine-tuned a distilgpt2 model using my own text using ``run_language_modeling.py`` and it's working fine after training; the ``run_generation.py`` script produces the expected results. Now I want to convert this to a TensorFlow Lite model and did so by using the following ```python from transformers import * CHECKPOINT_PATH = '/content/drive/My Drive/gpt2_finetuned_models/checkpoint-2500' model = GPT2LMHeadModel.from_pretrained("distilgpt2") model.save_pretrained(CHECKPOINT_PATH) model = TFGPT2LMHeadModel.from_pretrained(CHECKPOINT_PATH, from_pt=True) ``` But I don't think I'm doing this right, as after conversion, when I write ```python print(model.inputs) print(model.outputs) ``` I get ``` None None ``` But I still went ahead with the TFLite conversion using: ```python import tensorflow as tf input_spec = tf.TensorSpec([1, 64], tf.int32) model._set_inputs(input_spec, training=False) converter = tf.lite.TFLiteConverter.from_keras_model(model) # FP16 quantization: converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_types = [tf.float16] tflite_model = converter.convert() open("/content/gpt2-fp16.tflite", "wb").write(tflite_model) ``` But it does not work, and when using the generated ``tflite`` model I get the error: > tensorflow/lite/kernels/kernel_util.cc:249 d1 == d2 || d1 == 1 || d2 == 1 was not true. Which I'm sure has something to do with my model not converting properly and getting ``None`` for input/output. Does anyone have any idea how to fix this? Thanks
01-03-2021 14:44:48
01-03-2021 14:44:48
Hey @farazk86, thanks for your issue! I've sadly never worked with tflite so not really sure how to best help you here. Maybe @jplu (or @LysandreJik) ? <|||||>> Hey @farazk86, > > thanks for your issue! I've sadly never worked with tflite so not really sure how to best help you here. Maybe @jplu (or @LysandreJik) ? Thanks @patrickvonplaten but the ``print(model.inputs)`` line is before the tflite conversion. For starters, I wanted to convert the fine-tuned distilgpt2 from pytorch to tensorflow and then to tensorflow lite. <|||||>Hello @farazk86 It is normal that the inputs/outputs are not set when using from_pretrained because they are not explicitely given when the model is built. You have to create yourself a model by setting them.<|||||>> Hello @farazk86 > > It is normal that the inputs/outputs are not set when using from_pretrained because they are not explicitely given when the model is built. You have to create yourself a model by setting them. Thanks for your reply. Is my method for converting the pretrained model to tflite wrong? I followed the code and explanation mentioned here: https://towardsdatascience.com/on-device-machine-learning-text-generation-on-android-6ad940c00911<|||||>I would also like to add that when I convert the model to tflite, using the above code I get the following warnings ``` WARNING:absl:Found untraced functions such as wte_layer_call_fn, wte_layer_call_and_return_conditional_losses, wpe_layer_call_fn, wpe_layer_call_and_return_conditional_losses, dropout_layer_call_fn while saving (showing 5 of 380). These functions will not be directly callable after loading. WARNING:absl:Found untraced functions such as wte_layer_call_fn, wte_layer_call_and_return_conditional_losses, wpe_layer_call_fn, wpe_layer_call_and_return_conditional_losses, dropout_layer_call_fn while saving (showing 5 of 380). These functions will not be directly callable after loading. ``` does this help identify the issue?<|||||>These warnings are not directly related to your issue, you can safely ignore them for now and the tutorial you linked is wrong for the TFLite creation part. Unfortunately, the current state of the TF models in transformers are not fully compliant with TFLite so, I suggest to do not push to far the conversion. It is in our plans to have a better compliancy, but we don't know when yet. You use the following piece of code to create your TFLite model: ```python from transformers import TFGPT2LMHeadModel import tensorflow as tf base_model = TFGPT2LMHeadModel.from_pretrained("gpt2") input_ids = tf.keras.layers.Input((128, ), batch_size=1, dtype=tf.int32, name='input_ids') attention_mask = tf.keras.layers.Input((128, ), batch_size=1, dtype=tf.int32, name='attention_mask') inputs = {"input_ids": input_ids, "attention_mask": attention_mask} output = base_model(inputs) model = tf.keras.models.Model(inputs=inputs, outputs=output) converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.experimental_new_converter = True converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.inference_input_type = tf.float32 converter.target_spec.supported_ops = [tf.lite.OpsSet.SELECT_TF_OPS] tflite_quant_model = converter.convert() with open("model.tflite", "wb") as f: f.write(tflite_quant_model) ``` With this piece of code you should be able to convert your model into a TFLite one. 
Note also that the current TF models are not compliant with float16 so you have to keep with float32.<|||||>Thank you, but with the code above I am getting the following error: ``` Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. Node number 0 (FlexIdentity) failed to prepare. ``` I even tried with ``tf-nightly`` as a google search of this error suggested that the nightly has flex delegate support. But still got the above error. The android tflite interpreter I am using works fine with all the models presented here: https://github.com/huggingface/tflite-android-transformers/tree/master/gpt2#change-the-model . They are even quantized. Would it be possible for me to train the models using a previous version of transformers? Which version was used at the time of writing the article and for providing the above tflite models? Would it help to change to that version? P.S. I would like to add that I am currently using the April 21st 2020 git version: ``!git checkout b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8`` as I needed the ``line-by-line`` parameter during training. Thank you <|||||>To make this work you have to use the current release of TF and Transformers, not below.<|||||>> To make this work you have to use the current release of TF and Transformers, not below. hmm, that would make sense why your code did not work. The reason I was using the April 21st version is because I needed the ``line-by-line`` parameter during fine tuning. Does the current version and ``run_clm.py`` have ``line-by-line`` support?<|||||>I don't think the new `run_clm.py` still supports `line-by-line`. Better to open a new issue to discuss of this.<|||||>> To make this work you have to use the current release of TF and Transformers, not below. So I am now using the current version of transformers and tf and fine-tuned a model using ``run_clm.py`` and used your above code to convert that model to tflite but still got the same error: ``` Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. Node number 0 (FlexIdentity) failed to prepare. ``` When converting this model, I got a lot of messages in console, ``` Tensorflow version: 2.4.0 WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f2df19a0660>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: <cyfunction Socket.send at 0x7f2e091e6e58> is not a module, class, method, function, traceback, frame, or code object To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f2df19a0660>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: <cyfunction Socket.send at 0x7f2e091e6e58> is not a module, class, method, function, traceback, frame, or code object To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). WARNING:tensorflow:AutoGraph could not transform <function wrap at 0x7f2e06b7a8c8> and will run it as-is. Cause: while/else statement not yet supported To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. WARNING: AutoGraph could not transform <function wrap at 0x7f2e06b7a8c8> and will run it as-is. Cause: while/else statement not yet supported To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. 
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. WARNING:absl:Found untraced functions such as wte_layer_call_and_return_conditional_losses, wte_layer_call_fn, wpe_layer_call_and_return_conditional_losses, wpe_layer_call_fn, dropout_layer_call_and_return_conditional_losses while saving (showing 5 of 385). These functions will not be directly callable after loading. WARNING:absl:Found untraced functions such as wte_layer_call_and_return_conditional_losses, wte_layer_call_fn, wpe_layer_call_and_return_conditional_losses, wpe_layer_call_fn, dropout_layer_call_and_return_conditional_losses while saving (showing 5 of 385). These functions will not be directly callable after loading. INFO:tensorflow:Assets written to: /tmp/tmpus6vmwet/assets INFO:tensorflow:Assets written to: /tmp/tmpus6vmwet/assets The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. INFO:absl:Using new converter: If you encounter a problem please file a bug. You can opt-out by setting experimental_new_converter=False ``` comparatively, a model fine tuned using the old v2.4 version of transformers did not generate any such messages:<|||||>Those are just warnings and expected messages, this is ok. Can you try with usual `gpt2` models? If it works, the issue is certainly coming from your model. 
Otherwise, we will check deeper what is going wrong. Nevertheless, TFLite compliancy is not our priority for now, so if we have to fix something it will certainly be postponed to a bit later in the coming months.<|||||>Hi, I'm getting the same error as above when using the default pretrained gpt2.<|||||>Humm with Transformers from source and TF 2.4 I get no errors. What is your env?<|||||>> Humm with Transformers from source and TF 2.4 I get no errors. What is your env? I'm trying to run on android using a flutter tflite interpreter. and yes, I was also considering that maybe the fault is not in the converted model but the interpreter. To confirm this I wanted to use the python interpreter, if I can get an output here then that means that the converted models are fine and its an issue with the flutter interpreter. But when using the python tflite interpreter, everytime I invoke the model tflite generated model using your above provided code, my colab runtime crashes. I'm using current version of transformers and TF2.4 The same interpreter works for the tflite models provided here and produces output in the expected shape : https://github.com/huggingface/tflite-android-transformers/tree/master/gpt2#change-the-model Below is the code I am using for tflite interpreter ```python from transformers import * import tensorflow as tf import numpy as np tokenizer = GPT2Tokenizer.from_pretrained('gpt2') # start with their provided model, the -O will change downloaded file name to model.tflite # !wget -O model.tflite https://s3.amazonaws.com/models.huggingface.co/bert/distilgpt2-64.tflite # Encode random strings sentance = ("""You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. 
You have a small xore blaster hidden in you jacket and a holoband on your wrist.""") review_token = tokenizer.encode(sentance) print(len(review_token)) review_token = np.array(review_token, dtype=np.int32) review_token = review_token[:128] review_token = np.expand_dims(review_token, axis=0) # unsqueeze to add the batch dimension print(sentance) print(review_token) tflite_interpreter = tf.lite.Interpreter(model_path='/content/model.tflite') tflite_interpreter.allocate_tensors() input_details = tflite_interpreter.get_input_details() output_details = tflite_interpreter.get_output_details() print("== Input details ==") print("name:", input_details[0]['name']) print("shape:", input_details[0]['shape']) print("type:", input_details[0]['dtype']) print("\n== Output details ==") print("name:", output_details[0]['name']) print("shape:", output_details[0]['shape']) print("type:", output_details[0]['dtype']) tflite_interpreter.set_tensor(input_details[0]['index'], review_token) tflite_interpreter.invoke() tflite_model_predictions = tflite_interpreter.get_tensor(output_details[0]['index']) print("Prediction results shape:", tflite_model_predictions.shape) ``` <|||||>Ok, thanks for sharing, I will check this once I can dedicate some time.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,391
closed
Similar usage of `past_key_values` in CausalLM and Seq2SeqLM
# 🚀 Feature request It seems GPT-2 and BartDecoder has a different style of `past_key_values`. In GPT-2, `past_key_values` is explained as below: (the explanation is from https://huggingface.co/transformers/model_doc/gpt2.html#gpt2model) ``` (parameters) past_key_values (List[torch.FloatTensor] of length config.n_layers) – Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. (returns) past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. ``` In BartDecoder and its inner BartDecoderLayer, `past_key_values` is explained and treated as below: (the explanation is from https://huggingface.co/transformers/model_doc/bart.html#bartmodel) ``` past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers with each tuple having 2 tuples each of which has 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. ``` `v4.1.1 modeling_bart` https://github.com/huggingface/transformers/blob/v4.1.1/src/transformers/models/bart/modeling_bart.py ``` python # in BartDecoder for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) if output_hidden_states: all_hidden_states += (hidden_states,) dropout_probability = random.uniform(0, 1) if self.training and (dropout_probability < self.layerdrop): continue past_key_value = past_key_values[idx] if past_key_values is not None else None hidden_states, layer_self_attn, present_key_value, layer_cross_attn = decoder_layer( hidden_states, attention_mask=combined_attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_value, output_attentions=output_attentions, ) ``` ``` python # in BartDecoderLayer # Self Attention # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None # add present self-attn cache to positions 1,2 of present_key_value tuple hidden_states, self_attn_weights, present_key_value = self.self_attn( hidden_states=hidden_states, past_key_value=self_attn_past_key_value, attention_mask=attention_mask, output_attentions=output_attentions, ) hidden_states = F.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states if not self.normalize_before: hidden_states = self.self_attn_layer_norm(hidden_states) # Cross-Attention Block cross_attn_present_key_value = None cross_attn_weights = None if encoder_hidden_states is not None: residual = hidden_states if self.normalize_before: hidden_states = self.encoder_attn_layer_norm(hidden_states) # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None 
hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, past_key_value=cross_attn_past_key_value, output_attentions=output_attentions, ) hidden_states = F.dropout(hidden_states, p=self.dropout, training=self.training) hidden_states = residual + hidden_states if not self.normalize_before: hidden_states = self.encoder_attn_layer_norm(hidden_states) # add cross-attn to positions 3,4 of present_key_value tuple present_key_value = present_key_value + cross_attn_present_key_value ``` ## Motivation It seems that one of the aims of the refactoring of Bart by @patrickvonplaten https://github.com/huggingface/transformers/pull/8900 is "Allow to use BartEncoder and BartDecoder separately from the BartModel". I appreciate this very much and would love to treat `BartDecoder` the same way as `gpt2`, but I feel that the difference in the handling of `past_key_values` is a barrier. In `gpt2`, each `past_key_value` in `past_key_values` is a `torch.Tensor` of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head). However, in `Bart`, each `past_key_value` in `past_key_values` is a `Tuple[torch.Tensor]`, and the `self_attn` part is not a single tensor but 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head). If we want to handle `self_attn_past_key_value` in `Bart` like that in `gpt2`, is it the right way to concatenate the 2 tensors in `past_key_value`? Or is there another correct way to treat it? Thank you in advance.
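To make the two cache formats concrete, here is a small, self-contained illustration with random tensors (the shapes follow the docstrings quoted above; this is not code from either model):

```python
import torch

batch_size, num_heads, seq_len, head_dim = 2, 12, 5, 64

# Bart-style cache entry for self-attention: a tuple of two tensors.
key = torch.randn(batch_size, num_heads, seq_len, head_dim)
value = torch.randn(batch_size, num_heads, seq_len, head_dim)
bart_style = (key, value)

# GPT-2-style cache entry: a single stacked tensor of shape
# (2, batch_size, num_heads, seq_len, head_dim).
gpt2_style = torch.stack(bart_style)

# Converting back is just unbinding along the first dimension.
key_back, value_back = gpt2_style.unbind(dim=0)
assert torch.equal(key_back, key) and torch.equal(value_back, value)
```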
01-03-2021 05:51:03
01-03-2021 05:51:03
Hey @forest1988, In order to fully use `BartDecoder` separately from `BartModel` as a `BartForCausalLM` model, we're still waiting on this PR: #9128. And again, you're very correct in your assessment that the behavior between `BartDecoder` and `GPT2` is not fully aligned here. IMO, we should change GPT2's cache format from a single tensor of `[2, batch_size, ...,]` to `tuple([batch_size, ...])` => If you're keen feel free to open a PR for it! Actually, we could make this a "Good second issue" here. So to answer your question, no you should not concatenate the 2 tensors in `self_attn_past_key_value` into a single tensor, but we should rather change the code in GPT2 slightly to also have a tuple of 2 tensors instead of one tensor. In GPT2, we create a new tensor at each iteration when using `use_cache` here: https://github.com/huggingface/transformers/blob/d944966b19a4d6860bddc7cdc1ba928ca8a0da91/src/transformers/models/gpt2/modeling_gpt2.py#L235 => this is a bit unnecessary IMO. When the inputs are getting longer, allocating new memory for `key` and `value` can actually lead to a small slow-down IMO. If instead we would just use a tuple => `present = (key, value)` we would not allocate new memory. So 1) As soon as #9128 is merged you can use `BartForCausalLM` the same way as `GPT2` without having to change anything. 2) Let's see if someone is interested in tackling this "inconsistency" issue in GPT2. This "First good issue" should replace this line: https://github.com/huggingface/transformers/blob/d944966b19a4d6860bddc7cdc1ba928ca8a0da91/src/transformers/models/gpt2/modeling_gpt2.py#L235 with ```python present = (key.transpose(-2, -1), value) ``` (I think it should actually be that simple) <|||||>Hi @patrickvonplaten, Thank you for answering this issue! I'm sorry I haven't checked the PR https://github.com/huggingface/transformers/pull/9128 before creating this issue. I'll check it! And, thanks for telling me your opinion about the need to change GPT2's cache format from a single tensor to a tuple of 2 tensors. I'd love to open a PR, but I'm afraid I don't have enough time now. I will work on it as soon as I find the time, but of course, if someone else who is interested in the same issue would like to work on it, I would appreciate that! I now understand that I should not concatenate the 2 tensors in `self_attn_past_key_value` into a single tensor, but rather the code in GPT2 should be changed. I'm looking forward to seeing the PR https://github.com/huggingface/transformers/pull/9128 merged. Also, I would like to think about how to avoid concatenating the 2 tensors in `self_attn_past_key_value` for what I am currently working on. Thank you so much! <|||||>Hi, I've started fixing the issue but some tests failed.
``` ========================================================================================= short test summary info ========================================================================================== FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_sample_generate - AttributeError: 'tuple' object has no attribute 'index_select' FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate - AttributeError: 'tuple' object has no attribute 'index_select' FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing - TypeError: CheckpointFunctionBackward.forward: expected Tensor or tuple of Tensor (got tuple) for return value 1 FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_group_beam_search_generate - AttributeError: 'tuple' object has no attribute 'index_select' FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_parallel_beam_search - AttributeError: 'tuple' object has no attribute 'index_select' FAILED tests/test_trainer_distributed.py::TestTrainerDistributed::test_trainer - OSError: [Errno 12] Cannot allocate memory ================================================================== 6 failed, 4626 passed, 834 skipped, 642 warnings in 1766.16s (0:29:26) ================================================================== ``` I think this may be related to the `index_select` used in [generation_utils.py]( https://github.com/huggingface/transformers/blob/143289dcf759a663c03317e30167e89ee6d86588/src/transformers/generation_utils.py). I will continue to look into this when I have more time.<|||||>Hi @patrickvonplaten, Now that I have time, I'm thinking about how to fix this issue so that the testing part works well. I think where to modify in `generation_utils.py` is here: https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/generation_utils.py#L506-L517 `past` is taken from `past_key_values` or other output variations in: https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/generation_utils.py#L477-L489 Then, it is treated like this: https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/generation_utils.py#L1654-L1655 Can I modify #L506-L517 so that `past` is replaced from `Tuple[torch.Tensor]` to `Tuple[Tuple[torch.Tensor]]`, or should I consider other output variations, `output.mem` and `outputs.past_buckets_states`? Thank you. <|||||>Hey @forest1988, Thanks for you in-detail code snippet. I think the easiest solution would be to open a PR showcasing the required changes :-) I think you're right `Tuple[torch.Tensor]` should indeed be `Tuple[Tuple[torch.Tensor]]`. Would you be interested in opening a PR so that we can add the fixes there? <|||||>Hi @patrickvonplaten, Thank you for your comment! After doing some additions to change `Tuple[torch.Tensor]` to `Tuple[Tuple[torch.Tensor]]`, I would like to open a PR and ask you all to add fixes. I'll open the PR in a few days!<|||||>I'm sorry to keep you waiting. I have opened a PR #9596 for this issue. I marked the PR as WIP because it has not yet been resolved. I will continue to look into this issue myself in the future and any advice would be greatly appreciated.
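For reference, a minimal sketch of what reordering a `Tuple[Tuple[torch.Tensor]]` cache during beam search could look like — `reorder_cache` is a hypothetical helper written for this illustration, not the actual `generation_utils` code:

```python
from typing import Tuple

import torch


def reorder_cache(past: Tuple[Tuple[torch.Tensor, ...], ...], beam_idx: torch.Tensor):
    # With a Tuple[Tuple[Tensor]] cache, index_select has to be applied to each
    # inner tensor instead of to one stacked tensor per layer.
    return tuple(
        tuple(state.index_select(0, beam_idx) for state in layer_past)
        for layer_past in past
    )


# Toy usage: 2 layers, each caching (key, value) of shape (batch, heads, seq, dim).
past = tuple((torch.randn(4, 2, 3, 8), torch.randn(4, 2, 3, 8)) for _ in range(2))
beam_idx = torch.tensor([1, 0, 3, 2])
reordered = reorder_cache(past, beam_idx)
print(reordered[0][0].shape)  # torch.Size([4, 2, 3, 8])
```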
transformers
9,390
closed
[trainer] self.model_wrapped + _model_unwrap
This PR: * [x] adds `self.model_wrapped` - to have access to the outermost module regardless of how many times it was wrapped (e.g. under DeepSpeed there is a double wrapping ` DDP(Deepspeed(Transformers Model))`) * [x] makes sure that `self.model` is always set to the normal model * [x] fixes a bug where under `model_parallel` `self.model` was not set (twice)! * [x] simplifies the `model_init` checking logic * [x] replaces `_actual_model`, which couldn't handle multiple wrapping levels, with `_model_unwrap`, which can, and integrates it Please ignore the small mentions of DeepSpeed; this PR is split off from https://github.com/huggingface/transformers/pull/9211 to get all the non-DeepSpeed related changes into a separate review, to make things a bit easier on the reviewers as suggested by @sgugger. This PR was made by copying from the other PR and manually removing all the added deepspeed code. If possible, let's get it in asap so that I can rebase and we can move on with the DeepSpeed PR. Thank you very much! @sgugger
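As an illustration of the unwrapping idea, here is a minimal sketch — `model_unwrap` and the `Wrapper` class are made up for this example and assume, as DDP does, that wrappers expose the inner model via a `module` attribute; the actual Trainer implementation may differ:

```python
import torch.nn as nn


def model_unwrap(model: nn.Module) -> nn.Module:
    # Recursively peel off wrappers (e.g. DDP(Deepspeed(model))) until the
    # innermost module is reached.
    if hasattr(model, "module"):
        return model_unwrap(model.module)
    return model


class Wrapper(nn.Module):
    def __init__(self, module):
        super().__init__()
        self.module = module


inner = nn.Linear(2, 2)
wrapped_twice = Wrapper(Wrapper(inner))
assert model_unwrap(wrapped_twice) is inner
```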
01-03-2021 05:46:15
01-03-2021 05:46:15
This should be good now, @sgugger - thanks a lot for all the suggestions!<|||||>@LysandreJik GitHub Reviewers is down, so tagging you instead.
transformers
9,389
closed
[trainer] self.model_wrapped + _model_unwrap
This PR adds: * [x] self.wrapped https://github.com/huggingface/transformers/pull/9211 in progress
01-03-2021 05:37:07
01-03-2021 05:37:07
transformers
9,388
closed
Conditional Generation using input_embeds instead of input_ids
Hi @patrickvonplaten! When using input_embeds instead of input_ids as inputs to the BartForConditionalGeneration model, I am not able to generate the result. Could you please take a look? The same code works with GPT2 in place of Bart. Thanks! Here is the script ``` import torch from transformers import BartForConditionalGeneration, BartTokenizer model_path = "facebook/bart-large" model = BartForConditionalGeneration.from_pretrained(model_path, output_hidden_states=True) tokenizer = BartTokenizer.from_pretrained(model_path) text = "I disapprove of what you <mask> , but" input_ids = tokenizer.encode_plus(text, return_tensors='pt')['input_ids'] with torch.no_grad(): x = model.get_input_embeddings()(input_ids).squeeze() model(inputs_embeds = x) ``` Here is the Traceback I got ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-21-e17be5532b10> in <module>() 18 x = model.get_input_embeddings()(input_ids).squeeze() 19 ---> 20 model(inputs_embeds = x) 21 4 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1244 output_attentions=output_attentions, 1245 output_hidden_states=output_hidden_states, -> 1246 return_dict=return_dict, 1247 ) 1248 lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 1079 # -> is this used for backward compatibility 1080 if decoder_input_ids is None and decoder_inputs_embeds is None: -> 1081 decoder_input_ids = shift_tokens_right(input_ids, self.config.pad_token_id) 1082 1083 output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions /usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_bart.py in shift_tokens_right(input_ids, pad_token_id) 67 Shift input ids one token to the right, and wrap the last non pad token (usually <eos>). 68 """ ---> 69 prev_output_tokens = input_ids.clone() 70 71 assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined." AttributeError: 'NoneType' object has no attribute 'clone' ``` - `transformers` version: 4.1.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.4.0 (True)
01-03-2021 03:13:24
01-03-2021 03:13:24
Hey @frankgandiao, Note that Encoder-Decoder models usually require both `input_ids` and `decoder_input_ids`. Bart is special in the sense that it can automatically create the `decoder_input_ids` from the `input_ids` if you **don't** provide the `decoder_input_ids`. However, the model is not able to automatically create the `decoder_inputs_embeds` from the `inputs_embeds` if you provide only the `inputs_embeds` => to solve your problem you should provide the `decoder_inputs_embeds` as well. What you could do is the following: ```python from transformers.models.mbart.modeling_mbart import shift_tokens_right input_ids = tokenizer(text, return_tensors='pt')['input_ids'] decoder_input_ids = shift_tokens_right(input_ids, tokenizer.pad_token_id) inputs_embeds = model.get_input_embeddings()(input_ids).squeeze() decoder_inputs_embeds = model.get_input_embeddings()(decoder_input_ids).squeeze() model(inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds) ```<|||||>Hey @patrickvonplaten, Thanks for your reply! That makes sense and the problem is resolved!
transformers
9,387
closed
Where is the impact when output_attentions=True?
Is there any impact regarding performance (training/fine-tuning time, GPU memory, batch size, etc.) when `output_attentions=True`? ```python self.bert_encoder = BertModel.from_pretrained( hparams.architecture, # "bert-base-uncased" output_attentions=True) ```
01-02-2021 23:16:57
01-02-2021 23:16:57
If `output_attentions=True`, memory consumption should increase significantly (for large `sequence_length`) since we now store all attentions of size (`batch_size`, `num_heads`, `sequence_length`, `sequence_length`). This is less significant in training since the stored activations for training consume most of the RAM anyway. Speed should not really be affected by this.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
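As a rough, back-of-the-envelope illustration of that extra memory (hypothetical BERT-base-like shapes, fp32):

```python
# Attention maps returned per layer have shape
# (batch_size, num_heads, sequence_length, sequence_length).
batch_size, num_layers, num_heads, seq_len = 8, 12, 12, 512
bytes_per_float = 4  # fp32

extra_bytes = num_layers * batch_size * num_heads * seq_len * seq_len * bytes_per_float
print(f"{extra_bytes / 2**20:.0f} MiB of attention maps")  # 1152 MiB
```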
transformers
9,386
closed
replace apex.normalization.FusedLayerNorm with torch.nn.LayerNorm
This PR proposes to drop `apex.normalization.FusedLayerNorm` in favor of faster `torch.nn.LayerNorm`. 1. For performance and background details please see the discussions in https://github.com/huggingface/transformers/issues/9377 2. It's also needed for https://github.com/huggingface/transformers/pull/9384 since `apex.normalization.FusedLayerNorm` corrupts data under model parallel https://github.com/NVIDIA/apex/issues/1022 Fixes: #9377 @LysandreJik, @sgugger, @patrickvonplaten
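For context, a simplified sketch of the kind of conditional apex import being dropped and its plain `torch.nn.LayerNorm` replacement (shapes here are illustrative, not taken from any particular model):

```python
import torch
from torch import nn

# Before (sketch): prefer apex's fused kernel when it is installed.
try:
    from apex.normalization import FusedLayerNorm as LayerNorm
except ImportError:
    from torch.nn import LayerNorm

# After: always use the native implementation, which behaves correctly
# under model parallelism.
layer_norm = nn.LayerNorm(768, eps=1e-5)
hidden_states = torch.randn(2, 16, 768)
print(layer_norm(hidden_states).shape)  # torch.Size([2, 16, 768])
```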
01-02-2021 20:48:13
01-02-2021 20:48:13
Merging since it's blocking #9343 .
transformers
9,385
closed
[logging] autoflush
This PR proposes to: * auto-flush `transformers` logging When logging is used to trace signals from different parts of the code and may be mixed with print debugging, this helps keep all the logging events synchronized. I don't think this change will introduce any performance impact. If it helps someone, here is the code I used to sync `transformers` logging with various other debug prints. I was porting bart to MP and I needed to trace that the device switching happens correctly, so I added a bunch of `logger.info` calls inside `modeling_bart.py` and also had some other helpers `print` debug messages which weren't logger based: ``` # auto flush std streams from sys import stdout, stderr def stdout_write_flush(args, w=stdout.write): w(args); stdout.flush() def stderr_write_flush(args, w=stderr.write): w(args); stderr.flush() stdout.write = stdout_write_flush stderr.write = stderr_write_flush from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig import logging import transformers.utils.logging import transformers.models.bart.modeling_bart # I wanted a shorter simpler format handlers = transformers.utils.logging._get_library_root_logger().handlers for handler in handlers: formatter = logging.Formatter("[%(funcName)s] %(message)s") handler.setFormatter(formatter) transformers.models.bart.modeling_bart.logger.setLevel(transformers.logging.INFO) # then all the model creation and generate() goes next ``` @LysandreJik, @sgugger, @patrickvonplaten
01-02-2021 20:19:23
01-02-2021 20:19:23
transformers
9,384
closed
[model parallelism] Bart goes parallel
This PR implements model parallelism (MP) in Bart. This is the latest incarnation of generalization of the MP in `transformers`, based on @alexorona's original work. I have done some of it already in https://github.com/huggingface/transformers/pull/9323 and this PR builds upon the other one. It's slightly complex what to merge when, but this PR is independent and can be merged on its own. For reviewers I propose to read things in this order: 1. https://github.com/huggingface/transformers/pull/9316 2. https://github.com/huggingface/transformers/pull/9323 3. this PR 4. Additional important design discussions https://github.com/huggingface/transformers/issues/8771 If all is in agreement, I propose: 1. &#9744; merging this PR first, 2. &#9744; then I'll backport the new code from this PR to https://github.com/huggingface/transformers/pull/9323 and we merge that. 3. &#9744; then we handle gpt2, which I haven't touched yet. Perhaps @alexorona could help there if his time permits or one of us. 4. &#9744; complete Bart's other heads (can be item 3) and `deparallelize` - the latter is not really needed in practice so will handle those when dust around design settles. 5. &#9744; add Bart to trainer's supported for `--model_parallel` flags 6. &#9744; write tests for `model_parallel_utils.py` 7. &#9744; meanwhile we can polish the concept of device maps which will require a review of all architectures `transformers` has implemented. Actually first we need to merge smaller bits: 1. https://github.com/huggingface/transformers/pull/9347 2. https://github.com/huggingface/transformers/pull/9386 --------- So this PR: * [x] Implements MP in Bart based on discussions in all of the threads/PRs listed above. Only `BartForConditionalGeneration` at the moment while we are sorting out the API. But the bulk of the work is done, since `BartModel` has all in place. * [x] switches to the concept of `main_device` rather than `(first|last)_device` so the first device of encoder becomes the main_device and almost everything happens there (`embeddings`, `lm_head`, etc), and other devices are used exclusively for encoder and decoder purposes. * [x] switches to a more explicit `device_map` that can support non-symmetrical models (different number of layers in encoder and decoder). It can also handle different types of maps. See the demo at the end this post for details. * [x] further improves the magical `to()` functions that can operate on any type of variable except opaque objects. Can be used to put the inputs on the correct devices either automatically via a `forward` decorator or explicitly inside `forward`. We could use either or both. * [x] adds a bunch of debug functions that make it easy to trace device IDs of variables, params and whole layers. * [x] further improves the device map validation function * [x] improves tests * [x] needs to remove apex.normalization.FusedLayerNorm as it's buggy under MP (corrupts data) per https://github.com/huggingface/transformers/issues/9377 a dedicated to removal PR is https://github.com/huggingface/transformers/pull/9386 Here is a quick demo (you will need 2 gpus to run it): ``` from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig #mname = "sshleifer/tinier_bart" mname = "sshleifer/distilbart-xsum-6-6" model = BartForConditionalGeneration.from_pretrained(mname) tokenizer = BartTokenizer.from_pretrained(mname) sentences = ["I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. 
I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder."] inputs = tokenizer(sentences, max_length=1024, return_tensors='pt', truncation="longest_first") device_maps_flat = { "sshleifer/tinier_bart": { "encoder": {0: [0, 1] }, "decoder": {1: [0] }, }, "sshleifer/distilbart-xsum-6-6": { "encoder": {0: [0, 1, 2, 3, 4, 5] }, "decoder": {1: [0, 1, 2, 3, 4, 5] }, }, } device_maps_split = { "sshleifer/tinier_bart": { "encoder": {0: [0], 1: [1], }, "decoder": {1: [0] }, }, "sshleifer/distilbart-xsum-6-6": { "encoder": {0: [0, 1, 2], 1: [3, 4, 5], }, "decoder": {0: [0, 1, 2], 1: [3, 4, 5], }, }, } # 3 different ways (2 different device maps and 1 autogenerated device map) model.parallelize() # autogenerated #model.parallelize(device_maps_flat[mname]) #model.parallelize(device_maps_split[mname]) inputs = inputs.to("cuda:0") # Generate Summary summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=25, early_stopping=True) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) # prints: [" I'm sitting in a room where I'm waiting for something to happen."] ``` You can see from the demo, that when calling `model.parallelize` you can skip the `device_map` arg altogether and the model will generate the right one. Or you can provide one that: 1. gives some gpus exclusively to encoder and others to decoder 2. splits the model horizontally so that the encoder uses all gpus so so the decoder the model transparently handles all the remappings Note, the user still needs to put the data on the `main_device`, so perhaps that will eventually become not hardcoded via: ``` # inputs = inputs.to("cuda:0") inputs = inputs.to(model.main_device) ``` As we have been discussing elsewhere the device map format is unstable yet. So I propose we document it as unstable yet, but the users can rely on the autogenerated device map which requires no input from the user (i.e. calling `model.parallelize() ) - if it changes it'll happen transparently for the user. Also note that in situations of Trainer-based scripts, like `finetune_trainer.py`, the user has no way to supply such device map at the moment so in effect the model generates the map on the fly as in the above para. Fixes: #8344 @LysandreJik, @patrickvonplaten, @sgugger, @alexorona
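For illustration, a hypothetical sketch of how such an autogenerated device map could be computed by splitting layers evenly across GPUs — the helper below is made up for this example and is not the PR's implementation, but it reproduces the `device_maps_split` entry from the demo:

```python
def make_device_map(num_encoder_layers: int, num_decoder_layers: int, num_gpus: int) -> dict:
    # Split layer indices evenly across the visible GPUs, one contiguous block per GPU.
    def split(num_layers):
        per_gpu = -(-num_layers // num_gpus)  # ceiling division
        return {
            gpu: list(range(gpu * per_gpu, min((gpu + 1) * per_gpu, num_layers)))
            for gpu in range(num_gpus)
            if gpu * per_gpu < num_layers
        }

    return {"encoder": split(num_encoder_layers), "decoder": split(num_decoder_layers)}


print(make_device_map(6, 6, 2))
# {'encoder': {0: [0, 1, 2], 1: [3, 4, 5]}, 'decoder': {0: [0, 1, 2], 1: [3, 4, 5]}}
```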
01-02-2021 20:09:18
01-02-2021 20:09:18
That looks great! Model parallelism would be very nice for Bart. We should coordinate here a bit with all the open PRs. I'm also more or less done with the big "split-bart-into-separate-models" PR: https://github.com/huggingface/transformers/pull/9343. Think the merge conflicts can become a bit painful here :D. I'd propose the following: -Merge: #9347, #9386 (they should be pretty trivial to merge) -Rebase and Merge the big Bart refactor (https://github.com/huggingface/transformers/pull/9343) -Discuss/Merge the "new" model parallel design: #9316 and #9323 -Rebase and Discuss/Merge this PR<|||||>Is this PR ready for review? There's a lot of comments that were probably here for debugging purposes. Let me know if you want a review or if we should come back to it after #9347 and #9386 have been merged.<|||||>It's very ready, functionality/concept-wise. It's not ready 100% commentary, debug traces, etc. but that's very unimportant until the rest is sorted out, since there are multiple overlapping PRs happening. Because of the holidays there is a lot of new code which is all inter-dependent and unreviewed and then there is a huge change to merge of #9343. So I think it's the best to review it as it is - sort things out and then once everybody is happy with the logic, and #9343 I will most likely have to do a new PR anyway. But I need your feedback that what I did is correct. Think of it as an ongoing code design and generalization PR. Thanks. <|||||>@patrickvonplaten, your plan works for me. Please ping me when #9343 is in and you're happy with the outcome so that it'll be safe to add MP w/o creating conflicts. Thank you. But as I commented above this blocking event doesn't have to interefere with this PR's review - we are just not going to merge it, but it should proceed as normal and I will take all the agreed upon changes to the new PR once the dust around Bart split settled down. <|||||>@stas00 @LysandreJik @patrickvonplaten This PR introduces a `device_map` that is not backwards compatible with 4.1.0. We have to do that at some point (as @stas00 discovered), but let's not have three different versions. We really need to make sure that we have consensus on the final form of the `device_map` that will work for all models going forward or we will have to change it again when model parallelization is generalized and some of its functionality is placed in `PreTrainedModel`. Have you tested this on `gpt2`, @stas00 and is the code generalizable to models that don't have decoder architectures and can store their attention blocks in attributes like `self.h`? Has everyone read [this comment](https://github.com/huggingface/transformers/pull/9323#issuecomment-753518885)? Are we all on board for the plan to generalize model parallelism? Don't have to implement it now, but we need to make sure we've thought through any changes that affect user experience and backward compatibility. Sorry, I'm in the middle of moving so not keeping close track of all the traffic and could easily have missed something. Also, this content is spread across several PRs, so sometimes I'm getting confused.<|||||>@alexorona, I'm basically leaving all the old code in place, so that gpt2 works as is and t5 as is, so this PR only impacts Bart. And in any case it doesn't look like this PR will be merged since Bart went through a split https://github.com/huggingface/transformers/pull/9343, which isn't finalized yet and I will need to re-do it anyway. But it's no problem, since I know what to do. 
And see the end of this comment - the whole MP implementation might need to be redesigned altogether. Since there are so many moving parts, it's very difficult to manage things and definitely makes things difficult for reviewers. So my intention was to merge each of the new things separately, while keeping the old code working and then to start integrating things in bit. The holidays made things pile up, but since the HF team is back I trust in the next few days we will form a plan. Important notes: 1. MP is new here and should be clearly marked as an experimental feature. Which means device maps are not fixed and can change at any moment. https://github.com/huggingface/transformers/pull/9412 What we could commit to is having the default device map work - i.e users don't supply any device map and then it just works. That's why I propose we start with each model implementing its own device map format (while sharing bits with common code where possible) and then over time we will find a common format. If the HF team wants to allocate time then we need to sit down, look at all the models and decide on the format ahead of time. If I'm not mistaken it looks like at the moment it's just @alexorona and I that mostly understand what's going on, so it'd be great to have someone from HF to get on top of MP. I'd be happy to sit down with that person and explain what I learned in the last few weeks in person. It's not complicated. 2. As it was just pointed out https://github.com/pytorch/pytorch/issues/49961#issuecomment-754342632 this implementation is highly inefficient since it doesn't take advantage of the idle gpus, so we might have to scratch a big part of it and re-implement it using PP or something similar. The current implementation just uses extra gpus to expand available memory, but doesn't take advantage of the extra hardware. Until then we have deepspeed integration [almost ready](https://github.com/huggingface/transformers/pull/9211) and `sharded_ddp` should be available in the next few days, so users will have excellent ways to fit huge transformers models on limited hardware already. So let's not rush with MP here and think. <|||||>From what I understand, model parallelism as it's currently implemented is a naive implementation of what it's supposed to do: offer more memory so that bigger models may be trained using memory of several devices instead of a single device. It is indeed inefficient as devices as idle while others compute, so there's definitely a way of making it more efficient. @stas00, @alexorona, if you could walk us through what you have learned so that @patrickvonplaten, @sgugger and myself can understand the different options available, that would be great. Some thoughts to structure our approach towards MP: - You mention pipeline parallelism (PP) as a way to be more efficient than model parallelism (MP), as the idle devices can be used while other compute. This intuitively seems like an algorithm to set up during training, do you think we would have to modify the models themselves like what is currently done with model parallelism? - As noted by @sgugger and approved by @patrickvonplaten and myself, working on the MP API of the current models (GPT-2 and T5) is a better test-bed than trying to make it work for all models all at once. Most models are similar, and finding a generic approach (if possible!) should be feasible with just these two models for now. 
- You're right that we should not rush it, and take our time to understand what we can do best for both inference and training.<|||||>@LysandreJik No, it's not a naïve implementation of model parallelism. In addition to **data parallelism** and **model parallelism**, there is **pipeline parallelism**, which is the next level of complexity along with **zero redundancy**. Model parallelism allows us to train bigger models on GPU. Pipeline parallelism + model parallelism would allow us to train these large models faster because the GPUs are not idle. I really think the next step is to make sure model parallelism is generalized and rely on a library -- probably deepspeed -- to implement pipeline parallelism and zero redundancy. deepspeed has something called **3D parallelism**, which I believe is a form of pipeline parallelism. @stas00 is that correct? From my understanding, deepspeed has three major enhancements: - 3D parallelism - zero-redundancy that reduces the GPU memory footprint of a given module - some support for clusters, but I'm hazy on the details **Practical feature implications:** We can currently train t5-11b -- I believe the largest model in the library -- in a practical and affordable amount of time on the newest cloud instances. There are three benefits to pursuing pipeline parallelism and zero redundancy libraries: - Users could train large models faster - Users could train large models on more modest hardware - We would be prepared for the eventual release of even larger models in the 20 billion and potentially up to 100 billion parameter range<|||||>Some notes following up to the raised issues: - I need to study and experiment before I'm able to answer a lot of the questions you have been asking. For example one of the important questions @alexorona asks is whether the idling GPUs can be utilized to a high capacity by integrating other libraries like deepspeed. I will be able to answer that once I try that. - The "naive" part @LysandreJik referred to is that, say, you spread the model over 8 gpus - 7 gpus will be idling most of the time, so it'd a terribly expensive training as you would be paying per gpu and not per its utilization. So while the current solution works there must be a more efficient ways to do that. One much more complex solution suggested here: https://github.com/pytorch/pytorch/issues/49961#issuecomment-754306157 is with the RPC mechanism. Again, I don't have any experience with it, so I will eventually get to try it and comment back. - DeepSpeed's solution to huge model size is ZeRO - while it says it can support models implementing MP, it says it's not needed since we have a working solution (100B param model w/o needing MP) and my experiments showed that with sharded DDP on my weird hardware setup I can fit 3x more data, and with DeepSpeed 3-5x, and that's just with some default config. - We are on the same page wrt to making things working on a few models - t5, gpt2 and bart is ready too. Note that Bart is a better candidate than t5 because it can be asymmetrical wrt encoder/decoder-size - so it's slighly more complex (but not by much). We were discussing a specific issue of `device_map` design, which requires us to look at all models. But that's where it can stop. My plan is to finish the DeepSpeed integration - almost there and then look into Pipelines next. 
Of course, nobody needs to wait for me, I'd be just as happy for others to experiment and teach me instead ;) I commented on the current design so that the HF team better understand what we have here: https://github.com/huggingface/transformers/issues/8771#issuecomment-755113545 Let's keep the design discussion focused in one thread, otherwise we are all over multiple threads... doesn't matter which - just pick one... If you have questions or need for clarifications please don't hesitate to ask. <|||||>I rebased on https://github.com/huggingface/transformers/pull/9343 so now it's no longer possible to develop anything on Bart - the check fails because it wants all copy-cats to be the same: ``` python utils/check_copies.py Traceback (most recent call last): File "utils/check_copies.py", line 305, in <module> check_copies(args.fix_and_overwrite) File "utils/check_copies.py", line 166, in check_copies raise Exception( Exception: Found the following copy inconsistencies: - src/transformers/models/pegasus/modeling_pegasus.py: copy does not match models.bart.modeling_bart.BartAttention at line 141 - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartAttention at line 140 - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 275 - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 331 - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartAttention at line 124 - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 259 - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 315 - src/transformers/models/mbart/modeling_mbart.py: copy does not match models.bart.modeling_bart.BartAttention at line 133 - src/transformers/models/blenderbot/modeling_blenderbot.py: copy does not match models.bart.modeling_bart.BartAttention at line 126 Run `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them. make: *** [Makefile:25: extra_quality_checks] Error 1 ``` How do I move forward with my work then? I suppose the only way to proceed is to drop Bart and use one of the derivatives? So Bart isn't going MP... @patrickvonplaten, @sgugger <|||||>That's also why we should pause the BART PR for MP and make sure the general API is solid enough. Any change in BART will impact all related models (that was true before the split, since the other models were subclasses) so the same PR will need to do BART/Pegasus/mBART/marian etc. And probably the ses2seq template. 
So better make sure we're happy with the design on a model independent from the others like GPT-2 or T5 first :-) <|||||>> I rebased on #9343 so now it's no longer possible to develop anything on Bart - the check fails because it wants all copy-cats to be the same: > > ``` > python utils/check_copies.py > Traceback (most recent call last): > File "utils/check_copies.py", line 305, in <module> > check_copies(args.fix_and_overwrite) > File "utils/check_copies.py", line 166, in check_copies > raise Exception( > Exception: Found the following copy inconsistencies: > - src/transformers/models/pegasus/modeling_pegasus.py: copy does not match models.bart.modeling_bart.BartAttention at line 141 > - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartAttention at line 140 > - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 275 > - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 331 > - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartAttention at line 124 > - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 259 > - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 315 > - src/transformers/models/mbart/modeling_mbart.py: copy does not match models.bart.modeling_bart.BartAttention at line 133 > - src/transformers/models/blenderbot/modeling_blenderbot.py: copy does not match models.bart.modeling_bart.BartAttention at line 126 > Run `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them. > make: *** [Makefile:25: extra_quality_checks] Error 1 > ``` > > How do I move forward with my work then? I suppose the only way to proceed is to drop Bart and use one of the derivatives? So Bart isn't going MP... > > @patrickvonplaten, @sgugger I agree with @sgugger that it would be better to just work on TF and GPT2 until we have a solid API for now...But in general the idea is to implement the feature in Bart and then run `make fix-copies` and all other models are updated automatically. In case you add a lot of code to Bart (outside of `BartAttention`) it can very well be that this code has to be manually copied inside the other models as well (happy to help then :-) )<|||||>And big sorry for making this PR so much harder for you now! But that Bart split had to happen sooner or later<|||||>> And big sorry for making this PR so much harder for you now! But that Bart split had to happen sooner or later Surprisingly, the rebasing was super-simple. So it wasn't a hurdle at all.<|||||>1. Bart and t5 aren't exactly the same, so in order to generalize a variety of models is needed. 2. And this PR is much further ahead than t5, albeit I can spend more time merging it back into t5. If I switch to one of the original subclasses, say, MBart, and work with it instead - will the copy-checker complain just the same?<|||||>> If I switch to one of the original subclasses, say, MBart, and work with it instead - will the copy-checker complain just the same? I'm afraid so, unless you remove all `# Copied from` comments, but that defeats the purpose.<|||||>Understood. thank you! 
It sounds like this change will make future development of the bart family somewhat painful. Since the developer will have to constantly sync multiple files with their new development and it won't help the reviewers since now there will be multiple duplicated diffs. It'd be much more useful to run the check/sync periodically or at will, rather than enforcing them on each `make style`, IMO. I guess time will tell.<|||||>Thinking more about the situation - the thing is - this PR works - I put a ton of work into it - users can start using MP with the Bart family yesterday, e.g. with `--model_parallel` flag in trainer - we don't have to expose the unstable device map and using the internal default device map is sufficient for most simple uses. And if we change to a different more efficient implementation down the road - it'd be totally transparent to the users. And if it's not trainer, they can just use `model.parallelize()` without the device map, or use the device map but know it may change down the road. I'd just need to enable `self.is_parallelizable` that was just added and clean up a bit. But it's your call.<|||||>> e.g. with --model_parallel flag in trainer That's one of the thing to clean up: this flag is not necessary with the current API: we can detect if a model is parallelized and avoid a confusion with the name. I'm not saying we should throw this PR in the thrash, just that it should be paused until we have had time to do all clean up we want.<|||||>>> e.g. with --model_parallel flag in trainer > > That's one of the thing to clean up: this flag is not necessary with the current API: we can detect if a model is parallelized and avoid a confusion with the name. Do tell more? Are you planning to launch MP just because a model supports it? It sounds that you are considering dropping the `--model_parallel` cl arg in trainer Or are we talking about different things? > I'm not saying we should throw this PR in the thrash, just that it should be paused until we have had time to do all clean up we want. tldr; 1. **I'm fine with hitting the pause button as you suggested.** 2. this is a fully functional implementation - so **you actually can send users to this PR branch if they want to use MP with Bart** (as the family has been cast out after I rebased on master, it will require quite some work to re-add it to other Bart-like models). the full story: The issue is simple. Is that things are complicated. This PR overlaps with https://github.com/huggingface/transformers/pull/9323 - both have multiple changes and improvements, and I have already documented and commented on each one of the changes in both PRs, well actually 3 PRs (this one too https://github.com/huggingface/transformers/pull/9316), so leaving such code spread out over several PRs is a recipe for a huge mess down the road. It all came to be as I was working over the holidays and wasn't getting feedback (No complaints, I'm just explaining how it came to be.). As a result of it I was working on new changes but with Bart so that I could see how to generalize better. Not knowing what you'd decide I tried to leave the existing code without any API changes, hence the separate independent PRs. The bottom line is this. Regardless of whether the current implementation is efficient or not, it works. And any future more efficient implementation will use the same API on the user-side (or perhaps something more complicated) - at the moment its just one command to turn the feature on. 
So you actually can send users to this PR branch if they want to use MP with Bart-only. So the other approach I can take is to merge parts of this PR into the t5-mp PR https://github.com/huggingface/transformers/pull/9323, but it'll again be a lot of work and nobody has even looked at any of those PRs... But then we are talking about perhaps finding a more efficient solution, and perhaps deepspeed will render a lot of it pointless anyway... (Alex thinks not.) So why waste reviewers' time... makes sense not to. So yes, let's freeze this up and I go back to work on deepspeed. I have convinced myself it's the right thing to do and you got to hear my inner talk. Just remember it's totally functional in case someone needs it. Thank you for reading. <|||||>As t5 MP is broken in the trainer, I needed to see if it was the same with my Bart MP port - but it works:

```
rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 2 --n_val 2 --n_test 2 --do_predict --task summarization --data_dir xsum --model_parallel
```

So with this PR you **can** use `--model_parallel` automatically with our trainer scripts with Bart models. <|||||>As I was trying to see if I can find a way to utilize the idling GPUs, I ran these benchmarks - haven't found anything useful yet, but the interesting finding is that while we get a huge performance hit with evaluation and beam size > 1, the training time is actually faster than the non-MP version, despite all the data copying. This PR beats master on training time almost by half: 8.6 sec vs 15.8 sec, but of course it has 2 gpus vs 1 gpu!!! But it beats even the DDP solution (10.6 sec) by 20%! So perhaps there is something good in here, we just need to understand why it is faster than DDP. Unfortunately I have an uneven GPU setup, so it's hard to get very useful benchmarks. Perhaps someone with 2 identical GPUs could re-run these and report back.
For posterity here are the results I'm getting with 1x 8gb and 1x 24gb gpus: ``` # w/o MP w/o DDP rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 200 --n_val 200 --task summarization --data_dir xsum 2021-01-10 16:57:43 | INFO | __main__ | train_runtime = 15.8407 2021-01-10 16:58:02 | INFO | __main__ | val_runtime = 19.0772 # w/o MP w/ DDP rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 200 --n_val 200 --task summarization --data_dir xsum 2021-01-10 16:58:42 | INFO | __main__ | train_runtime = 10.6299 2021-01-10 16:58:53 | INFO | __main__ | val_runtime = 11.4454 # w/ MP w/o DDP rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 200 --n_val 200 --model_parallel --task summarization --data_dir xsum 2021-01-10 16:49:00 | INFO | __main__ | train_runtime = 8.6264 2021-01-10 16:51:14 | INFO | __main__ | val_runtime = 134.0955 runtime is very slow due to beam search (==4). 
same w/ --eval_beams 1 2021-01-10 16:56:10 | INFO | __main__ | train_runtime = 8.657 2021-01-10 16:56:41 | INFO | __main__ | val_runtime = 31.4318 # w/ MP w/ DDP rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 200 --n_val 200 --model_parallel --task summarization --data_dir xsum this doesn't work: can't mix this implementation of MP w/ DDP AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device GPU modules, but got device_ids [0], output_device 0, and module parameters {device(type='cuda', index=0), device(type='cuda', index=1)}. ``` <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>too long. closing.<|||||>Hello, @stas00 is there any update on BART based model parallelism? also about model.parallelize() for BlenderBot? Thanks. <|||||>This line of work has been abandoned as it's highly inefficient. Please use DeeepSpeed which works with any model https://huggingface.co/docs/transformers/main/main_classes/deepspeed
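For completeness, here is roughly what the user-facing API discussed throughout this thread looks like. This is a minimal sketch based on the `parallelize()`/`deparallelize()` methods that shipped for T5 and GPT-2; the Bart variant only ever existed on this PR branch, and the exact `device_map` format was still under discussion, so treat the split below as purely illustrative:

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# map GPU ids to transformer block indices; omit the argument to let the model
# build a default, evenly split map (t5-small has 6 blocks, hence indices 0-5)
device_map = {0: [0, 1, 2], 1: [3, 4, 5]}
model.parallelize(device_map)

# ... train or generate as usual, keeping the input tensors on the first device ...

model.deparallelize()  # moves the model back off the GPUs
```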
transformers
9,383
closed
[Marian] Doc says `config.add_bias_logits=True`, but config has `config.add_bias_logits=False`
**Question**: In the docs, it is written that Marian (contrary to Bart) has `config.add_bias_logits=True`: https://huggingface.co/transformers/model_doc/marian.html#implementation-notes. But when looking into the code: https://github.com/huggingface/transformers/blob/b01f451ca38695c60175b34d245997ef4d18231d/src/transformers/models/marian/configuration_marian.py#L25 Marian has the exact same default config as Bart, and Marian's config files online also have `config.add_bias_logits=False` - see: https://huggingface.co/Helsinki-NLP/opus-mt-en-de/resolve/main/config.json @sshleifer @patil-suraj Is the documentation no longer up to date? All the slow tests are passing...
01-02-2021 16:17:51
01-02-2021 16:17:51
I think what's going on is that `config.add_final_bias_logits` is unused. In modeling_bart.py line 148 we call

```python
self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings)))
```

regardless of the config, and then if it's in the state dict it will get loaded by `from_pretrained`. I do think that `final_logits_bias` is in the marian state dict, as this line would have `KeyError`'d during conversion otherwise: https://github.com/sshleifer/transformers_fork/blob/121ec9dced3d068352078e7c3523ecd66830e39e/src/transformers/models/marian/convert_marian_to_pytorch.py#L461-L461 <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
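To illustrate the mechanism described above with a self-contained example: the buffer is registered unconditionally with zeros, and whatever a checkpoint's state dict contains simply overwrites those zeros on load. The class and sizes below are made up purely for illustration:

```python
import torch
from torch import nn


class TinyLMHead(nn.Module):
    def __init__(self, hidden_size=16, vocab_size=100):
        super().__init__()
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)
        # registered regardless of any config flag, mirroring the modeling_bart.py pattern;
        # if "final_logits_bias" is present in a loaded state dict, these zeros get replaced
        self.register_buffer("final_logits_bias", torch.zeros((1, vocab_size)))

    def forward(self, hidden_states):
        return self.lm_head(hidden_states) + self.final_logits_bias


head = TinyLMHead()
checkpoint = {k: v.clone() for k, v in head.state_dict().items()}
checkpoint["final_logits_bias"] = torch.ones((1, 100))  # pretend the checkpoint carries a non-zero bias
head.load_state_dict(checkpoint)  # the zero buffer is overwritten by the checkpoint values
```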
transformers
9,382
closed
[docs] Fix TF base model examples: outputs.last_hidden_states -> state
# What does this PR do?

Fixes a typo in the examples of the TensorFlow-based base models, in which the returned `last_hidden_state` attribute of the model output is incorrectly listed as `last_hidden_states`.

Fixes #9376

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).

## Who can review?

@julien-c @patrickvonplaten
01-02-2021 15:14:17
01-02-2021 15:14:17
transformers
9,381
closed
[Docs] `past_key_values` returns a tuple of tuples as a default
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR corrects the docs regarding `past_key_values`. `past_key_values` should always be of type `Tuple[...]`. Fixes #9380 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-02-2021 14:27:30
01-02-2021 14:27:30
transformers
9,380
closed
BartModel's `past_key_values` seems to have different explanations in input_doc and output_doc
## Environment info - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Bart: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Bart The problem arises in [the document](https://huggingface.co/transformers/model_doc/bart.html) of BartModel and BartForConditionalGeneration ## To reproduce Thank you for kindly answering my question https://github.com/huggingface/transformers/issues/9298. I'm now trying to use Bart in transformers v4.1.1. I'd like to make use of `past_key_values`, which seems to have been the major change of the refactoring https://github.com/huggingface/transformers/pull/8900, but I am a bit confused about the type and shape of it. About the input of the `forward` function, it is explained as: ``` past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers with each tuple having 2 tuples each of which has 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) – Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. ``` About the output, it is explained as: ``` past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. ``` I think it will be natural if the input `past_key_values` and the output `past_key_values` have the same format and the output can be used as the input in the next step. If my understanding is correct, the document of the input is generated with `BART_INPUTS_DOCSTRING`, and the output is from `Seq2SeqModelOutput`. ``` @add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="facebook/bart-large", output_type=Seq2SeqModelOutput, config_class=_CONFIG_FOR_DOC, ``` I'm sorry if I'm wrong, but maybe the `Seq2SeqModelOutput` documentation hasn't been updated for refactoring? (When I look at the [git log](https://github.com/huggingface/transformers/commits/88ef8893cd649cc2b4adb9885aba88c750118cff/src/transformers/modeling_outputs.py), I cannot find the related commit.) I apologize if the difference in input/output format is due to some intention. If you don't mind, I'd like to ask one more question. In the refactoring of Bart, the `BartDecoderLayer` (renamed from `DecoderLayer`) seems to be updated as below: ``` python # make sure decoder uni-directional self-attn at 1st position and cross-attn at 2nd position. present_key_value = (self_attn_present_key_value, cross_attn_present_key_value) return ( hidden_states, self_attn_weights, present_key_value, cross_attn_weights, ) ``` And in the `BartDecoder`, cache is updated as below: ``` python if use_cache: next_decoder_cache += (present_key_value,) ... next_cache = next_decoder_cache if use_cache else None ``` Does it mean the Bart (and other Seq2Seq Language Models) have both `selt_atten_present_key_value` and `cross_attn_present_key_value` in `past_key_values`? 
## Expected behavior Maybe the document of `Seq2SeqModelOutput` needs to be updated. I apologize if the difference in the input/output explanations is due to some intention.
01-02-2021 12:52:26
01-02-2021 12:52:26
Hey @forest1988, Thanks for your issue! You're 100% correct. The docs need to be updated here! The output is actually never a list, it should always be a `Tuple[Tuple[torch.FloatTensor]]` - I'll make a PR afterward. And in Bart, `past_key_values` always consists of `self_attn_present_key_value` and `cross_attn_present_key_value`.<|||||>Hi @patrickvonplaten, Thank you for your quick response to this issue! The update of the docs and your answer to my question -- what `past_key_values` consists of -- are very helpful for me! <|||||>Hi @patrickvonplaten, Excuse me for my frequent questions. I created a new issue https://github.com/huggingface/transformers/issues/9391, in which I ask for your help with the `past_key_values` in Bart (Seq2SeqLM) and GPT-2 (CausalLM). I think it is not an error, but a feature request. If you could check it out when you have time, it would be greatly appreciated.
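A quick way to check the layout discussed in this thread is to run a forward pass with `use_cache=True` and inspect the returned cache. The sketch below uses `facebook/bart-base` purely as an example; note that the exact nesting of each per-layer entry (a pair of self-attention/cross-attention tuples vs. a flat tuple of four tensors) has varied between library versions, so the printed structure is version-dependent:

```python
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartModel.from_pretrained("facebook/bart-base")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs, use_cache=True)

past = outputs.past_key_values
print(type(past), len(past))  # a tuple with one entry per decoder layer
print(len(past[0]))           # per-layer self-attention and cross-attention key/value states
```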
transformers
9,379
closed
Improve documentation coverage for Bertweet
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #9035 @sgugger added docs for Bertweet ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-02-2021 12:39:32
01-02-2021 12:39:32
Thanks @Qbiwan!
transformers
9,378
closed
[Docs] Tokenizer Squad 2.0 example
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #9326 This PR fixes the docs. I ran following code from (https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0) to see whether the Squad tokenization works as expected. Concatenated code from examples: ```python #!/usr/bin/env python3 import json from pathlib import Path from transformers import DistilBertTokenizerFast def read_squad(path): path = Path(path) with open(path, 'rb') as f: squad_dict = json.load(f) contexts = [] questions = [] answers = [] for group in squad_dict['data']: for passage in group['paragraphs']: context = passage['context'] for qa in passage['qas']: question = qa['question'] for answer in qa['answers']: contexts.append(context) questions.append(question) answers.append(answer) return contexts, questions, answers train_contexts, train_questions, train_answers = read_squad('train-v2.0.json') val_contexts, val_questions, val_answers = read_squad('dev-v2.0.json') def add_end_idx(answers, contexts): for answer, context in zip(answers, contexts): gold_text = answer['text'] start_idx = answer['answer_start'] end_idx = start_idx + len(gold_text) # sometimes squad answers are off by a character or two – fix this if context[start_idx:end_idx] == gold_text: answer['answer_end'] = end_idx elif context[start_idx-1:end_idx-1] == gold_text: answer['answer_start'] = start_idx - 1 answer['answer_end'] = end_idx - 1 # When the gold label is off by one character elif context[start_idx-2:end_idx-2] == gold_text: answer['answer_start'] = start_idx - 2 answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters add_end_idx(train_answers, train_contexts) add_end_idx(val_answers, val_contexts) tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True) val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True) def add_token_positions(encodings, answers): start_positions = [] end_positions = [] for i in range(len(answers)): start_positions.append(encodings.char_to_token(i, answers[i]['answer_start'])) end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1)) # if None, the answer passage has been truncated if start_positions[-1] is None: start_positions[-1] = tokenizer.model_max_length if end_positions[-1] is None: end_positions[-1] = tokenizer.model_max_length encodings.update({'start_positions': start_positions, 'end_positions': end_positions}) add_token_positions(train_encodings, train_answers) add_token_positions(val_encodings, val_answers) ``` Then I checked that the tokenization is correct with this 
helper function for a couple of ids: ```python def show_answer(idx): print("Tokenized", tokenizer.decode(train_encodings['input_ids'][idx][train_encodings['start_positions'][idx]: train_encodings['end_positions'][idx]])) print("Real", train_answers[idx]['text']) ``` It turns out that the tokenization was almost always incorrect: 1) The standard case should not be: ```python encodings.char_to_token(i, answers[i]['answer_end'] - 1) ``` , but ```python encodings.char_to_token(i, answers[i]['answer_end']) ``` 2) It might happen that `char_to_token` points to a space character which has no corresponding token and is therefore `None`. In this case the character after the space should be used. The fix proposed in the PR corrects this behavior. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-02-2021 12:37:37
01-02-2021 12:37:37
Thank you for the PR. Please also update the documentation here: https://huggingface.co/transformers/custom_datasets.html#qa-squad - change the line `end_positions[-1] = encodings.char_to_token(i, answers[i]['answer_end'] + 1)` to `end_positions[-1] = tokenizer.model_max_length`.<|||||>So what is the state of this issue? Which version of the processing script should we use?
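For readers landing here from the docs, the fix described in this PR boils down to something like the sketch below. The exact off-by-one conventions were iterated on in the linked documentation page, so treat this as an illustrative version of the idea (use `answer_end` directly, and fall back to the next character when `char_to_token` returns `None` because it hit a space), not the authoritative final code:

```python
def add_token_positions(encodings, answers, tokenizer):
    start_positions, end_positions = [], []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]["answer_start"]))
        end_positions.append(encodings.char_to_token(i, answers[i]["answer_end"]))
        # if the start position is None, the answer passage has been truncated
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        # if answer_end lands on a space, char_to_token returns None;
        # per the PR description, use the character after the space instead
        if end_positions[-1] is None:
            end_positions[-1] = encodings.char_to_token(i, answers[i]["answer_end"] + 1)
    encodings.update({"start_positions": start_positions, "end_positions": end_positions})
```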
transformers
9,377
closed
replacing apex.normalization.FusedLayerNorm with torch.nn.LayerNorm
It seems that the time has arrived to drop `apex.normalization.FusedLayerNorm` in favor of `torch.nn.LayerNorm`

1. the latter was ported more than a year ago from apex https://github.com/pytorch/pytorch/pull/27634 (around pt-1.4)
2. it's faster than the apex version according to my benchmarks https://github.com/pytorch/pytorch/issues/37713#issuecomment-753434842 (**33% faster on rtx-3090!**, 10% faster on gtx-1070) **but note:** this same benchmark run here https://github.com/pytorch/fairseq/issues/2012#issuecomment-622607286 on V100 reports the opposite - that the native is slower (pt-1.5). So it might help to run this very quick benchmark on other cards and compare. In particular, if you have access to a V100 please report back the findings at this thread: https://github.com/pytorch/pytorch/issues/37713

The main reason for this need is that `apex.normalization.FusedLayerNorm` is buggy (corrupts memory) when it comes to switching devices, which is done a lot under Model Parallel. https://github.com/NVIDIA/apex/issues/1022

With `apex.normalization.FusedLayerNorm` things fail a lot under MP and require sticking `torch.cuda.set_device(id)` in many many places as a workaround :( Since this overload is used at the model's init time, it's not possible to avoid it under MP, as the latter gets activated after the model's init.

I will use that workaround if you find out that apex is still faster on some important-to-consider hardware. And, of course, in that case please report back to the pytorch team so that they could fix it. Otherwise apex support is pretty much gone and it's just a matter of time before apex will be unusable.

The models that need that change are bart/fsmt/prophetnet.

@patrickvonplaten, @LysandreJik
01-02-2021 06:13:21
01-02-2021 06:13:21
I'm good with changing to `torch.nn.LayerNorm`. @stas00 - do you know what the advantage of `apex.normalization.FusedLayerNorm` is supposed to be? Why did we add `apex.normalization.FusedLayerNorm` in the first place?<|||||>Prior to about a year ago, `apex.normalization.FusedLayerNorm` was faster than `torch.nn.LayerNorm`, but then the former got ported to native `torch.nn.LayerNorm`, and now the native appears to be faster - at least on the 2 cards I have experimented with. I checked with pt-1.4 .. pt-1.8.dev.

If you have cards other than the gtx-1070/rtx-3090 that I benchmarked with, please run that benchmark and see if it holds true for other cards: https://github.com/pytorch/pytorch/issues/37713#issuecomment-753434842 It only takes a few seconds if you have apex installed already. To install apex:

```
git clone https://github.com/NVIDIA/apex
cd apex
rm -rf build
pip install --global-option="--cpp_ext" --global-option="--cuda_ext" .
```

The benchmark measures and reports a total run time, so the smaller the numbers the faster it is. If you do run the benchmarks, please post your results at https://github.com/pytorch/pytorch/issues/37713 so that it can be seen whether it's safe to drop `apex.normalization.FusedLayerNorm` based on hard data and not anecdotal info. Thank you.
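For anyone who wants to reproduce the comparison on their own card without digging through the linked thread, a minimal timing sketch along the following lines should do. The tensor sizes and iteration count are arbitrary, and it assumes apex is already installed as described above:

```python
import time

import torch
from torch.nn import LayerNorm
from apex.normalization import FusedLayerNorm


def bench(norm_cls, hidden=1024, steps=500, device="cuda"):
    norm = norm_cls(hidden).to(device)
    x = torch.randn(64, 512, hidden, device=device, requires_grad=True)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(steps):
        norm(x).sum().backward()
    torch.cuda.synchronize()
    return time.time() - start  # total seconds; lower is faster


print("torch.nn.LayerNorm :", bench(LayerNorm))
print("apex FusedLayerNorm:", bench(FusedLayerNorm))
```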
transformers
9,376
closed
[docs] TFRobertaModel example: last_hidden_states -> last_hidden_state
This is a documentation error on the currently deployed version of https://huggingface.co/transformers/ ### Who can help examples/distillation: @VictorSanh documentation: @sgugger ## Information Model I am using: TFRoberta The problem arises when using: * [x] the official example scripts ## To reproduce Steps to reproduce the behavior: 1. View the code example at https://huggingface.co/transformers/model_doc/roberta.html#tfrobertamodel 2. `last_hidden_states = outputs.last_hidden_states` should be `last_hidden_states = outputs.last_hidden_state` The current incorrect spelling will yield an error. I apologize that I was not able to find that line in the repo, otherwise I would submit a PR.
01-01-2021 17:48:47
01-01-2021 17:48:47
looks like this is what you're looking for: https://github.com/huggingface/transformers/blob/ae333d04b29a25be1a70eaccd6260c294c243c5b/src/transformers/file_utils.py#L842-L855<|||||>Hey @ck37, Thanks for your issue! Yes, this typo should be corrected -> it would be great if you could open a PR :-)
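For reference, the corrected usage should read along these lines (using `roberta-base` as an example checkpoint):

```python
from transformers import RobertaTokenizer, TFRobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = TFRobertaModel.from_pretrained("roberta-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)

last_hidden_states = outputs.last_hidden_state  # note: the attribute name is singular
```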
transformers
9,375
closed
Fix Typo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) I fixed typo in the comment. ## Before submitting - [ V] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-01-2021 13:32:28
01-01-2021 13:32:28
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,374
closed
How do I handle class imbalance for text data when using pretrained models like BERT?
I have a skewed dataset consisting of samples of the form:

```
Category 1 10000
Category 2 2000
Category 3 400
Category 4 300
Category 5 100
```

The dataset consists of text with data labeled into one of the five categories. I am trying to use pretrained models like BERT for the classification task, but the model fails to identify categories 3-5. I have tried to apply class weights in the loss criterion; it doesn't help much, although it gives better performance compared to simple fine-tuning of the pretrained models. I have come to know about SMOTE and other methods for handling class imbalance issues. But since most transformer models expect the inputs as text, which is later tokenized by their respective tokenizers, I am not able to do any kind of oversampling. If there is a workaround for this, I would be interested to know about it.
01-01-2021 12:58:59
01-01-2021 12:58:59
Hi! You could try replacing the CrossEntropy loss with [this Dice Loss](https://github.com/fursovia/self-adj-dice), which may help you with the imbalance issues. In the paper linked in the repo they explain the design process. I have tried it with mixed results, although for my (skewed) dataset, weighting the CrossEntropy loss with the inverse frequency of each category has worked best. Let me know if it works for you 👍🏻 As a last resort, you could try undersampling category 1 to match the second, and maybe combine this with a weighted loss as well.<|||||>Thanks @viantirreau for your suggestions I actually tried to use the dice loss as well as class weights with crossentropy loss and the results I got from the crossentropyloss was actually better than what I am getting with the dice loss , however both of them fails to detect the categories 3-5 . I will try to do undersampling as the last resort however speaking of the class weights I have used the sklearn's compute_class_weight for getting my class_weights as follows: ``` from sklearn.utils.class_weight import compute_class_weight #compute the class weights class_wts = compute_class_weight('balanced', np.unique(train_labels), train_labels) ``` Can you suggest any other workaround other than this strategy , I have came to know that neural networks tends to ignore class weights through an answer on one of the stackexchange sites . <|||||>Hi! I think it shouldn't make any difference, using my method returns exactly the same as Sklearn's `compute_class_weight`, but normalized so as to add up to 1. Using these class counts, > Category 1 10000 > Category 2 2000 > Category 3 400 > Category 4 300 > Category 5 100 I get the following weights for the respective categories `array([0.00608519, 0.03042596, 0.15212982, 0.20283976, 0.60851927])`. Another nice (and extreme) experiment you could try is to over-emphasize the weights for the underrepresented classes, for example using something like `array([0.001, 0.001, 0.001, 0.001, 0.996])`, just as a sanity check to confirm the optimizer learns something about category 5. I would also start testing the model's predictions on the training data first (it should overfit), and only then try to measure its generalization abilities on a held out development set. Maybe your gradients are not backpropagating to the first layers, your learning rate is way too big or you need some warmup steps. Let me know if any of this works :) Good luck!<|||||>Hi , @viantirreau Thanks for your suggestions. I did try it by reducing the class weights for majority classes and emphasizing the weights for minority class say category 5 , I found out that my neural network is still not able to learn anything about those classes , I have tried it with learning rates 1e-5,2e-5,5e-5 and warmup step of 1000 however no improvement is still being made on it . Any optimization strategies for the hyperparameters you can suggest?<|||||>You're welcome! Mmh interesting, what Transformers model are you using? Also, from what pretrained checkpoint are you initializing it? Are you sure there are no warnings like 'missing parameters, initializing from scratch'? I have faced some vanishing gradient problems in the past that manifest as an unexplainable "preference" for a class, so I'd make sure that your gradients are alright. I find [this](https://gist.github.com/viantirreau/ec591a428a5c0112bd8fa84f70968574) code snippet pretty useful to diagnose the gradient flow by plotting its values across each layer. 
If you use Weights&Biases as a logging tool, you can `watch` the model and create even nicer plots in their dashboard. A warmup strategy was crucial in my experience to eliminate the gradient problems. Also, if you are manually adjusting the attention masks or some of the model inputs, make sure to not pass ignore_index in some/all of the inputs. Some prints will help in making sure that the model inputs are as expected. Another idea I'd test is to completely eliminate your categories 1 and 2 from the training examples, and see if the same phenomenon happens to the most common class by then (should be category 3). Try this alone and see if including the inverse frequency weights in the loss helps in any way. Good luck! Good luck!<|||||>Hi , @viantirreau sorry for the delay in response I haven't received any warnings as such . I am using the bert transformer with bert-base-multilingual-cased as the checkpoint , I was trying to first build a custom model from the final output layer of the BERT model in order to accomodate the class imbalance issue . I haven't tried the weights and biases yet will surely check it out. I will try your other suggestions and will let you know about it. Thanks for your suggestions. <|||||>Hi , @viantirreau sorry for the delay in response .I finally figured out the reason behind the performance degradation it was because I was freezing the base layers and only fine tuning one extra layer which I added to the base model. Since, the model had the data imbalance issue already into it ,it was being biased towards the majority samples . It however performed much better when I unfreezed the base layers , however on the cost of additional gpu training time.<|||||>Hi, @nikhil6041. I'm glad you figured it out. Thanks for reaching back with your experience and solution! 🙌🏻 <|||||>I am actually having the same problem you experienced. I am building a multi-label multi-class classification Bert/distilbert model and encountered the same issue with my 20 classes. Of course the data is imbalanced, and like you I thought I had locked down the base layers but I realized I hadn't and that model performed slight better with the imbalanced data than the locked down model. I could not figure out why other than knowing imbalanced data is a big deal. Unfortunately, the data set I have is extremely small so that is also probably playing a big role. @viantirreau and @nikhil6041, one method I have seen used is a weighted cost function like adacost. Has anyone had any success implementing this with Distilbert? I can provide more details or open a new ticket but this seemed very closely related.<|||||>Hi @johnrodriguez190380 have you tried KL divergence loss function ? Try to use it once. Also there have been certain instances where the usage of weighted cost function doesn't help much. I don't remember the paper which pointed out this thing but I read it somewhere in a stackoverflow answer. If in case the dataset you are using is really small you can try some data augmentation techniques , you can also use this [](https://github.com/makcedward/nlpaug )repo it maybe helpful for u i guess. Let me know if anyone of this serves ur usecase.<|||||>Hi nikhil, how are you? maybe can you share your code/colab/repo to see how you solve the issue? <|||||>Hi @nikhil6041 could you please share the script you wrote for changing the loss? 
I really appreciate it if you can share it with us!<|||||>Hey @sasu08 and @un-lock-me, sorry for the late response. I used the sadice loss for this one, however it didn't solve my problem fully. There are certain other things you can possibly try, such as text augmentation (there are various ways to do it, like using synonyms and back translation, to name a few); there is one library named nlpaug which might come in handy for both of you, have a look at it. For the sadice loss part you can have a look at my repo https://github.com/nikhil6041/OLI-and-Meme-Classification, and here is the link to the nlpaug library: https://github.com/makcedward/nlpaug. Hope it helps!!<|||||>For the sadice loss I could not find it in the repo, could you please share the link here?<|||||>@un-lock-me Here it is: [sadice loss](https://github.com/fursovia/self-adj-dice) <|||||>Hi @nikhil6041 can you please share your notebook? <|||||>Hi @pratikchhapolika you can find all my notebooks in this [repo](https://github.com/nikhil6041/OLI-and-Meme-Classification)<|||||>@nikhil6041 thanks for the helpful repo. I was running the code on Google Colab and I used the provided dataset. However, I am getting this error in the training loop: `TypeError: forward() got an unexpected keyword argument 'token_type_ids'`. What can be the issue? Thanks
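Since several people in this thread asked for the weighted-loss code, here is a minimal sketch of the idea discussed above: inverse-frequency class weights fed into `CrossEntropyLoss`, wired into a `Trainer` subclass via the `compute_loss` extension point. The label counts are the ones from the issue; the model and datasets are assumed to be set up by the reader:

```python
import torch
from torch import nn
from transformers import Trainer

# inverse-frequency weights for the five categories in the issue
counts = torch.tensor([10000.0, 2000.0, 400.0, 300.0, 100.0])
class_weights = counts.sum() / (len(counts) * counts)


class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = nn.CrossEntropyLoss(weight=class_weights.to(logits.device))
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```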
transformers
9,373
closed
how to evaluate models on SUPER_GLUE benchmark?
Hi, I am trying to evaluate models on the SUPER_GLUE benchmark. However, while I can load the SUPER_GLUE dataset from the library, I can't find any metrics for this benchmark. Is there any script like _**superglue_metrics.py**_ that can evaluate models on SuperGLUE? Thanks a lot! :)
01-01-2021 12:14:32
01-01-2021 12:14:32
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
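One note that may help future readers: the `datasets` library ships metric configurations for each SuperGLUE task, so a separate `superglue_metrics.py` is not needed. A minimal sketch, using the CB task purely as an example (the dummy predictions are only there to show the call signature):

```python
from datasets import load_dataset, load_metric

dataset = load_dataset("super_glue", "cb")
metric = load_metric("super_glue", "cb")

references = dataset["validation"]["label"]
predictions = [0] * len(references)  # dummy predictions, for illustration only

print(metric.compute(predictions=predictions, references=references))
```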
transformers
9,372
closed
Why does datasets get imported when running "from transformers.models.roberta.tokenization_roberta_fast import RobertaTokenizerFast"
When running `from transformers.models.roberta.tokenization_roberta_fast import RobertaTokenizerFast` I get this warning:

> ImportWarning: To use `datasets`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.

Why is `datasets` getting imported when we import the tokenizer? Thanks! :)
12-31-2020 21:03:49
12-31-2020 21:03:49
hi @vgoklani `datasets` is not required for tokenizers, so it's unlikely to get this error when just importing the tokenizer. Are you running any examples scripts? because those require `datasets` lib<|||||>Hi @patil-suraj Happy New Year! Here is the stack trace: root@b5d80f9670ea:~/src# ipython Python 3.8.5 (default, Sep 4 2020, 07:30:14) Type 'copyright', 'credits' or 'license' for more information IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: from transformers.models.roberta.tokenization_roberta_fast import RobertaTokenizerFast 2021-01-01 12:03:22.340215: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 --------------------------------------------------------------------------- ImportWarning Traceback (most recent call last) <ipython-input-1-2758fed1e79c> in <module> ----> 1 from transformers.models.roberta.tokenization_roberta_fast import RobertaTokenizerFast /opt/conda/lib/python3.8/site-packages/transformers/__init__.py in <module> 32 absl.logging._warn_preinit_stderr = False 33 ---> 34 from . import dependency_versions_check 35 36 # Configuration /opt/conda/lib/python3.8/site-packages/transformers/dependency_versions_check.py in <module> 32 if pkg == "tokenizers": 33 # must be loaded here, or else tqdm check may fail ---> 34 from .file_utils import is_tokenizers_available 35 36 if not is_tokenizers_available(): /opt/conda/lib/python3.8/site-packages/transformers/file_utils.py in <module> 101 102 try: --> 103 import datasets # noqa: F401 104 105 # Check we're not importing a "datasets" directory somewhere /opt/conda/lib/python3.8/site-packages/datasets/__init__.py in <module> 51 52 if int(pyarrow.__version__.split(".")[1]) < 16 and int(pyarrow.__version__.split(".")[0]) == 0: ---> 53 raise ImportWarning( 54 "To use `datasets`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition.\n" 55 "If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`." ImportWarning: To use `datasets`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`. --- An older version of pyarrow was installed, but regardless, this happens immediately after the import. Upgrading pyarrow makes this warning disappear, but regardless, this shouldn't happen. <|||||>cc @sgugger <|||||>This is because transformers imports all optional dependencies (like datasets) during its init. There will be some work to avoid doing that in the coming weeks.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,371
closed
Excessive GPU-GPU communication with GPT2 making multi-GPU training slow?
Summary: on a multi-GPU system, training GPT2 seems to scale poorly unless a very fast GPU-GPU interconnect like NVLink is available. In particular, without NVLink using two GPUs is *slower* than using just one GPU. ## Environment info - `transformers` version: 4.1.1 - Platform: Linux-5.8.0-rc7-custom-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.0.dev20201214+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No? - Hardware: 2 x NVIDIA RTX 3090 w/NVLink ### Who can help Maybe @LysandreJik or @patrickvonplaten ? ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The script is a pretty basic example of training a medium-size GPT2 from scratch. The script is here: https://panda.moyix.net/~moyix/train_csrc.py The dataset and tokenized vocab: * Dataset: https://panda.moyix.net/~moyix/plainsrc_all.txt.gz (718M, gzipped) * Vocab: https://panda.moyix.net/~moyix/csrc_vocab.tar.gz The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) Training a GPT2 language model on C source code. ## To reproduce Run with only one GPU: `CUDA_VISIBLE_DEVICES=0 python train_csrc.py` Run with two GPUs, NVLink disabled: `NCCL_P2P_DISABLE=1 python train_csrc.py` Run with two GPUs and NVLink enabled: `python train_csrc.py` Here is some benchmarking I did with my dataset on transformers 3.3.1 and 4.1.1 (note the difference in ETA is just because 3.3.1 only seems to report the ETA for the current epoch): Version|NVLINK|GPUs|ETA|Perf --------|--------|-----|-----|----- 4.1.1 | Yes | 2GPU | 419:52:28 | 1.94it/s 4.1.1 | No | 2GPU | 1025:06:27 | 1.26s/it 4.1.1 | N/A | 1GPU | 599:14:57 | 2.72it/s 3.3.1 | Yes | 2GPU | 83:46:51 | 1.94it/s 3.3.1 | No | 2GPU | 204:54:22 | 1.26s/it 3.3.1 | N/A | 1GPU | 119:02:34 | 2.73it/s You can see that using two GPUs is actually slower than using a single GPU, unless NVLink is available (599 hours for 1 GPU vs 1025 hours for two GPUs). So presumably there is a large amount of GPU-GPU communication going on? ## Expected behavior Scaling should be roughly linear with the number of GPUs. Unfortunately I am not very familiar with the implementation details of GPT2 in Huggingface, but others report roughly linear scaling with Transformer models like BERT so it should work here as well: https://towardsdatascience.com/training-bert-at-a-university-eedcf940c754 Although I have a system with NVLink at home, this issue is still affecting me because I would like to be able to run this on the university HPC cluster, where most nodes do not have NVLink.
12-31-2020 17:47:12
12-31-2020 17:47:12
Not an answer to your issue/question, but have you tried running in distributed training (DDP), which is the recommended way of running over multiple GPUs: https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision Would be curious to see the same with/without NVLink experiment there.<|||||>Hmm, I don't have much experience using torch.distributed. I tried just running the existing script with `python -m torch.distributed.launch --nproc_per_node 2 train.py`, but it runs out of GPU memory almost immediately, so I assume I'm doing something wrong. If you have a link to some documentation that explains how to set up the training script so that it can be used with torch.distributed, I can give that a try.<|||||>The command you posted "should" work. @sgugger might have links to better content when he's back, but the PyTorch tutorials are pretty good: https://pytorch.org/tutorials/beginner/dist_overview.html#data-parallel-training Your initial experiment is using `DataParallel` (not `DistributedDataParallel`) under the hood.<|||||>OK, I got around to spending some more time with this today. I realized that the `run_language_modeling.py` script can do everything my script was doing, and it uses DDP by default (Note: looking at the most recent version on git, I see that `run_language_modeling.py` has been replaced by `run_clm.py`. However, after trying to upgrade transformers to that version, it seems to no longer use the GPU for reasons I don't have time to debug.). So now I'm just using that, with: ``` python -m torch.distributed.launch --nproc_per_node 2 \ ~/git/transformers/examples/language-modeling/run_language_modeling.py \ --model_type gpt2 \ --config_name ./csrc_config \ --tokenizer_name ./csrc_tokenizer \ --fp16 --fp16_opt_level O3 \ --do_train --output_dir csrc_output \ --per_device_train_batch_size 4 \ --train_data_file plainsrc_all.txt --block_size 128 ``` For single GPU I drop the `torch.distributed.launch` and use `CUDA_VISIBLE_DEVICES=1`, to disable NVLINK I use `NCCL_P2P_DISABLE=1` as before. The `--block_size 128` argument is to match the default from my training script (without it I run out of GPU RAM). Results: Model | Block Size | GPUs | NVLINK | ETA | Perf ------|------------|------|--------|-----|----- Small | 512 | 2GPU | No | 17:08:12 | 4.75it/s Small | 512 | 2GPU | Yes | 10:24:20 | 7.79it/s Small | 512 | 1GPU | N/A | 18:37:17 | 8.74it/s Medium | 512 | 2GPU | No | 43:07:49 | 1.89it/s Medium | 512 | 2GPU | Yes | 26:19:09 | 3.09it/s Medium | 512 | 1GPU | N/A | 45:36:37 | 3.57it/s Small | 128 | 2GPU | No | 48:12:05 | 6.75it/s Small | 128 | 2GPU | Yes | 21:26:31 | 15.17it/s Small | 128 | 1GPU | N/A | 30:54:41 | 21.06it/s Medium | 128 | 2GPU | No | 118:43:09 | 2.74it/s Medium | 128 | 2GPU | Yes | 51:55:58 | 6.27it/s Medium | 128 | 1GPU | N/A | 74:02:16 | 8.79it/s Large | 128 | 2GPU | No | 239:19:44 | 1.36it/s Large | 128 | 2GPU | Yes | 102:17:18 | 3.18it/s Large | 128 | 1GPU | N/A | 143:34:42 | 4.54it/s So the general observation is that for block size 512, two GPUs without NVLink are about the same performance as a single GPU. For block size 128, two GPUs without NVLink are typically quite a bit *slower* than a single GPU. It doesn't seem like DistributedDataParallel helps with this issue, in other words. <|||||>I think @sgugger has experience with multi-GPU, and works on the example scripts, pinging him!<|||||>A friend was linking me to this issue. Thank you for your work on this benchmark! It is some interesting data. 
I still believe the poor performance could be a hardware issue though. As far as I know, RTX 3090 GPUs have peer-to-peer access disable, or in other words, you cannot transfer memory from GPU to GPU on these GPUs. All data is first routed through the CPU, which is often slow because the CPU buffers are not pinned, meaning that memory transfers are _synchronous_. So in my eyes, slow performance without NVLink is a hardware issue in this case. It would be curious, though, if these numbers would be similar for peer-to-peer enabled GPUs. Do you have access to such a GPU?<|||||>You're thinking of something like P2P over PCIe? You're right that NVIDIA has disabled that for the 3090s. The only other hardware I have access to is our HPC cluster, which has RTX8000s and V100s (non-NVLINKed); I believe both show similar slowdowns. One thing I have been looking into is whether using something like DeepSpeed will help. I got their Megatron-LM example working and it does much better at scaling to two at least GPUs without NVLINK using the 1-bit Adam optimizer. I'm still waiting for my HPC job to get scheduled to confirm that it scales well there too. If that works then presumably something like what's being done for the t5-3b model here would help? https://github.com/huggingface/transformers/issues/8771<|||||>If you confirm you have the same results for the RTX 8000 that would rule out any GPU issue. It could still be a hardware issue with PCIe lanes. There is a bandwidth test I believe among the NVIDIA samples that come with CUDA with which you can test the available bandwidth to/from GPUs. If this shows good numbers it should be purely an issue of software or network architecture.<|||||>OK, I'll give this a try. Our HPC cluster is a bit busy so it may be a while before I can get a slot on the RTX 8000 nodes.<|||||>I managed to get some time on a node with 4x V100s. For the Large model, it gets 3.83s/it with an ETA of 1248:01:43 (!). Here's the output of p2pBandwidthLatencyTest on the V100 system: ``` [bd52@gv02 p2pBandwidthLatencyTest]$ ./p2pBandwidthLatencyTest [P2P (Peer-to-Peer) GPU Bandwidth Latency Test] Device: 0, Tesla V100-PCIE-32GB, pciBusID: 6, pciDeviceID: 0, pciDomainID:0 Device: 1, Tesla V100-PCIE-32GB, pciBusID: 2f, pciDeviceID: 0, pciDomainID:0 Device: 2, Tesla V100-PCIE-32GB, pciBusID: 86, pciDeviceID: 0, pciDomainID:0 Device: 3, Tesla V100-PCIE-32GB, pciBusID: d8, pciDeviceID: 0, pciDomainID:0 Device=0 CAN Access Peer Device=1 Device=0 CAN Access Peer Device=2 Device=0 CAN Access Peer Device=3 Device=1 CAN Access Peer Device=0 Device=1 CAN Access Peer Device=2 Device=1 CAN Access Peer Device=3 Device=2 CAN Access Peer Device=0 Device=2 CAN Access Peer Device=1 Device=2 CAN Access Peer Device=3 Device=3 CAN Access Peer Device=0 Device=3 CAN Access Peer Device=1 Device=3 CAN Access Peer Device=2 ***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure. So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases. 
P2P Connectivity Matrix D\D 0 1 2 3 0 1 1 1 1 1 1 1 1 1 2 1 1 1 1 3 1 1 1 1 Unidirectional P2P=Disabled Bandwidth Matrix (GB/s) D\D 0 1 2 3 0 768.57 11.42 11.52 11.53 1 11.39 770.46 11.50 11.53 2 11.42 11.43 771.22 11.45 3 11.42 11.43 11.44 769.70 Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s) D\D 0 1 2 3 0 767.06 9.93 9.68 9.49 1 9.93 769.33 9.33 9.50 2 9.87 9.35 769.70 10.05 3 9.66 9.68 9.92 770.08 Bidirectional P2P=Disabled Bandwidth Matrix (GB/s) D\D 0 1 2 3 0 771.22 15.98 16.04 16.16 1 16.00 773.51 16.11 16.07 2 15.90 15.99 772.75 15.83 3 16.05 16.01 15.85 772.55 Bidirectional P2P=Enabled Bandwidth Matrix (GB/s) D\D 0 1 2 3 0 770.84 18.72 18.41 18.07 1 18.52 772.94 18.82 18.30 2 18.41 18.16 771.80 19.13 3 18.40 17.99 18.94 771.22 P2P=Disabled Latency Matrix (us) GPU 0 1 2 3 0 1.89 14.77 14.42 14.59 1 14.52 1.91 15.50 15.50 2 15.53 15.42 1.87 14.44 3 14.76 14.71 14.51 1.82 CPU 0 1 2 3 0 2.52 8.33 8.61 8.55 1 8.20 2.49 8.50 8.49 2 8.30 8.29 2.61 8.69 3 8.41 8.36 8.74 2.56 P2P=Enabled Latency (P2P Writes) Matrix (us) GPU 0 1 2 3 0 1.86 1.60 1.65 1.64 1 1.59 1.91 1.64 1.65 2 1.65 1.63 1.88 1.58 3 1.65 1.64 1.59 1.82 CPU 0 1 2 3 0 2.51 2.05 2.02 2.02 1 2.14 2.54 2.04 2.02 2 2.28 2.18 2.61 2.18 3 2.32 2.19 2.24 2.73 NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled. ``` And for comparison, here's the dual 3090 w/NVLINK system: ``` [P2P (Peer-to-Peer) GPU Bandwidth Latency Test] Device: 0, GeForce RTX 3090, pciBusID: 1, pciDeviceID: 0, pciDomainID:0 Device: 1, GeForce RTX 3090, pciBusID: 21, pciDeviceID: 0, pciDomainID:0 Device=0 CAN Access Peer Device=1 Device=1 CAN Access Peer Device=0 ***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure. So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases. P2P Connectivity Matrix D\D 0 1 0 1 1 1 1 1 Unidirectional P2P=Disabled Bandwidth Matrix (GB/s) D\D 0 1 0 831.56 11.25 1 11.33 831.12 Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s) D\D 0 1 0 810.85 52.77 1 52.85 832.89 Bidirectional P2P=Disabled Bandwidth Matrix (GB/s) D\D 0 1 0 812.31 16.55 1 16.75 838.03 Bidirectional P2P=Enabled Bandwidth Matrix (GB/s) D\D 0 1 0 821.29 101.41 1 101.80 835.34 P2P=Disabled Latency Matrix (us) GPU 0 1 0 1.59 33.13 1 20.55 1.48 CPU 0 1 0 2.89 8.85 1 8.81 2.85 P2P=Enabled Latency (P2P Writes) Matrix (us) GPU 0 1 0 1.59 1.43 1 1.40 1.47 CPU 0 1 0 2.93 2.45 1 2.39 2.90 ```<|||||>Thank you - these data are very valuable! It also shows that no hardware problem exists. It seems you could confirm poor performance on the V100 which makes it very likely that you can also reproduce performance issues with the RTX 8000. With that, it seems the only option is that it is an issue with the combination of parallelism and network architecture. <|||||>Great benchmarks! Thank you for sharing the data, @moyix Do you have the same benchmarks for V100s too - just one set is enough (1 vs 2). Also, why are you running comparison benchmarks on such huge number of items? Running enough items so that runtime is around a few minutes should be plenty to see the difference. Or is it that you were aborting these early and just recording the projected ETA and it/s from tqdm? `e.g. --max_steps 1000` Here are some ideas that may address your issue 1. 
If I understand things right 3090 won't work at full capacity until we get pytorch w/ cuda-11.2 https://github.com/pytorch/pytorch/issues/50232 I don't know the nuances yet, but could it be that the communication channel is limited with cuda-11.0? That's why I wanted to see the results from VT100 2. In one place it was suggested to check how your GPUs are inter-connected with help of: ``` nvidia-smi topo -m ``` that's do this check with NVLink disconnected. 3. Also are sure your both GPUs running on the same speed PCIx (e.g. 8x if it's a consumer MB)? It must be, but just checking. I suppose doing a single GPU test on the other GPU would show if it's somehow on a slow PCIx slot. But I'd just test to rule that out. Should you get a slower outcome doing the same test on the 2nd gpu would explain the situation. <|||||>OK, so here is my benchmark with the same tool. **edit**: my initial benchmark had a bug in it as pointed out by @sgugger as one has to tweak `--max_steps` if changed to more gpus - I'm proposing to change that and have a way to have a fixed dataset truncation regardless of the number of gpus used. https://github.com/huggingface/transformers/issues/9801 So for 1 gpu, I had to double `--max_steps` to get the same number of items. The rest of this comment has been updated to reflect the corrected state: Hardware 2x TITAN RTX 24GB each + NVlink |type| time secs | |----|-----| | 1: | 204 | | 2:DP w/ NVlink| 110 | | 2:DDP w/ NVlink| 101 | | 2:DDP w/o NVlink | 131 | I get the same bus report w/ and w/o NCCL_P2P_DISABLE=1 - I don't think `nvidia-smi` respects this env var: ``` NCCL_P2P_DISABLE=1 nvidia-smi topo -m GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0-23 N/A GPU1 NV2 X 0-23 N/A ``` but clearly the runtime is much slower w/o the NVlink as the benchmark shows, so pytorch/cuda does respect it. Analysis: 1. DP is ~10% slower than DDP w/ NVlink, but ~15% faster than DDP w/o NVlink 2. 
DDP w/ NVLink doubles the speed of single gpu, so the communication overheard is almost nill in this particular experiment Here is the full benchmark code and outputs: ``` # 1 gpu rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0 python run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir \ /tmp/test-clm --per_device_train_batch_size 4 --max_steps 400 {'train_runtime': 204.8202, 'train_samples_per_second': 1.953, 'epoch': 0.69} # DP rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir \ /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69} # DDP rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \ run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVlink rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \ --nproc_per_node 2 run_clm.py --model_name_or_path gpt2 --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm \ --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ```<|||||>Yes, apologies for the confusion; the ETA numbers above are from aborting early (after a few minutes) and noting the ETA. I actually did compile PyTorch from source with CUDA 11.2 and it doesn't seem to have changed the results (although I don't know if there are further changes PyTorch will make to take full advantage of 11.2). Your benchmark code is much more self-contained than mine, so I will give your benchmarks a shot with the RTX8000 and V100 nodes on our cluster, but it will probably be a few days before I can get time there as the ICML deadline is very close :) Here's nvidia-smi -m topo for the 3090 machine: ``` nvidia-smi topo -m GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV4 0-31 N/A GPU1 NV4 X 0-31 N/A ```<|||||>Note that the timing compare 200 training steps, so the numbers you reported wrong @stas00 in the sense that 2 GPUs have covered 400 samples instead of 200. Training on the full dataset would therefore go twice as fast as with one GPU.<|||||>This is correct - that my report was incorrect. Thank you for validating my concern in https://github.com/huggingface/transformers/issues/9801, @sgugger That's why I'm asking for a less confusing way to truncate the dataset. I need to find an easy-way to do it so I don't have to be in full thinking capacity if I do it late at night which was the case last night. I will revisit my benchmark with corrections hopefully today. But it doesn't change the fact that nvlink gives 30% faster performance. <|||||>> Yes, apologies for the confusion; the ETA numbers above are from aborting early (after a few minutes) and noting the ETA. That's what I guessed - I am glad you didn't waste all that electricity to run these to completion! It was a smart move, since you waited a few minutes. 
> I actually did compile PyTorch from source with CUDA 11.2 and it doesn't seem to have changed the results (although I don't know if there are further changes PyTorch will make to take full advantage of 11.2). Oh, thank you for validating that! Building pytorch from source is hard! Hat off to you! Yes, we don't know whether everything has been put in place for 11.2 support. > Your benchmark code is much more self-contained than mine, so I will give your benchmarks a shot with the RTX8000 and V100 nodes on our cluster, but it will probably be a few days before I can get time there as the ICML deadline is very close :) please note that I corrected a mistake in my benchmark as kindly pointed out by @sgugger: https://github.com/huggingface/transformers/issues/9371#issuecomment-767323420 > Here's nvidia-smi -m topo for the 3090 machine: > > ``` > nvidia-smi topo -m > GPU0 GPU1 CPU Affinity NUMA Affinity > GPU0 X NV4 0-31 N/A > GPU1 NV4 X 0-31 N/A > ``` Looks very similar. Do you know what exactly: ``` NV# = Connection traversing a bonded set of # NVLinks ``` means? is NV4 better than NV2? since I get NV2. Why do you have 4? As I can see you only have 2 gpus. <|||||>According to [this table](https://docs.nvidia.com/datacenter/nvtags/0.1/nvtags-user-guide/index.html#supported-link-names) NV4 means "Connection traversing a bonded set of 4 NVLinks". There are some more details in the [GA102 whitepaper](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf): > GA102 GPUs utilize NVIDIA’s third-generation NVLink interface, which includes four x4 links, with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink. <|||||>Super! Thank you for that insight, @moyix! I started compiling performance/scalability notes here: https://github.com/huggingface/transformers/issues/9824 I summarized the useful insights from this thread. If you get a chance to validate the GPU inter-connectivity section that would be great! And if you have other insights to contribute I'm all ears. If you don't have time/inspiration to write something complete even a stab would be great and then over time we will fill it out with details and benchmarks. The idea is to discuss in-depth the different hardware/software nuances to speed up training and fit larger models. Thank you!<|||||>Very nice, I will take a look at it! While I am waiting for HPC time, I ran your benchmark script on the 3090 system while varying two parameters: the model size (gpt2, gpt2-medium, and gpt2-large) and the block size (128, 256, 512). 
The script: ``` for MODEL in gpt2 gpt2-medium gpt2-large; do for BLOCK_SIZE in 128 256 512 ; do # Skip gpt2-large at block size 512 due to memory constraints if [ $MODEL = "gpt2-large" ] && [ $BLOCK_SIZE -eq 512 ] ; then continue ; fi # 1 gpu rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0 python run_clm.py --model_name_or_path $MODEL \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir \ /tmp/test-clm --per_device_train_batch_size 4 --max_steps 400 --block_size $BLOCK_SIZE 2>&1 > /tmp/clm_bench.log result=$(grep train_runtime /tmp/clm_bench.log) echo $MODEL $BLOCK_SIZE "1GPU" $result >> clm_bench_results.log # DP rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python run_clm.py --model_name_or_path $MODEL \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir \ /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 --block_size $BLOCK_SIZE 2>&1 > /tmp/clm_bench.log result=$(grep train_runtime /tmp/clm_bench.log) echo $MODEL $BLOCK_SIZE "DP" $result >> clm_bench_results.log # DDP rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \ run_clm.py --model_name_or_path $MODEL --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 --block_size $BLOCK_SIZE 2>&1 > /tmp/clm_bench.log result=$(grep train_runtime /tmp/clm_bench.log) echo $MODEL $BLOCK_SIZE "DDP" $result >> clm_bench_results.log # DDP w/o NVlink rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \ --nproc_per_node 2 run_clm.py --model_name_or_path $MODEL --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm \ --per_device_train_batch_size 4 --max_steps 200 --block_size $BLOCK_SIZE 2>&1 > /tmp/clm_bench.log result=$(grep train_runtime /tmp/clm_bench.log) echo $MODEL $BLOCK_SIZE "DDP_no_NV" $result >> clm_bench_results.log done done ``` And the results: ``` gpt2 128 1GPU {'train_runtime': 19.5621, 'train_samples_per_second': 20.448, 'epoch': 0.09} gpt2 128 DP {'train_runtime': 16.6426, 'train_samples_per_second': 12.017, 'epoch': 0.09} gpt2 128 DDP {'train_runtime': 13.5368, 'train_samples_per_second': 14.775, 'epoch': 0.09} gpt2 128 DDP_no_NV {'train_runtime': 30.0181, 'train_samples_per_second': 6.663, 'epoch': 0.09} gpt2 256 1GPU {'train_runtime': 30.423, 'train_samples_per_second': 13.148, 'epoch': 0.17} gpt2 256 DP {'train_runtime': 22.6101, 'train_samples_per_second': 8.846, 'epoch': 0.17} gpt2 256 DDP {'train_runtime': 18.6943, 'train_samples_per_second': 10.698, 'epoch': 0.17} gpt2 256 DDP_no_NV {'train_runtime': 35.4208, 'train_samples_per_second': 5.646, 'epoch': 0.17} gpt2 512 1GPU {'train_runtime': 58.0856, 'train_samples_per_second': 6.886, 'epoch': 0.34} gpt2 512 DP {'train_runtime': 37.6376, 'train_samples_per_second': 5.314, 'epoch': 0.34} gpt2 512 DDP {'train_runtime': 32.3616, 'train_samples_per_second': 6.18, 'epoch': 0.34} gpt2 512 DDP_no_NV {'train_runtime': 49.1999, 'train_samples_per_second': 4.065, 'epoch': 0.34} gpt2-medium 128 1GPU {'train_runtime': 49.3823, 'train_samples_per_second': 8.1, 'epoch': 0.09} gpt2-medium 128 DP {'train_runtime': 40.5947, 'train_samples_per_second': 4.927, 'epoch': 0.09} gpt2-medium 128 DDP {'train_runtime': 33.4365, 'train_samples_per_second': 5.981, 'epoch': 0.09} gpt2-medium 128 DDP_no_NV {'train_runtime': 74.9924, 'train_samples_per_second': 2.667, 'epoch': 0.09} 
gpt2-medium 256 1GPU {'train_runtime': 79.6724, 'train_samples_per_second': 5.021, 'epoch': 0.17} gpt2-medium 256 DP {'train_runtime': 56.0446, 'train_samples_per_second': 3.569, 'epoch': 0.17} gpt2-medium 256 DDP {'train_runtime': 47.7543, 'train_samples_per_second': 4.188, 'epoch': 0.17} gpt2-medium 256 DDP_no_NV {'train_runtime': 89.3616, 'train_samples_per_second': 2.238, 'epoch': 0.17} gpt2-medium 512 1GPU {'train_runtime': 152.6255, 'train_samples_per_second': 2.621, 'epoch': 0.34} gpt2-medium 512 DP {'train_runtime': 92.4563, 'train_samples_per_second': 2.163, 'epoch': 0.34} gpt2-medium 512 DDP {'train_runtime': 82.1935, 'train_samples_per_second': 2.433, 'epoch': 0.34} gpt2-medium 512 DDP_no_NV {'train_runtime': 124.1163, 'train_samples_per_second': 1.611, 'epoch': 0.34} gpt2-large 128 1GPU {'train_runtime': 98.5939, 'train_samples_per_second': 4.057, 'epoch': 0.09} gpt2-large 128 DP {'train_runtime': 79.2193, 'train_samples_per_second': 2.525, 'epoch': 0.09} gpt2-large 128 DDP {'train_runtime': 65.7918, 'train_samples_per_second': 3.04, 'epoch': 0.09} gpt2-large 128 DDP_no_NV {'train_runtime': 152.2178, 'train_samples_per_second': 1.314, 'epoch': 0.09} gpt2-large 256 1GPU {'train_runtime': 154.5437, 'train_samples_per_second': 2.588, 'epoch': 0.17} gpt2-large 256 DP {'train_runtime': 106.7075, 'train_samples_per_second': 1.874, 'epoch': 0.17} gpt2-large 256 DDP [out of memory] gpt2-large 256 DDP_no_NV [out of memory] gpt2-large 512 1GPU [out of memory] gpt2-large 512 DP [out of memory] gpt2-large 512 DDP [out of memory] gpt2-large 152 DDP_no_NV [out of memory] ``` One thing that I find interesting is that the behavior I originally observed where training on a single GPU could be slower than on multiple GPUs without NVLink only seems to be true for small block sizes like 128 or (sometimes) 256. So my hypothesis is that with smaller block sizes it is effectively using smaller batches and therefore synchronizing between GPUs more often? As soon as I can get some time on our HPC I can update this with numbers for the 4xRTX8000 and the 4xV100, although the NVLink rows will no longer be applicable (since I don't have access to a machine with those cards in NVLink/NVSwitch configuration).<|||||>Awesome! Thank you for more benchmarks, @moyix Let's apply some magic to your log: ``` perl -lne 'BEGIN{ print qq[|model|block|type|runtime|sample/sec|]; print "|-" x 5, "|"} $d=qr/([\d\.]+)/; m|^(\S+) $d (\S+) ..train_runtime.. $d, .train_samples_per_second.. $d| && print qq[|$1|$2|$3|$4|$5|]' log.txt ``` but let's round it up to make reading easier: ``` perl -lne 'BEGIN{ print qq[|model|block|type|runtime|sample/sec|]; print "|-" x 5, "|"} $d=qr/([\d\.]+)/; m|^(\S+) $d (\S+) ..train_runtime.. $d, .train_samples_per_second.. $d| && print qq[|$1|$2|$3|] . int($4). "|". 
sprintf("%0.1f", $5)."|"' log.txt ``` |model|block|type|runtime|sample/sec| |-|-|-|-|-| |gpt2|128|1GPU|19|20.4| |gpt2|128|DP|16|12.0| |gpt2|128|DDP|13|14.8| |gpt2|128|DDP_no_NV|30|6.7| |gpt2|256|1GPU|30|13.1| |gpt2|256|DP|22|8.8| |gpt2|256|DDP|18|10.7| |gpt2|256|DDP_no_NV|35|5.6| |gpt2|512|1GPU|58|6.9| |gpt2|512|DP|37|5.3| |gpt2|512|DDP|32|6.2| |gpt2|512|DDP_no_NV|49|4.1| |gpt2-medium|128|1GPU|49|8.1| |gpt2-medium|128|DP|40|4.9| |gpt2-medium|128|DDP|33|6.0| |gpt2-medium|128|DDP_no_NV|74|2.7| |gpt2-medium|256|1GPU|79|5.0| |gpt2-medium|256|DP|56|3.6| |gpt2-medium|256|DDP|47|4.2| |gpt2-medium|256|DDP_no_NV|89|2.2| |gpt2-medium|512|1GPU|152|2.6| |gpt2-medium|512|DP|92|2.2| |gpt2-medium|512|DDP|82|2.4| |gpt2-medium|512|DDP_no_NV|124|1.6| |gpt2-large|128|1GPU|98|4.1| |gpt2-large|128|DP|79|2.5| |gpt2-large|128|DDP|65|3.0| |gpt2-large|128|DDP_no_NV|152|1.3| |gpt2-large|256|1GPU|154|2.6| |gpt2-large|256|DP|106|1.9| Doing a quick scan it's clear that as the model grows in size and the block in its size they results start to diverge more and more, though proportions don't change much. Probably could pipe this to convert into relative sizes and then it'd very clear. > my hypothesis is that with smaller block sizes it is effectively using smaller batches and therefore synchronizing between GPUs more often? It certainly has less data to communicate to the other gpus with smaller blocks<|||||>ok, a quick hack to add ratios relative to 1gpu, so now it's easier to see the comparison. ``` perl -lne 'BEGIN{ print qq[|model|block|type|runtime|sample/sec|ratios]; print "|-" x 6, "|"} $d=qr/([\d\.]+)/; if (m|^(\S+) $d (\S+) ..train_runtime.. $d, .train_samples_per_second.. $d|) {if($3=="1GPU") {$s=$4; print "| " x 6, "|"}; print qq[|$1|$2|$3|] . int($4). "|". sprintf("%0.1f", $5)."|".sprintf("%0.1f", $4/$s)."|"}' log.txt ``` So I added a new column runtime `ratios` and each 4 rows get recalculated wrt to their first runtime entry with 1gpu. edit: someone asked to explain the ratio and why the runtime is faster for DDP, but samples per second is smaller. Here is a puzzle to solve: 1. one cake eater eats the cake at 60 sec/cake 2. now a second cake eater joins and who eats at the same speed as the first one, but now after every bite they have to shout "ML rocks", which slows down both of them, so they are now eating 20% slower than when alone Will one cake eater finish the cake faster than two of them? 
(the answer is after the table, so you don't see it right away) |model|block|type|runtime|sample/sec|ratios |-|-|-|-|-|-| | | | | | | | |gpt2|128|1GPU|19|20.4|1.0| |gpt2|128|DP|16|12.0|0.9| |gpt2|128|DDP|13|14.8|0.7| |gpt2|128|DDP_no_NV|30|6.7|1.5| | | | | | | | |gpt2|256|1GPU|30|13.1|1.0| |gpt2|256|DP|22|8.8|0.7| |gpt2|256|DDP|18|10.7|0.6| |gpt2|256|DDP_no_NV|35|5.6|1.2| | | | | | | | |gpt2|512|1GPU|58|6.9|1.0| |gpt2|512|DP|37|5.3|0.6| |gpt2|512|DDP|32|6.2|0.6| |gpt2|512|DDP_no_NV|49|4.1|0.8| | | | | | | | |gpt2-medium|128|1GPU|49|8.1|1.0| |gpt2-medium|128|DP|40|4.9|0.8| |gpt2-medium|128|DDP|33|6.0|0.7| |gpt2-medium|128|DDP_no_NV|74|2.7|1.5| | | | | | | | |gpt2-medium|256|1GPU|79|5.0|1.0| |gpt2-medium|256|DP|56|3.6|0.7| |gpt2-medium|256|DDP|47|4.2|0.6| |gpt2-medium|256|DDP_no_NV|89|2.2|1.1| | | | | | | | |gpt2-medium|512|1GPU|152|2.6|1.0| |gpt2-medium|512|DP|92|2.2|0.6| |gpt2-medium|512|DDP|82|2.4|0.5| |gpt2-medium|512|DDP_no_NV|124|1.6|0.8| | | | | | | | |gpt2-large|128|1GPU|98|4.1|1.0| |gpt2-large|128|DP|79|2.5|0.8| |gpt2-large|128|DDP|65|3.0|0.7| |gpt2-large|128|DDP_no_NV|152|1.3|1.5| | | | | | | | |gpt2-large|256|1GPU|154|2.6|1.0| |gpt2-large|256|DP|106|1.9|0.7| and the answer to the puzzle posted at the beginning of this comment: 2 cake eaters will eat the cake faster together despite the slowdown, because they only have half a cake to finish each! Same here, while each of the GPUs in the DDP assembly performs slower due to the gradient syncing, but because it has to consume half the samples, overall the assembly will train faster. Further, this benchmark is just for 2 GPUs So going from 1GPU to 2GPUs, you create the overhead, and so you get some loss in performance, and some gain When you go from 2GPUs to 4GPUs (on the same node), it's pure performance doubling. i.e. 4GPUs will perform disproportionally faster than 2GPUs over 1 GPU. - 1 GPU has no inter-gpu communication to do - 2+ gpus have to average gradients so they add this overhead, but then they can parallelize the processing so the overhead becomes almost negligible as the number of GPUs grows The next problem is once you outgrow a single node. So the next issue is inter-node connects, which can be blazing fast (Infiniband) or super-slow (ethernet hub). So to scale from 8GPUs to 10 (for 8-gpu node), you first lose performance, because now the inter-node connection is the slow component that slows everything down. But as you add more nodes, again that overhead becomes less and less significant. Of course when working with multi-node one often uses other parallelization techniques than DDP, so it's PP or TP (https://huggingface.co/transformers/parallelism.html#concepts), and there one generally performs TP only inside a node, and PP and DP over nodes. **It'd be amazing if someone re-did this table for 1, 2, 4 gpus, then 1, 2, 4 nodes.**<|||||>OK, now we have some extensive benchmarks for the RTX8000 machine. This machine does not have NVLink, but it apparently can do P2P GPU-GPU communication via the PCI bus. However, this seems to be quite slow – slower, in fact, than disabling P2P altogether. 
Here's `nvidia-smi topo -m`: ``` GPU0 GPU1 GPU2 GPU3 mlx5_0 CPU Affinity NUMA Affinity GPU0 X SYS SYS SYS SYS 0-7 0-1 GPU1 SYS X SYS SYS SYS 0-7 0-1 GPU2 SYS SYS X SYS SYS 0-7 0-1 GPU3 SYS SYS SYS X SYS 0-7 0-1 mlx5_0 SYS SYS SYS SYS X Legend: X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks ``` I used the script from before (slightly expanded) and set `max-steps` to 800 for the single GPU case, 400 for two GPUs, and 200 for 4 GPUs. Here are the benchmarks (long!): |model|block|type|runtime|sample/sec|ratios |-|-|-|-|-|-| | | | | | | | |gpt2|128|1GPU|67|11.9|1.0| |gpt2|128|DP_2GPU|530|0.8|7.9| |gpt2|128|DDP_2GPU|350|1.1|5.2| |gpt2|128|DDP_no_P2P_2GPU|119|3.3|1.8| |gpt2|128|DP_4GPU|243|0.8|3.6| |gpt2|128|DDP_4GPU|159|1.3|2.4| |gpt2|128|DDP_no_P2P_4GPU|88|2.3|1.3| | | | | | | | |gpt2|256|1GPU|113|7.0|1.0| |gpt2|256|DP_2GPU|582|0.7|5.1| |gpt2|256|DDP_2GPU|376|1.1|3.3| |gpt2|256|DDP_no_P2P_2GPU|142|2.8|1.3| |gpt2|256|DP_4GPU|313|0.6|2.8| |gpt2|256|DDP_4GPU|174|1.1|1.5| |gpt2|256|DDP_no_P2P_4GPU|102|1.9|0.9| | | | | | | | |gpt2|512|1GPU|215|3.7|1.0| |gpt2|512|DP_2GPU|694|0.6|3.2| |gpt2|512|DDP_2GPU|426|0.9|2.0| |gpt2|512|DDP_no_P2P_2GPU|192|2.1|0.9| |gpt2|512|DP_4GPU|454|0.4|2.1| |gpt2|512|DDP_4GPU|201|1.0|0.9| |gpt2|512|DDP_no_P2P_4GPU|124|1.6|0.6| | | | | | | | |gpt2-medium|128|1GPU|183|4.4|1.0| |gpt2-medium|128|DP_2GPU|1476|0.3|8.0| |gpt2-medium|128|DDP_2GPU|863|0.5|4.7| |gpt2-medium|128|DDP_no_P2P_2GPU|280|1.4|1.5| |gpt2-medium|128|DP_4GPU|653|0.3|3.6| |gpt2-medium|128|DDP_4GPU|375|0.5|2.0| |gpt2-medium|128|DDP_no_P2P_4GPU|193|1.0|1.1| | | | | | | | |gpt2-medium|256|1GPU|306|2.6|1.0| |gpt2-medium|256|DP_2GPU|1600|0.2|5.2| |gpt2-medium|256|DDP_2GPU|919|0.4|3.0| |gpt2-medium|256|DDP_no_P2P_2GPU|339|1.2|1.1| |gpt2-medium|256|DP_4GPU|814|0.2|2.7| |gpt2-medium|256|DDP_4GPU|401|0.5|1.3| |gpt2-medium|256|DDP_no_P2P_4GPU|218|0.9|0.7| | | | | | | | |gpt2-medium|512|1GPU|573|1.4|1.0| |gpt2-medium|512|DP_2GPU|1884|0.2|3.3| |gpt2-medium|512|DDP_2GPU|1053|0.4|1.8| |gpt2-medium|512|DDP_no_P2P_2GPU|472|0.8|0.8| |gpt2-medium|512|DP_4GPU|1177|0.2|2.1| |gpt2-medium|512|DDP_4GPU|462|0.4|0.8| |gpt2-medium|512|DDP_no_P2P_4GPU|278|0.7|0.5| | | | | | | | |gpt2-large|128|1GPU|402|2.0|1.0| |gpt2-large|128|DP_2GPU|3181|0.1|7.9| |gpt2-large|128|DDP_2GPU|1760|0.2|4.4| |gpt2-large|128|DDP_no_P2P_2GPU|565|0.7|1.4| |gpt2-large|128|DP_4GPU|1361|0.1|3.4| |gpt2-large|128|DDP_4GPU|717|0.3|1.8| |gpt2-large|128|DDP_no_P2P_4GPU|349|0.6|0.9| | | | | | | | |gpt2-large|256|1GPU|642|1.2|1.0| |gpt2-large|256|DP_2GPU|3440|0.1|5.4| |gpt2-large|256|DDP_2GPU|1882|0.2|2.9| |gpt2-large|256|DDP_no_P2P_2GPU|686|0.6|1.1| |gpt2-large|256|DP_4GPU|1673|0.1|2.6| |gpt2-large|256|DDP_4GPU|770|0.3|1.2| |gpt2-large|256|DDP_no_P2P_4GPU|403|0.5|0.6| | | | | | | | |gpt2-large|512|1GPU|1168|0.7|1.0| |gpt2-large|512|DP_2GPU|3947|0.1|3.4| |gpt2-large|512|DDP_2GPU|2145|0.2|1.8| |gpt2-large|512|DDP_no_P2P_2GPU|952|0.4|0.8| |gpt2-large|512|DP_4GPU|2303|0.1|2.0| |gpt2-large|512|DDP_4GPU|902|0.2|0.8| |gpt2-large|512|DDP_no_P2P_4GPU|531|0.4|0.5| | | | | | | | |gpt2-xl|128|1GPU|770|1.0|1.0| 
|gpt2-xl|128|DP_2GPU|6391|0.1|8.3| |gpt2-xl|128|DDP_2GPU|3396|0.1|4.4| |gpt2-xl|128|DDP_no_P2P_2GPU|751|0.5|1.0| |gpt2-xl|128|DP_4GPU|2588|0.1|3.4| |gpt2-xl|128|DDP_4GPU|1356|0.1|1.8| |gpt2-xl|128|DDP_no_P2P_4GPU|635|0.3|0.8| | | | | | | | |gpt2-xl|256|1GPU|1210|0.7|1.0| |gpt2-xl|256|DP_2GPU|6826|0.1|5.6| |gpt2-xl|256|DP_4GPU|3130|0.1|2.6| <|||||>Thank you for doing this immense work, @moyix! From a quick look it appears the model size doesn't matter much, but the block size makes a big difference to how much the various DDP setups gain - the larger the block, the more benefit one gets, and for small blocks the performance is terrible.<|||||>@JJack0812, your issue report won't get addressed here, as we are talking about a totally different topic in this thread - I'd say post a separate issue, maybe under pytorch or transformers, but first study [existing tickets](https://www.google.com/search?q=RuntimeError%3A+NCCL+error+in%3A+%2Fpytorch%2Ftorch%2Flib%2Fc10d%2FProcessGroupNCCL.cpp), e.g. [this one](https://github.com/pytorch/pytorch/issues/39388) <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
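The numbers in this thread all point at the cost of the gradient all-reduce that DDP performs after every step. Below is a rough micro-benchmark sketch of just that collective; it is not the Trainer's code path, the tensor size and iteration count are arbitrary choices, and it is meant to be launched once with and once without `NCCL_P2P_DISABLE=1` (e.g. `torchrun --nproc_per_node 2 allreduce_bench.py`, or the older `python -m torch.distributed.launch --use_env`):
```python
# allreduce_bench.py - time the NCCL all_reduce that DDP issues for gradient averaging.
import os
import time

import torch
import torch.distributed as dist


def main():
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # ~124M fp32 elements (~0.5 GB), roughly the parameter count of the small gpt2 model.
    grads = torch.randn(124_000_000, device="cuda")

    for _ in range(3):  # warm-up so NCCL setup is excluded from the timing
        dist.all_reduce(grads)
    torch.cuda.synchronize()

    n_iters = 10
    start = time.time()
    for _ in range(n_iters):
        dist.all_reduce(grads)
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print(f"mean all_reduce time: {(time.time() - start) / n_iters:.3f}s")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```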
transformers
9,370
closed
Custom train/validation file not supported in run_qa.py
**Environment info**
transformers version: 4.0.1
Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.10
Python version: 3.8.5
PyTorch version (GPU?): 1.7.1+cu110 (True)
Tensorflow version (GPU?): not installed (NA)
Using GPU in script?: yes

I am trying to pass a custom dataset or a modified SQuAD dataset (in valid SQuAD format only) using the parameters `--train_file = train-v1.1.json` and `--validation_file = dev-v1.1.json`, but it does not work for me. Following the official documentation, **https://github.com/huggingface/transformers/tree/master/examples/question-answering**, this script runs fine:
```
python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
but if I use the script below:
```
python run_qa.py \
--model_name_or_path bert-base-uncased \
--train_file = train-v1.1.json \
--validation_file = dev-v1.1.json \
--do_train \
--do_eval \
--per_device_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /data1/debug_squad1/
```
**for data:** train-v1.1.json, dev-v1.1.json / train.csv, dev.csv

I get this error:
```
2020-12-31 12:00:59.821145: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2020-12-31 12:00:59.821182: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "run_qa.py", line 469, in <module>
    main()
  File "run_qa.py", line 159, in main
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
  File "/media/data2/anaconda/envs/bertQA-env/lib/python3.8/site-packages/transformers/hf_argparser.py", line 135, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 16, in __init__
  File "run_qa.py", line 142, in __post_init__
    assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
AssertionError: `train_file` should be a csv or a json file.
```
`train_file` and `validation_file` are valid parameters of run_qa.py. Can someone please help with how we can train on a specific dataset?
12-31-2020 12:17:47
12-31-2020 12:17:47
Hi @BatMrE The syntax of the command is wrong, there should be no spaces around `=` or you can also just remove the `=` So it should be either like this ```bash python run_qa.py \ --model_name_or_path bert-base-uncased \ --train_file=train-v1.1.json \ --validation_file=dev-v1.1.json \ ``` or this ```bash python run_qa.py \ --model_name_or_path bert-base-uncased \ --train_file train-v1.1.json \ --validation_file dev-v1.1.json \ ```<|||||>removing the spaces worked for me, thoe I'm still not able to run that script getting: ``` Traceback (most recent call last): File "run_qa.py", line 469, in <module> main() File "run_qa.py", line 252, in main answer_column_name = "answers" if "answers" in column_names else column_names[2] IndexError: list index out of range ``` Note: I am using official training and dev json file to run the script please see if someone can help. @patrickvonplaten / @stas00 / @vasudevgupta7<|||||>@sgugger might know this.<|||||>I had the same problem, here's what I found. If you read through the script, you'll see it uses the `datasets.load_dataset()` function to load your data (line 211). As commented in the script check out [https://huggingface.co/docs/datasets/loading_datasets.html](https://huggingface.co/docs/datasets/loading_datasets.html) to learn more. I noticed it doesn't natively support squad style json files. However you can: - Use one of the supported formats; - create your own dataset loading script or [adapt an existing loading script](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference); - or use the [squad.py loading script](https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py). You'll have to adapt the run_qa.py script a bit to use your loading script. <|||||>@Jos1988 I am bit confused in how to use squad.py file for conversion of data I have tried this `dataset = load_dataset('squad', ...)` <|||||>@BatMrE download the [squad.py](https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py) script and change the first few lines of _split_generators function(see the code below) to make `dl_manager` use your local QA dataset files instead of downloading the squad data. `self.config.data_files` uses the data_files you pass to `load_dataset` function. ```python def _split_generators(self, dl_manager): if not self.config.data_files: raise ValueError( f"At least one data file must be specified, but got data_files={self.config.data_files}" ) downloaded_files = dl_manager.download_and_extract(self.config.data_files) ...... 
``` Aftter you do the above changes, just load your dataset using: ```python dataset = load_dataset(<path to changed squad.py dataloader>, data_files={'train': <train-path>, 'validation': <validation-path>}) ``` The `data_files` contains the paths to your local train and dev QA datasets which are in squad format<|||||>I have made all the expected changes - Made changes in squad.py file - datasets = load_dataset('squad.py', data_files={'train': 'train_custom.json', 'validation': 'dev_custom.json'}) passing my custom file (which is different from orignal squad v1 files) **Note : code hits the custom file as if I pass irrelevant name it will throw error of file not found** I am getting the expected results but it is exactly same as the result I get on running ``` python run_qa.py \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` my current run script: ``` python run_qa.py \ --model_name_or_path bert-base-uncased \ --train_file=train_custom.json \ --validation_file=dev_custom.json \ --do_train \ --do_eval \ --per_device_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir/tmp/debug_squad2/ ``` also my _split_generators function in squad.py : ``` def _split_generators(self, dl_manager): if not self.config.data_files: raise ValueError( f"At least one data file must be specified, but got data_files={self.config.data_files}" ) downloaded_files = dl_manager.download_and_extract(_URLS) return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}), datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}), ] ``` how is it possible to get exactly same results for custom and official script, can someone please recommend something.. @gowtham1997 @Jos1988<|||||>@BatMrE Instead of `downloaded_files = dl_manager.download_and_extract(_URLS)` , use `downloaded_files = dl_manager.download_and_extract(self.config.data_files)` `downloaded_files = dl_manager.download_and_extract(_URLS)` downloads the squad dataset from _URLS specified in the squad.py loader file. You should instead use the local dataset files passed with `config.data_files` <|||||>Thanks @gowtham1997 , I have done some hardcoding in squad.py file to send my custom data files ``` _URLS = { "train": "train_custom.json", "dev": "dev_custom.json", } ``` Just one more thing.. I am able to use any custom data made on top of squad version 1, but I am not able to use squad version 2. As I am aware we need to use run_squad.py for squad version 2 and not run_qa, can some one add some comments on it. <|||||>@sgugger can you lend a hand here? 
I have ran into the same problem but with a different error ``` Traceback (most recent call last): File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py", line 434, in incomplete_dir yield tmp_dir File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py", line 553, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py", line 897, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose): File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/tqdm/std.py", line 1130, in __iter__ for obj in iterable: File "/home/abashir/.cache/huggingface/modules/datasets_modules/datasets/json/fb88b12bd94767cb0cc7eedcd82ea1f402d2162addc03a37e81d4f8dc7313ad9/json.py", line 75, in _generate_tables parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/GW/Health-Corpus/work/UMLS/transformers/examples/question-answering/run_qa.py", line 495, in <module> main() File "/GW/Health-Corpus/work/UMLS/transformers/examples/question-answering/run_qa.py", line 222, in main datasets = load_dataset(extension, data_files=data_files, field="data") File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py", line 483, in download_and_prepare self._save_info() File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/contextlib.py", line 130, in __exit__ self.gen.throw(type, value, traceback) File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py", line 440, in incomplete_dir shutil.rmtree(tmp_dir) File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/shutil.py", line 498, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/home/abashir/anaconda3/envs/mpi/lib/python3.7/shutil.py", line 496, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/home/abashir/.cache/huggingface/datasets/json/default-43dfe5d134316dba/0.0.0/fb88b12bd94767cb0cc7eedcd82ea1f402d2162addc03a37e81d4f8dc7313ad9.incomplete' ``` When tried the above fixes. altering the datasers line with loading the `squad.py` altered script I run into ``` 30a174f57e692deb3b377336683/squad.py", line 106, in _split_generators datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}), KeyError: 'dev' ```<|||||>@thomwolf <|||||>@abdallah197 Can you please share the script you are trying, and squad.py file changes you have done <|||||>After using modified `squad.py` and converting data to JSON. It loads the data without error but when it starts training I got the following error message. 
@gowtham1997 ``` [INFO|trainer.py:837] 2021-03-04 01:19:16,915 >> ***** Running training ***** [INFO|trainer.py:838] 2021-03-04 01:19:16,915 >> Num examples = 14842 [INFO|trainer.py:839] 2021-03-04 01:19:16,916 >> Num Epochs = 5 [INFO|trainer.py:840] 2021-03-04 01:19:16,916 >> Instantaneous batch size per device = 16 [INFO|trainer.py:841] 2021-03-04 01:19:16,916 >> Total train batch size (w. parallel, distributed & accumulation) = 48 [INFO|trainer.py:842] 2021-03-04 01:19:16,916 >> Gradient Accumulation steps = 1 [INFO|trainer.py:843] 2021-03-04 01:19:16,916 >> Total optimization steps = 1550 0%| | 0/1550 [00:00<?, ?it/s]Traceback (most recent call last): File "/okyanus/users/ctantug/transformers/examples/question-answering/run_qa.py", line 507, in <module> main() File "/okyanus/users/ctantug/transformers/examples/question-answering/run_qa.py", line 481, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 940, in train tr_loss += self.training_step(model, inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1304, in training_step loss = self.compute_loss(model, inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1334, in compute_loss outputs = model(**inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise raise self.exc_type(msg) ValueError: Caught ValueError in replica 0 on device 0. Original Traceback (most recent call last): File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1793, in forward start_logits, end_logits = logits.split(1, dim=-1) ValueError: too many values to unpack (expected 2) ```<|||||>> After using modified `squad.py` and converting data to JSON. It loads the data without error but when it starts training I got the following error message. @gowtham1997 > > ``` > [INFO|trainer.py:837] 2021-03-04 01:19:16,915 >> ***** Running training ***** > [INFO|trainer.py:838] 2021-03-04 01:19:16,915 >> Num examples = 14842 > [INFO|trainer.py:839] 2021-03-04 01:19:16,916 >> Num Epochs = 5 > [INFO|trainer.py:840] 2021-03-04 01:19:16,916 >> Instantaneous batch size per device = 16 > [INFO|trainer.py:841] 2021-03-04 01:19:16,916 >> Total train batch size (w. 
parallel, distributed & accumulation) = 48 > [INFO|trainer.py:842] 2021-03-04 01:19:16,916 >> Gradient Accumulation steps = 1 > [INFO|trainer.py:843] 2021-03-04 01:19:16,916 >> Total optimization steps = 1550 > > 0%| | 0/1550 [00:00<?, ?it/s]Traceback (most recent call last): > File "/okyanus/users/ctantug/transformers/examples/question-answering/run_qa.py", line 507, in <module> > main() > File "/okyanus/users/ctantug/transformers/examples/question-answering/run_qa.py", line 481, in main > train_result = trainer.train(resume_from_checkpoint=checkpoint) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 940, in train > tr_loss += self.training_step(model, inputs) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1304, in training_step > loss = self.compute_loss(model, inputs) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1334, in compute_loss > outputs = model(**inputs) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl > result = self.forward(*input, **kwargs) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward > outputs = self.parallel_apply(replicas, inputs, kwargs) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply > return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply > output.reraise() > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise > raise self.exc_type(msg) > ValueError: Caught ValueError in replica 0 on device 0. > Original Traceback (most recent call last): > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker > output = module(*input, **kwargs) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl > result = self.forward(*input, **kwargs) > File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1793, in forward > start_logits, end_logits = logits.split(1, dim=-1) > ValueError: too many values to unpack (expected 2) > ``` Solved it. Turns out I have to change the config file to have only two labels (one for the first sentence and one for the second).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
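Pulling the thread's advice together, here is a hedged sketch of the change to a local copy of `squad.py` plus the matching `load_dataset` call. The split keys must match the `data_files` dict you pass in, and the exact `dl_manager` behaviour depends on your `datasets` version (1.x at the time of this thread):
```python
# Sketch of the _split_generators override discussed above (placed inside your local squad.py).
import datasets


def _split_generators(self, dl_manager):
    if not self.config.data_files:
        raise ValueError("Pass data_files={'train': ..., 'validation': ...} to load_dataset")
    files = dl_manager.download_and_extract(self.config.data_files)
    return [
        datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
        datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": files["validation"]}),
    ]


# Then load your local SQuAD-format files through the modified script:
# dataset = datasets.load_dataset(
#     "path/to/modified/squad.py",
#     data_files={"train": "train_custom.json", "validation": "dev_custom.json"},
# )
```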
transformers
9,369
closed
TF >= 2.3 cleaning
# What does this PR do?
The minimal TF version has recently been raised to >=2.3; this PR removes all the <2.3 compatibility calls, mostly replacing experimental features with their stable counterparts.
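As a hedged illustration of what "experimental features replaced by their stable ones" can look like in practice (the calls this PR actually touches may differ), here is one move that is safe once the floor is TF >= 2.3:
```python
import tensorflow as tf

# Old spelling kept around for very early TF 2.x releases:
#   gpus = tf.config.experimental.list_physical_devices("GPU")
# Stable spelling, available since TF 2.1, so usable unconditionally with a >=2.3 floor:
gpus = tf.config.list_physical_devices("GPU")
print(f"{len(gpus)} GPU(s) visible to TensorFlow")
```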
12-31-2020 11:23:49
12-31-2020 11:23:49
transformers
9,368
closed
Fix utils on Windows
# What does this PR do? This PR fixes `check_repo` for Windows execution.
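The PR text does not say what actually broke on Windows, so the following is only a hypothetical illustration of the usual culprit in repo-inspection utilities: splitting or joining paths with a hard-coded `/`, which fails once paths contain `\`. The portable form goes through `os.path`:
```python
import os

# Hypothetical example path; scripts like check_repo walk the source tree and turn file
# paths into module names.
path = os.path.join("src", "transformers", "models", "bert", "modeling_bert.py")

# Portable: use os.path.sep instead of assuming "/" as the separator.
module_name = os.path.splitext(path)[0].replace(os.path.sep, ".")
print(module_name)  # -> src.transformers.models.bert.modeling_bert
```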
12-31-2020 10:51:16
12-31-2020 10:51:16
transformers
9,367
closed
Add-support-for-examples-scripts-to-run-on-sagemaker
Hello Guys, i am currently working on how we could edit/extend the fine-tuning scripts from `examples/` to work out-of-the-box within sagemaker. For that i adjusted the [`run_glue.py` script](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py). To test it I created a [custom huggingface extension for sagemaker](https://github.com/philschmid/sagemaker-sdk-huggingface) where I created a sagemaker compatible docker container and a huggingface estimator. The container was build with the `transformers==4.1.1` and `datasets==1.1.3`. That is also the reason why I only adjusted the `run_glue.py` and not any other files. The `run_glue.py` can i dynamically pass into the Sagemaker Training Job, but when i would adjust any other files yet i would have to rebuild the container... . For all the functions, which would move to a different directory I added a comment `# Should be moved to path_to_file/filename.py`. As an Example how you could use this to create a Sagemaker training job using the extension i build you would create an `HuggingFace()` Estimator and then call `.fit()`. The example i used is demonstrated below or you can find it in this [github repostiroy](https://github.com/philschmid/sagemaker-sdk-huggingface/blob/main/examples/06_transformers_existing_training_scripts/sagemaker-notebook.ipynb) ```python from huggingface.estimator import HuggingFace huggingface_estimator = HuggingFace(entry_point='run_glue.py', source_dir='../../transformers/examples/text-classification', sagemaker_session=sess, base_job_name='huggingface-sdk-extension', instance_type='ml.p3.2xlarge', instance_count=1, role=role, framework_version={'transformers':'4.1.1','datasets':'1.1.3'}, py_version='py3', hyperparameters = { 'model_name_or_path': 'distilbert-base-cased', 'task_name':'MRPC', 'do_train': True, 'do_eval': True, 'max_seq_length':'128', 'per_device_train_batch_size':32, 'learning_rate':2e-5, 'num_train_epochs': 3.0 }) huggingface_estimator.fit() ``` **_Note:_ Sagemaker Requirements** In Sagemaker you can define Hyperparameters, which are getting passed into the training script within the `HuggingFace(hyperparameters={})` dictonary. This parameter will be then passed into the training script as named arguments. So the hyperparameters from the example are going to look like this when the training script is executed. `--do_eval True --do_train True --learning_rate 2e-05 --max_seq_length 128 --model_name_or_path distilbert-base-cased --num_train_epochs 3.0 --output_dir Not defined sagemaker --per_device_train_batch_size 32 --task_name MRPC`. ### How I proceeded 1. I created a function `is_run_on_sagemaker()` to determine if the script is running in a Sagemaker Runtime environment. This function should be move to the `transformers/src/transformers/file_utils.py` file. 2. I had to adjust the `sys.argv` because: 1. `TrainingArguments` are expecting the parameter `output_dir`, but in a Sagemaker Runtime the output_dir is defined from the enviroment variable `SM_OUTPUT_DATA_DIR`. 2. `TrainingArguments` are expecting for boolean parameters not a `True` as value. If `--train_do` exist its `True` otherwise its `False`. In Sagemaker you cannot pass keys only so i removed all `True`s from the `sys.argv` at the beginning. A better solution could that we adjust the HfArgumentParser to accept `'True'` for boolean arguments. 3. 
Therefore i created an `parse_sagemaker_args()` function which: - first adds the `--output_dir` with the correct value for Sagemaker - Secound parses alle existing environment variables to check if the datasets are passed into training job. When you run a fine-tuning script in sagemaker you can pass data into `.fit()` which is on S3 and will be downloaded before the training starts. I added two options you can either add the the direct S3 uri to a file (e.g. `s3://my-data-bucket/path/to/my/training/data.csv`) or you can add a path (e.g. `s3://my-data-bucket/path/to/data`) and pass the file as hyperparameters `train_file`. - Third I had to remove all `True`s from the `sys.argv` for the boolean parameters. 4. Adjusted all file saving and model saving section and added conditions if the script is run on Sagemaker. #### Testing I tested it using the jupyter notebook I provided at the top. The log of the training script is attached: <details> <summary>details:</summary> ```bash 2020-12-31 08:22:11 Starting - Starting the training job... 2020-12-31 08:22:34 Starting - Launching requested ML instancesProfilerReport-1609402930: InProgress ...... 2020-12-31 08:23:35 Starting - Preparing the instances for training...... 2020-12-31 08:24:36 Downloading - Downloading input data 2020-12-31 08:24:36 Training - Downloading the training image..................... 2020-12-31 08:28:12 Training - Training image download completed. Training in progress..bash: cannot set terminal process group (-1): Inappropriate ioctl for device bash: no job control in this shell 2020-12-31 08:28:12,243 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training 2020-12-31 08:28:12,266 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed. 2020-12-31 08:28:12,498 sagemaker_pytorch_container.training INFO Invoking user training script. 
2020-12-31 08:28:12,878 sagemaker-training-toolkit INFO Installing dependencies from requirements.txt: /opt/conda/bin/python -m pip install -r requirements.txt Requirement already satisfied: datasets>=1.1.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (1.1.3) Requirement already satisfied: protobuf in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 3)) (3.14.0) Requirement already satisfied: multiprocess in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.70.11.1) Requirement already satisfied: pandas in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (1.1.5) Requirement already satisfied: tqdm<4.50.0,>=4.27 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (4.49.0) Requirement already satisfied: dataclasses in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.8) Requirement already satisfied: requests>=2.19.0 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.25.1) Requirement already satisfied: xxhash in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.0.0) Requirement already satisfied: pyarrow>=0.17.1 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.0.0) Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (1.19.1) Requirement already satisfied: dill in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.3.3) Collecting sentencepiece!=0.1.92 Downloading sentencepiece-0.1.94-cp36-cp36m-manylinux2014_x86_64.whl (1.1 MB) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (1.25.11) Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (3.0.4) Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (2.10) Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (2020.12.5) Requirement already satisfied: six>=1.9 in /opt/conda/lib/python3.6/site-packages (from protobuf->-r requirements.txt (line 3)) (1.15.0) Requirement already satisfied: python-dateutil>=2.7.3 in /opt/conda/lib/python3.6/site-packages (from pandas->datasets>=1.1.3->-r requirements.txt (line 1)) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /opt/conda/lib/python3.6/site-packages (from pandas->datasets>=1.1.3->-r requirements.txt (line 1)) (2020.4) Installing collected packages: sentencepiece Successfully installed sentencepiece-0.1.94 2020-12-31 08:28:15,036 sagemaker-training-toolkit INFO Invoking user script Training Env: { "additional_framework_parameters": {}, "channel_input_dirs": {}, "current_host": "algo-1", "framework_module": "sagemaker_pytorch_container.training:main", "hosts": [ "algo-1" ], "hyperparameters": { "task_name": "MRPC", "do_train": true, "num_train_epochs": 3.0, "do_eval": true, "max_seq_length": "128", "per_device_train_batch_size": 32, "learning_rate": 2e-05, 
"model_name_or_path": "distilbert-base-cased" }, "input_config_dir": "/opt/ml/input/config", "input_data_config": {}, "input_dir": "/opt/ml/input", "is_master": true, "job_name": "huggingface-sdk-extension-2020-12-31-08-22-10-312", "log_level": 20, "master_hostname": "algo-1", "model_dir": "/opt/ml/model", "module_dir": "s3://sagemaker-eu-central-1-558105141721/huggingface-sdk-extension-2020-12-31-08-22-10-312/source/sourcedir.tar.gz", "module_name": "run_glue", "network_interface_name": "eth0", "num_cpus": 8, "num_gpus": 1, "output_data_dir": "/opt/ml/output/data", "output_dir": "/opt/ml/output", "output_intermediate_dir": "/opt/ml/output/intermediate", "resource_config": { "current_host": "algo-1", "hosts": [ "algo-1" ], "network_interface_name": "eth0" }, "user_entry_point": "run_glue.py" } Environment variables: SM_HOSTS=["algo-1"] SM_NETWORK_INTERFACE_NAME=eth0 SM_HPS={"do_eval":true,"do_train":true,"learning_rate":2e-05,"max_seq_length":"128","model_name_or_path":"distilbert-base-cased","num_train_epochs":3.0,"per_device_train_batch_size":32,"task_name":"MRPC"} SM_USER_ENTRY_POINT=run_glue.py SM_FRAMEWORK_PARAMS={} SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"} SM_INPUT_DATA_CONFIG={} SM_OUTPUT_DATA_DIR=/opt/ml/output/data SM_CHANNELS=[] SM_CURRENT_HOST=algo-1 SM_MODULE_NAME=run_glue SM_LOG_LEVEL=20 SM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main SM_INPUT_DIR=/opt/ml/input SM_INPUT_CONFIG_DIR=/opt/ml/input/config SM_OUTPUT_DIR=/opt/ml/output SM_NUM_CPUS=8 SM_NUM_GPUS=1 SM_MODEL_DIR=/opt/ml/model SM_MODULE_DIR=s3://sagemaker-eu-central-1-558105141721/huggingface-sdk-extension-2020-12-31-08-22-10-312/source/sourcedir.tar.gz SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"do_eval":true,"do_train":true,"learning_rate":2e-05,"max_seq_length":"128","model_name_or_path":"distilbert-base-cased","num_train_epochs":3.0,"per_device_train_batch_size":32,"task_name":"MRPC"},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"huggingface-sdk-extension-2020-12-31-08-22-10-312","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-eu-central-1-558105141721/huggingface-sdk-extension-2020-12-31-08-22-10-312/source/sourcedir.tar.gz","module_name":"run_glue","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"run_glue.py"} SM_USER_ARGS=["--do_eval","True","--do_train","True","--learning_rate","2e-05","--max_seq_length","128","--model_name_or_path","distilbert-base-cased","--num_train_epochs","3.0","--per_device_train_batch_size","32","--task_name","MRPC"] SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate SM_HP_TASK_NAME=MRPC SM_HP_DO_TRAIN=true SM_HP_NUM_TRAIN_EPOCHS=3.0 SM_HP_DO_EVAL=true SM_HP_MAX_SEQ_LENGTH=128 SM_HP_PER_DEVICE_TRAIN_BATCH_SIZE=32 SM_HP_LEARNING_RATE=2e-05 SM_HP_MODEL_NAME_OR_PATH=distilbert-base-cased PYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages Invoking script with the following 
command: /opt/conda/bin/python run_glue.py --do_eval True --do_train True --learning_rate 2e-05 --max_seq_length 128 --model_name_or_path distilbert-base-cased --num_train_epochs 3.0 --per_device_train_batch_size 32 --task_name MRPC ['run_glue.py', '--do_eval', '--do_train', '--learning_rate', '2e-05', '--max_seq_length', '128', '--model_name_or_path', 'distilbert-base-cased', '--num_train_epochs', '3.0', '--per_device_train_batch_size', '32', '--task_name', 'MRPC', '--output_dir', '/opt/ml/output/data'] Downloading and preparing dataset glue/mrpc (download: 1.43 MiB, generated: 1.43 MiB, post-processed: Unknown size, total: 2.85 MiB) to /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4... Dataset glue downloaded and prepared to /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4. Subsequent calls will reuse this data. [2020-12-31 08:28:43.990 algo-1:31 INFO json_config.py:90] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json. [2020-12-31 08:28:43.991 algo-1:31 INFO hook.py:193] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries. [2020-12-31 08:28:43.991 algo-1:31 INFO hook.py:238] Saving to /opt/ml/output/tensors [2020-12-31 08:28:43.991 algo-1:31 INFO state_store.py:67] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist. [2020-12-31 08:28:44.017 algo-1:31 INFO hook.py:398] Monitoring the collections: losses [2020-12-31 08:28:44.017 algo-1:31 INFO hook.py:461] Hook is writing from the hook with pid: 31 [2020-12-31 08:28:45.513 algo-1:31 WARNING hook.py:978] var is not Tensor or list or tuple of Tensors, module_name:distilbert.transformer BaseModelOutput [2020-12-31 08:28:45.514 algo-1:31 WARNING hook.py:978] var is not Tensor or list or tuple of Tensors, module_name:distilbert BaseModelOutput [2020-12-31 08:28:45.523 algo-1:31 WARNING hook.py:978] var is not Tensor or list or tuple of Tensors, module_name:DistilBertForSequenceClassification SequenceClassifierOutput {'epoch': 3.0} 12/31/2020 08:28:19 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False 12/31/2020 08:28:19 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/opt/ml/output/data', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, model_parallel=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=2e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Dec31_08-28-19_algo-1', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='/opt/ml/output/data', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, fp16_backend='auto', sharded_ddp=False) #015Downloading: 0%| | 0.00/8.68k 
[00:00<?, ?B/s]#015Downloading: 28.7kB [00:00, 16.1MB/s] #015Downloading: 0%| | 0.00/4.97k [00:00<?, ?B/s]#015Downloading: 28.7kB [00:00, 19.9MB/s] #015Downloading: 0.00B [00:00, ?B/s]#015Downloading: 6.22kB [00:00, 3.90MB/s] #015Downloading: 0.00B [00:00, ?B/s]#015Downloading: 19.7kB [00:00, 106kB/s]#015Downloading: 54.5kB [00:00, 122kB/s]#015Downloading: 124kB [00:00, 152kB/s] #015Downloading: 280kB [00:00, 201kB/s]#015Downloading: 576kB [00:00, 273kB/s]#015Downloading: 959kB [00:01, 369kB/s]#015Downloading: 1.05MB [00:01, 928kB/s] #015Downloading: 0.00B [00:00, ?B/s]#015Downloading: 19.4kB [00:00, 103kB/s]#015Downloading: 54.3kB [00:00, 119kB/s]#015Downloading: 124kB [00:00, 150kB/s] #015Downloading: 298kB [00:00, 200kB/s]#015Downloading: 441kB [00:00, 582kB/s] #0150 examples [00:00, ? examples/s]#0151705 examples [00:00, 17044.33 examples/s]#0153300 examples [00:00, 16698.53 examples/s]#015 #015#0150 examples [00:00, ? examples/s]#015 #015#0150 examples [00:00, ? examples/s]#015 #01512/31/2020 08:28:28 - INFO - filelock - Lock 139800303634584 acquired on /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a.lock [INFO|file_utils.py:1301] 2020-12-31 08:28:28,367 >> https://huggingface.co/distilbert-base-cased/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmplyt9e_gw #015Downloading: 0%| | 0.00/411 [00:00<?, ?B/s]#015Downloading: 100%|██████████| 411/411 [00:00<00:00, 496kB/s] [INFO|file_utils.py:1305] 2020-12-31 08:28:28,649 >> storing https://huggingface.co/distilbert-base-cased/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a [INFO|file_utils.py:1308] 2020-12-31 08:28:28,649 >> creating metadata file for /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a 2020-12-31 08:29:30,381 sagemaker-training-toolkit INFO Reporting training SUCCESS 12/31/2020 08:28:28 - INFO - filelock - Lock 139800303634584 released on /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a.lock [INFO|configuration_utils.py:431] 2020-12-31 08:28:28,650 >> loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a [INFO|configuration_utils.py:467] 2020-12-31 08:28:28,651 >> Model config DistilBertConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "finetuning_task": "mrpc", "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "output_past": true, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "vocab_size": 28996 } [INFO|configuration_utils.py:431] 2020-12-31 08:28:28,933 >> loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at 
/root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a [INFO|configuration_utils.py:467] 2020-12-31 08:28:28,933 >> Model config DistilBertConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "output_past": true, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "vocab_size": 28996 } 12/31/2020 08:28:29 - INFO - filelock - Lock 139797608840104 acquired on /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791.lock [INFO|file_utils.py:1301] 2020-12-31 08:28:29,217 >> https://huggingface.co/bert-base-cased/resolve/main/vocab.txt not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpvm6yksc0 #015Downloading: 0%| | 0.00/213k [00:00<?, ?B/s]#015Downloading: 17%|█▋ | 36.9k/213k [00:00<00:00, 212kB/s]#015Downloading: 94%|█████████▍| 201k/213k [00:00<00:00, 282kB/s] #015Downloading: 100%|██████████| 213k/213k [00:00<00:00, 604kB/s] [INFO|file_utils.py:1305] 2020-12-31 08:28:29,855 >> storing https://huggingface.co/bert-base-cased/resolve/main/vocab.txt in cache at /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791 [INFO|file_utils.py:1308] 2020-12-31 08:28:29,855 >> creating metadata file for /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791 12/31/2020 08:28:29 - INFO - filelock - Lock 139797608840104 released on /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791.lock 12/31/2020 08:28:30 - INFO - filelock - Lock 139797608841112 acquired on /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6.lock [INFO|file_utils.py:1301] 2020-12-31 08:28:30,143 >> https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp5vnay570 #015Downloading: 0%| | 0.00/436k [00:00<?, ?B/s]#015Downloading: 8%|▊ | 36.9k/436k [00:00<00:01, 214kB/s]#015Downloading: 46%|████▌ | 201k/436k [00:00<00:00, 284kB/s] #015Downloading: 100%|██████████| 436k/436k [00:00<00:00, 1.10MB/s] [INFO|file_utils.py:1305] 2020-12-31 08:28:30,827 >> storing https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json in cache at /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6 [INFO|file_utils.py:1308] 2020-12-31 08:28:30,827 >> creating metadata file for /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6 12/31/2020 08:28:30 - INFO - filelock - Lock 139797608841112 released on 
/root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6.lock [INFO|tokenization_utils_base.py:1802] 2020-12-31 08:28:30,827 >> loading file https://huggingface.co/bert-base-cased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791 [INFO|tokenization_utils_base.py:1802] 2020-12-31 08:28:30,827 >> loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6 12/31/2020 08:28:31 - INFO - filelock - Lock 139800303634584 acquired on /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19.lock [INFO|file_utils.py:1301] 2020-12-31 08:28:31,151 >> https://huggingface.co/distilbert-base-cased/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpi2h8yubw #015Downloading: 0%| | 0.00/263M [00:00<?, ?B/s]#015Downloading: 2%|▏ | 4.13M/263M [00:00<00:06, 41.3MB/s]#015Downloading: 3%|▎ | 8.25M/263M [00:00<00:06, 41.2MB/s]#015Downloading: 5%|▍ | 12.8M/263M [00:00<00:05, 42.4MB/s]#015Downloading: 7%|▋ | 17.5M/263M [00:00<00:05, 43.8MB/s]#015Downloading: 9%|▊ | 22.4M/263M [00:00<00:05, 45.2MB/s]#015Downloading: 10%|█ | 27.3M/263M [00:00<00:05, 46.2MB/s]#015Downloading: 12%|█▏ | 32.2M/263M [00:00<00:04, 47.2MB/s]#015Downloading: 14%|█▍ | 37.3M/263M [00:00<00:04, 48.1MB/s]#015Downloading: 16%|█▌ | 42.3M/263M [00:00<00:04, 48.7MB/s]#015Downloading: 18%|█▊ | 47.3M/263M [00:01<00:04, 49.1MB/s]#015Downloading: 20%|█▉ | 52.3M/263M [00:01<00:04, 49.4MB/s]#015Downloading: 22%|██▏ | 57.6M/263M [00:01<00:04, 50.4MB/s]#015Downloading: 24%|██▍ | 63.7M/263M [00:01<00:03, 53.3MB/s]#015Downloading: 27%|██▋ | 69.9M/263M [00:01<00:03, 55.6MB/s]#015Downloading: 29%|██▉ | 76.1M/263M [00:01<00:03, 57.3MB/s]#015Downloading: 31%|███▏ | 82.3M/263M [00:01<00:03, 58.6MB/s]#015Downloading: 33%|███▎ | 88.2M/263M [00:01<00:02, 58.6MB/s]#015Downloading: 36%|███▌ | 94.5M/263M [00:01<00:02, 59.8MB/s]#015Downloading: 38%|███▊ | 101M/263M [00:01<00:02, 60.7MB/s] #015Downloading: 41%|████ | 107M/263M [00:02<00:02, 57.8MB/s]#015Downloading: 43%|████▎ | 113M/263M [00:02<00:02, 55.2MB/s]#015Downloading: 45%|████▍ | 118M/263M [00:02<00:02, 52.6MB/s]#015Downloading: 47%|████▋ | 124M/263M [00:02<00:02, 51.7MB/s]#015Downloading: 49%|████▉ | 129M/263M [00:02<00:02, 51.1MB/s]#015Downloading: 51%|█████ | 134M/263M [00:02<00:02, 50.8MB/s]#015Downloading: 53%|█████▎ | 139M/263M [00:02<00:02, 50.7MB/s]#015Downloading: 55%|█████▍ | 144M/263M [00:02<00:02, 49.6MB/s]#015Downloading: 57%|█████▋ | 149M/263M [00:02<00:02, 49.7MB/s]#015Downloading: 59%|█████▊ | 154M/263M [00:02<00:02, 49.9MB/s]#015Downloading: 60%|██████ | 159M/263M [00:03<00:02, 49.9MB/s]#015Downloading: 62%|██████▏ | 164M/263M [00:03<00:01, 49.6MB/s]#015Downloading: 64%|██████▍ | 169M/263M [00:03<00:01, 49.7MB/s]#015Downloading: 66%|██████▌ | 174M/263M [00:03<00:01, 49.8MB/s]#015Downloading: 68%|██████▊ | 179M/263M [00:03<00:01, 49.9MB/s]#015Downloading: 70%|██████▉ | 184M/263M [00:03<00:01, 49.9MB/s]#015Downloading: 72%|███████▏ | 189M/263M 
[00:03<00:01, 50.0MB/s]#015Downloading: 74%|███████▍ | 194M/263M [00:03<00:01, 50.0MB/s]#015Downloading: 76%|███████▌ | 199M/263M [00:03<00:01, 50.1MB/s]#015Downloading: 78%|███████▊ | 205M/263M [00:03<00:01, 51.3MB/s]#015Downloading: 80%|████████ | 211M/263M [00:04<00:00, 53.9MB/s]#015Downloading: 82%|████████▏ | 217M/263M [00:04<00:00, 56.1MB/s]#015Downloading: 85%|████████▍ | 223M/263M [00:04<00:00, 57.3MB/s]#015Downloading: 87%|████████▋ | 229M/263M [00:04<00:00, 58.6MB/s]#015Downloading: 89%|████████▉ | 235M/263M [00:04<00:00, 59.7MB/s]#015Downloading: 92%|█████████▏| 241M/263M [00:04<00:00, 58.4MB/s]#015Downloading: 94%|█████████▍| 247M/263M [00:04<00:00, 52.6MB/s]#015Downloading: 96%|█████████▌| 253M/263M [00:04<00:00, 51.7MB/s]#015Downloading: 98%|█████████▊| 258M/263M [00:04<00:00, 50.8MB/s]#015Downloading: 100%|█████████▉| 263M/263M [00:05<00:00, 50.9MB/s]#015Downloading: 100%|██████████| 263M/263M [00:05<00:00, 52.2MB/s] [INFO|file_utils.py:1305] 2020-12-31 08:28:36,253 >> storing https://huggingface.co/distilbert-base-cased/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19 [INFO|file_utils.py:1308] 2020-12-31 08:28:36,253 >> creating metadata file for /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19 12/31/2020 08:28:36 - INFO - filelock - Lock 139800303634584 released on /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19.lock [INFO|modeling_utils.py:1024] 2020-12-31 08:28:36,253 >> loading weights file https://huggingface.co/distilbert-base-cased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19 [WARNING|modeling_utils.py:1132] 2020-12-31 08:28:38,515 >> Some weights of the model checkpoint at distilbert-base-cased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:1143] 2020-12-31 08:28:38,515 >> Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-cased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
#015 0%| | 0/4 [00:00<?, ?ba/s]#015 25%|██▌ | 1/4 [00:00<00:00, 9.17ba/s]#015 75%|███████▌ | 3/4 [00:00<00:00, 10.17ba/s]#015100%|██████████| 4/4 [00:00<00:00, 13.12ba/s] #015 0%| | 0/1 [00:00<?, ?ba/s]#015100%|██████████| 1/1 [00:00<00:00, 29.95ba/s] #015 0%| | 0/2 [00:00<?, ?ba/s]#015100%|██████████| 2/2 [00:00<00:00, 14.81ba/s]#015100%|██████████| 2/2 [00:00<00:00, 14.77ba/s] 12/31/2020 08:28:39 - INFO - __main__ - Sample 2619 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 2916, 'input_ids': [101, 1109, 10830, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 3081, 5097, 1104, 4961, 1149, 13260, 9966, 1222, 1140, 119, 102, 20661, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 170, 3081, 118, 3674, 21100, 2998, 1106, 1103, 2175, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'label': 1, 'sentence1': 'The proceedings were taken up with prosecutors outlining their case against Amrozi , reading 33 pages of documents outlining allegations against him .', 'sentence2': 'Proceedings were taken up with prosecutors outlining their case against Amrozi , reading a 33-page accusation letter to the court .'}. 12/31/2020 08:28:39 - INFO - __main__ - Sample 456 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 509, 'input_ids': [101, 20394, 11252, 1424, 3878, 1684, 1111, 1103, 4116, 118, 5534, 1433, 1132, 170, 6539, 4010, 1111, 9283, 1105, 6646, 1110, 1919, 1344, 3075, 1104, 1397, 3625, 112, 188, 5200, 1728, 1107, 1594, 118, 7820, 20394, 11252, 15449, 119, 102, 9018, 1116, 1107, 20394, 11252, 15449, 112, 188, 4116, 118, 5534, 1433, 1132, 170, 6539, 4010, 1111, 9283, 117, 1105, 6646, 1110, 1919, 1344, 3075, 1104, 3625, 112, 188, 5200, 1728, 1107, 1103, 1594, 118, 187, 15677, 3660, 1805, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'label': 1, 'sentence1': "Chechen officials working for the Moscow-backed government are a frequent target for rebels and tension is running high ahead of next Sunday 's presidential election in war-torn Chechnya .", 'sentence2': "Officials in Chechnya 's Moscow-backed government are a frequent target for rebels , and tension is running high ahead of Sunday 's presidential election in the war-ravaged region ."}. 
12/31/2020 08:28:39 - INFO - __main__ - Sample 102 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 116, 'input_ids': [101, 6433, 111, 11767, 112, 188, 2260, 4482, 7448, 2174, 1116, 5799, 125, 119, 1969, 1827, 1106, 5103, 1495, 119, 1851, 117, 1229, 11896, 1116, 1810, 4426, 2174, 1116, 2204, 127, 119, 126, 1827, 1106, 122, 117, 20278, 119, 1851, 119, 102, 1109, 6433, 111, 11767, 112, 188, 2260, 10146, 1108, 1146, 122, 119, 3453, 1827, 117, 1137, 121, 119, 1407, 3029, 117, 1106, 5311, 1559, 119, 5599, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'label': 0, 'sentence1': "Standard & Poor 's 500 stock index futures declined 4.40 points to 983.50 , while Nasdaq futures fell 6.5 points to 1,206.50 .", 'sentence2': "The Standard & Poor 's 500 Index was up 1.75 points , or 0.18 percent , to 977.68 ."}. #015Downloading: 0%| | 0.00/1.67k [00:00<?, ?B/s]#015Downloading: 4.39kB [00:00, 3.86MB/s] [INFO|trainer.py:388] 2020-12-31 08:28:43,678 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence2, idx, sentence1. [INFO|trainer.py:388] 2020-12-31 08:28:43,678 >> The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence2, idx, sentence1. [INFO|trainer.py:703] 2020-12-31 08:28:43,680 >> ***** Running training ***** [INFO|trainer.py:704] 2020-12-31 08:28:43,680 >> Num examples = 3668 [INFO|trainer.py:705] 2020-12-31 08:28:43,680 >> Num Epochs = 3 [INFO|trainer.py:706] 2020-12-31 08:28:43,680 >> Instantaneous batch size per device = 32 [INFO|trainer.py:707] 2020-12-31 08:28:43,680 >> Total train batch size (w. 
parallel, distributed & accumulation) = 32 [INFO|trainer.py:708] 2020-12-31 08:28:43,680 >> Gradient Accumulation steps = 1 [INFO|trainer.py:709] 2020-12-31 08:28:43,680 >> Total optimization steps = 345 #015 0%| | 0/345 [00:00<?, ?it/s]#015 0%| | 1/345 [00:02<11:36, 2.03s/it]#015 1%| | 2/345 [00:02<08:19, 1.46s/it]#015 1%| | 3/345 [00:02<06:01, 1.06s/it]#015 1%| | 4/345 [00:02<04:24, 1.29it/s]#015 1%|▏ | 5/345 [00:02<03:17, 1.72it/s]#015 2%|▏ | 6/345 [00:02<02:30, 2.26it/s]#015 2%|▏ | 7/345 [00:02<01:57, 2.88it/s]#015 2%|▏ | 8/345 [00:02<01:34, 3.57it/s]#015 3%|▎ | 9/345 [00:03<01:18, 4.29it/s]#015 3%|▎ | 10/345 [00:03<01:07, 4.99it/s]#015 3%|▎ | 11/345 [00:03<00:59, 5.64it/s]#015 3%|▎ | 12/345 [00:03<00:53, 6.22it/s]#015 4%|▍ | 13/345 [00:03<00:49, 6.71it/s]#015 4%|▍ | 14/345 [00:03<00:46, 7.09it/s]#015 4%|▍ | 15/345 [00:03<00:44, 7.40it/s]#015 5%|▍ | 16/345 [00:03<00:43, 7.57it/s]#015 5%|▍ | 17/345 [00:03<00:42, 7.75it/s]#015 5%|▌ | 18/345 [00:04<00:41, 7.85it/s]#015 6%|▌ | 19/345 [00:04<00:40, 7.96it/s]#015 6%|▌ | 20/345 [00:04<00:40, 8.02it/s]#015 6%|▌ | 21/345 [00:04<00:40, 8.07it/s]#015 6%|▋ | 22/345 [00:04<00:40, 8.03it/s]#015 7%|▋ | 23/345 [00:04<00:40, 8.04it/s]#015 7%|▋ | 24/345 [00:04<00:39, 8.07it/s]#015 7%|▋ | 25/345 [00:04<00:39, 8.07it/s]#015 8%|▊ | 26/345 [00:05<00:39, 8.11it/s]#015 8%|▊ | 27/345 [00:05<00:39, 8.11it/s]#015 8%|▊ | 28/345 [00:05<00:38, 8.14it/s]#015 8%|▊ | 29/345 [00:05<00:39, 8.10it/s]#015 9%|▊ | 30/345 [00:05<00:39, 8.06it/s]#015 9%|▉ | 31/345 [00:05<00:38, 8.10it/s]#015 9%|▉ | 32/345 [00:05<00:38, 8.13it/s]#015 10%|▉ | 33/345 [00:05<00:38, 8.12it/s]#015 10%|▉ | 34/345 [00:06<00:38, 8.14it/s]#015 10%|█ | 35/345 [00:06<00:38, 8.12it/s]#015 10%|█ | 36/345 [00:06<00:38, 8.10it/s]#015 11%|█ | 37/345 [00:06<00:38, 8.10it/s]#015 11%|█ | 38/345 [00:06<00:37, 8.13it/s]#015 11%|█▏ | 39/345 [00:06<00:37, 8.10it/s]#015 12%|█▏ | 40/345 [00:06<00:37, 8.08it/s]#015 12%|█▏ | 41/345 [00:06<00:37, 8.09it/s]#015 12%|█▏ | 42/345 [00:07<00:37, 8.08it/s]#015 12%|█▏ | 43/345 [00:07<00:37, 8.09it/s]#015 13%|█▎ | 44/345 [00:07<00:37, 8.09it/s]#015 13%|█▎ | 45/345 [00:07<00:37, 8.09it/s]#015 13%|█▎ | 46/345 [00:07<00:37, 8.08it/s]#015 14%|█▎ | 47/345 [00:07<00:36, 8.08it/s]#015 14%|█▍ | 48/345 [00:07<00:36, 8.08it/s]#015 14%|█▍ | 49/345 [00:07<00:36, 8.08it/s]#015 14%|█▍ | 50/345 [00:08<00:36, 8.07it/s]#015 15%|█▍ | 51/345 [00:08<00:36, 8.08it/s]#015 15%|█▌ | 52/345 [00:08<00:36, 8.09it/s]#015 15%|█▌ | 53/345 [00:08<00:36, 8.10it/s]#015 16%|█▌ | 54/345 [00:08<00:35, 8.10it/s]#015 16%|█▌ | 55/345 [00:08<00:35, 8.09it/s]#015 16%|█▌ | 56/345 [00:08<00:35, 8.09it/s]#015 17%|█▋ | 57/345 [00:08<00:35, 8.08it/s]#015 17%|█▋ | 58/345 [00:09<00:35, 8.08it/s]#015 17%|█▋ | 59/345 [00:09<00:35, 8.02it/s]#015 17%|█▋ | 60/345 [00:09<00:35, 8.04it/s]#015 18%|█▊ | 61/345 [00:09<00:35, 7.95it/s]#015 18%|█▊ | 62/345 [00:09<00:35, 7.93it/s]#015 18%|█▊ | 63/345 [00:09<00:35, 7.97it/s]#015 19%|█▊ | 64/345 [00:09<00:35, 8.00it/s]#015 19%|█▉ | 65/345 [00:09<00:35, 7.99it/s]#015 19%|█▉ | 66/345 [00:10<00:34, 8.02it/s]#015 19%|█▉ | 67/345 [00:10<00:34, 8.04it/s]#015 20%|█▉ | 68/345 [00:10<00:34, 8.06it/s]#015 20%|██ | 69/345 [00:10<00:34, 8.08it/s]#015 20%|██ | 70/345 [00:10<00:34, 8.08it/s]#015 21%|██ | 71/345 [00:10<00:33, 8.07it/s]#015 21%|██ | 72/345 [00:10<00:33, 8.07it/s]#015 21%|██ | 73/345 [00:10<00:33, 8.03it/s]#015 21%|██▏ | 74/345 [00:11<00:33, 8.01it/s]#015 22%|██▏ | 75/345 [00:11<00:33, 8.03it/s]#015 22%|██▏ | 76/345 [00:11<00:33, 8.04it/s]#015 22%|██▏ | 77/345 [00:11<00:33, 8.05it/s]#015 
23%|██▎ | 78/345 [00:11<00:33, 8.06it/s]#015 23%|██▎ | 79/345 [00:11<00:33, 8.06it/s]#015 23%|██▎ | 80/345 [00:11<00:32, 8.07it/s]#015 23%|██▎ | 81/345 [00:11<00:32, 8.07it/s]#015 24%|██▍ | 82/345 [00:12<00:32, 8.07it/s]#015 24%|██▍ | 83/345 [00:12<00:32, 8.08it/s]#015 24%|██▍ | 84/345 [00:12<00:32, 8.08it/s]#015 25%|██▍ | 85/345 [00:12<00:32, 8.09it/s]#015 25%|██▍ | 86/345 [00:12<00:32, 8.08it/s]#015 25%|██▌ | 87/345 [00:12<00:31, 8.08it/s]#015 26%|██▌ | 88/345 [00:12<00:31, 8.08it/s]#015 26%|██▌ | 89/345 [00:12<00:31, 8.10it/s]#015 26%|██▌ | 90/345 [00:13<00:31, 8.09it/s]#015 26%|██▋ | 91/345 [00:13<00:31, 8.08it/s]#015 27%|██▋ | 92/345 [00:13<00:31, 8.08it/s]#015 27%|██▋ | 93/345 [00:13<00:31, 8.08it/s]#015 27%|██▋ | 94/345 [00:13<00:31, 8.08it/s]#015 28%|██▊ | 95/345 [00:13<00:30, 8.09it/s]#015 28%|██▊ | 96/345 [00:13<00:30, 8.08it/s]#015 28%|██▊ | 97/345 [00:13<00:30, 8.09it/s]#015 28%|██▊ | 98/345 [00:14<00:30, 8.09it/s]#015 29%|██▊ | 99/345 [00:14<00:30, 8.08it/s]#015 29%|██▉ | 100/345 [00:14<00:30, 8.08it/s]#015 29%|██▉ | 101/345 [00:14<00:30, 8.09it/s]#015 30%|██▉ | 102/345 [00:14<00:30, 7.97it/s]#015 30%|██▉ | 103/345 [00:14<00:30, 7.98it/s]#015 30%|███ | 104/345 [00:14<00:30, 7.99it/s]#015 30%|███ | 105/345 [00:14<00:30, 7.99it/s]#015 31%|███ | 106/345 [00:15<00:29, 7.99it/s]#015 31%|███ | 107/345 [00:15<00:29, 8.00it/s]#015 31%|███▏ | 108/345 [00:15<00:29, 8.01it/s]#015 32%|███▏ | 109/345 [00:15<00:29, 8.02it/s]#015 32%|███▏ | 110/345 [00:15<00:29, 8.01it/s]#015 32%|███▏ | 111/345 [00:15<00:29, 8.00it/s]#015 32%|███▏ | 112/345 [00:15<00:29, 8.00it/s]#015 33%|███▎ | 113/345 [00:15<00:28, 8.00it/s]#015 33%|███▎ | 114/345 [00:16<00:28, 8.00it/s]#015 34%|███▎ | 116/345 [00:16<00:27, 8.39it/s]#015 34%|███▍ | 117/345 [00:16<00:27, 8.30it/s]#015 34%|███▍ | 118/345 [00:16<00:27, 8.24it/s]#015 34%|███▍ | 119/345 [00:16<00:27, 8.19it/s]#015 35%|███▍ | 120/345 [00:16<00:27, 8.12it/s]#015 35%|███▌ | 121/345 [00:16<00:27, 8.11it/s]#015 35%|███▌ | 122/345 [00:16<00:27, 8.10it/s]#015 36%|███▌ | 123/345 [00:17<00:27, 8.09it/s]#015 36%|███▌ | 124/345 [00:17<00:27, 8.09it/s]#015 36%|███▌ | 125/345 [00:17<00:27, 8.09it/s]#015 37%|███▋ | 126/345 [00:17<00:27, 8.10it/s]#015 37%|███▋ | 127/345 [00:17<00:26, 8.09it/s]#015 37%|███▋ | 128/345 [00:17<00:26, 8.04it/s]#015 37%|███▋ | 129/345 [00:17<00:26, 8.04it/s]#015 38%|███▊ | 130/345 [00:17<00:26, 8.05it/s]#015 38%|███▊ | 131/345 [00:18<00:26, 8.06it/s]#015 38%|███▊ | 132/345 [00:18<00:26, 8.06it/s]#015 39%|███▊ | 133/345 [00:18<00:26, 8.06it/s]#015 39%|███▉ | 134/345 [00:18<00:26, 8.06it/s]#015 39%|███▉ | 135/345 [00:18<00:26, 8.06it/s]#015 39%|███▉ | 136/345 [00:18<00:25, 8.07it/s]#015 40%|███▉ | 137/345 [00:18<00:25, 8.06it/s]#015 40%|████ | 138/345 [00:18<00:25, 8.06it/s]#015 40%|████ | 139/345 [00:19<00:25, 8.05it/s]#015 41%|████ | 140/345 [00:19<00:25, 8.07it/s]#015 41%|████ | 141/345 [00:19<00:25, 8.08it/s]#015 41%|████ | 142/345 [00:19<00:25, 8.09it/s]#015 41%|████▏ | 143/345 [00:19<00:24, 8.09it/s]#015 42%|████▏ | 144/345 [00:19<00:24, 8.10it/s]#015 42%|████▏ | 145/345 [00:19<00:24, 8.10it/s]#015 42%|████▏ | 146/345 [00:19<00:24, 8.10it/s]#015 43%|████▎ | 147/345 [00:20<00:24, 8.10it/s]#015 43%|████▎ | 148/345 [00:20<00:24, 8.11it/s]#015 43%|████▎ | 149/345 [00:20<00:24, 8.12it/s]#015 43%|████▎ | 150/345 [00:20<00:24, 8.12it/s]#015 44%|████▍ | 151/345 [00:20<00:23, 8.12it/s]#015 44%|████▍ | 152/345 [00:20<00:23, 8.13it/s]#015 44%|████▍ | 153/345 [00:20<00:23, 8.11it/s]#015 45%|████▍ | 154/345 [00:20<00:23, 8.11it/s]#015 45%|████▍ | 155/345 
[00:21<00:23, 8.03it/s]#015 45%|████▌ | 156/345 [00:21<00:23, 8.05it/s]#015 46%|████▌ | 157/345 [00:21<00:23, 8.07it/s]#015 46%|████▌ | 158/345 [00:21<00:23, 8.08it/s]#015 46%|████▌ | 159/345 [00:21<00:22, 8.09it/s]#015 46%|████▋ | 160/345 [00:21<00:22, 8.10it/s]#015 47%|████▋ | 161/345 [00:21<00:22, 8.11it/s]#015 47%|████▋ | 162/345 [00:21<00:22, 8.10it/s]#015 47%|████▋ | 163/345 [00:22<00:22, 7.95it/s]#015 48%|████▊ | 164/345 [00:22<00:23, 7.75it/s]#015 48%|████▊ | 165/345 [00:22<00:23, 7.68it/s]#015 48%|████▊ | 166/345 [00:22<00:23, 7.74it/s]#015 48%|████▊ | 167/345 [00:22<00:22, 7.81it/s]#015 49%|████▊ | 168/345 [00:22<00:22, 7.86it/s]#015 49%|████▉ | 169/345 [00:22<00:22, 7.89it/s]#015 49%|████▉ | 170/345 [00:22<00:22, 7.93it/s]#015 50%|████▉ | 171/345 [00:23<00:21, 7.93it/s]#015 50%|████▉ | 172/345 [00:23<00:21, 7.98it/s]#015 50%|█████ | 173/345 [00:23<00:21, 8.03it/s]#015 50%|█████ | 174/345 [00:23<00:21, 8.05it/s]#015 51%|█████ | 175/345 [00:23<00:21, 8.08it/s]#015 51%|█████ | 176/345 [00:23<00:20, 8.09it/s]#015 51%|█████▏ | 177/345 [00:23<00:20, 8.10it/s]#015 52%|█████▏ | 178/345 [00:23<00:20, 8.10it/s]#015 52%|█████▏ | 179/345 [00:24<00:20, 8.09it/s]#015 52%|█████▏ | 180/345 [00:24<00:20, 8.10it/s]#015 52%|█████▏ | 181/345 [00:24<00:20, 8.10it/s]#015 53%|█████▎ | 182/345 [00:24<00:20, 8.09it/s]#015 53%|█████▎ | 183/345 [00:24<00:20, 8.07it/s]#015 53%|█████▎ | 184/345 [00:24<00:19, 8.07it/s]#015 54%|█████▎ | 185/345 [00:24<00:19, 8.07it/s]#015 54%|█████▍ | 186/345 [00:24<00:19, 8.07it/s]#015 54%|█████▍ | 187/345 [00:25<00:19, 8.07it/s]#015 54%|█████▍ | 188/345 [00:25<00:19, 8.07it/s]#015 55%|█████▍ | 189/345 [00:25<00:19, 8.07it/s]#015 55%|█████▌ | 190/345 [00:25<00:19, 8.06it/s]#015 55%|█████▌ | 191/345 [00:25<00:19, 8.07it/s]#015 56%|█████▌ | 192/345 [00:25<00:18, 8.07it/s]#015 56%|█████▌ | 193/345 [00:25<00:18, 8.07it/s]#015 56%|█████▌ | 194/345 [00:25<00:18, 8.07it/s]#015 57%|█████▋ | 195/345 [00:26<00:18, 8.07it/s]#015 57%|█████▋ | 196/345 [00:26<00:18, 8.07it/s]#015 57%|█████▋ | 197/345 [00:26<00:18, 8.07it/s]#015 57%|█████▋ | 198/345 [00:26<00:18, 8.06it/s]#015 58%|█████▊ | 199/345 [00:26<00:18, 8.06it/s]#015 58%|█████▊ | 200/345 [00:26<00:17, 8.07it/s]#015 58%|█████▊ | 201/345 [00:26<00:17, 8.08it/s]#015 59%|█████▊ | 202/345 [00:26<00:17, 8.08it/s]#015 59%|█████▉ | 203/345 [00:27<00:17, 8.07it/s]#015 59%|█████▉ | 204/345 [00:27<00:17, 8.06it/s]#015 59%|█████▉ | 205/345 [00:27<00:17, 8.07it/s]#015 60%|█████▉ | 206/345 [00:27<00:17, 8.06it/s]#015 60%|██████ | 207/345 [00:27<00:17, 8.05it/s]#015 60%|██████ | 208/345 [00:27<00:17, 8.06it/s]#015 61%|██████ | 209/345 [00:27<00:16, 8.06it/s]#015 61%|██████ | 210/345 [00:27<00:16, 8.06it/s]#015 61%|██████ | 211/345 [00:28<00:16, 8.06it/s]#015 61%|██████▏ | 212/345 [00:28<00:16, 8.05it/s]#015 62%|██████▏ | 213/345 [00:28<00:16, 8.06it/s]#015 62%|██████▏ | 214/345 [00:28<00:16, 8.06it/s]#015 62%|██████▏ | 215/345 [00:28<00:16, 8.07it/s]#015 63%|██████▎ | 216/345 [00:28<00:15, 8.07it/s]#015 63%|██████▎ | 217/345 [00:28<00:15, 8.07it/s]#015 63%|██████▎ | 218/345 [00:28<00:15, 8.07it/s]#015 63%|██████▎ | 219/345 [00:29<00:15, 8.08it/s]#015 64%|██████▍ | 220/345 [00:29<00:15, 8.01it/s]#015 64%|██████▍ | 221/345 [00:29<00:15, 8.02it/s]#015 64%|██████▍ | 222/345 [00:29<00:15, 8.04it/s]#015 65%|██████▍ | 223/345 [00:29<00:15, 8.04it/s]#015 65%|██████▍ | 224/345 [00:29<00:15, 8.05it/s]#015 65%|██████▌ | 225/345 [00:29<00:14, 8.05it/s]#015 66%|██████▌ | 226/345 [00:29<00:14, 8.04it/s]#015 66%|██████▌ | 227/345 [00:30<00:14, 8.04it/s]#015 
66%|██████▌ | 228/345 [00:30<00:14, 8.03it/s]#015 66%|██████▋ | 229/345 [00:30<00:14, 7.98it/s]#015 67%|██████▋ | 231/345 [00:30<00:13, 8.38it/s]#015 67%|██████▋ | 232/345 [00:30<00:13, 8.27it/s]#015 68%|██████▊ | 233/345 [00:30<00:13, 8.20it/s]#015 68%|██████▊ | 234/345 [00:30<00:13, 8.15it/s]#015 68%|██████▊ | 235/345 [00:30<00:13, 8.11it/s]#015 68%|██████▊ | 236/345 [00:31<00:13, 8.09it/s]#015 69%|██████▊ | 237/345 [00:31<00:13, 8.07it/s]#015 69%|██████▉ | 238/345 [00:31<00:13, 8.07it/s]#015 69%|██████▉ | 239/345 [00:31<00:13, 8.06it/s]#015 70%|██████▉ | 240/345 [00:31<00:13, 8.05it/s]#015 70%|██████▉ | 241/345 [00:31<00:12, 8.06it/s]#015 70%|███████ | 242/345 [00:31<00:12, 8.05it/s]#015 70%|███████ | 243/345 [00:31<00:12, 8.05it/s]#015 71%|███████ | 244/345 [00:32<00:12, 8.05it/s]#015 71%|███████ | 245/345 [00:32<00:12, 8.05it/s]#015 71%|███████▏ | 246/345 [00:32<00:12, 8.04it/s]#015 72%|███████▏ | 247/345 [00:32<00:12, 8.04it/s]#015 72%|███████▏ | 248/345 [00:32<00:12, 8.04it/s]#015 72%|███████▏ | 249/345 [00:32<00:11, 8.04it/s]#015 72%|███████▏ | 250/345 [00:32<00:11, 8.03it/s]#015 73%|███████▎ | 251/345 [00:32<00:11, 8.04it/s]#015 73%|███████▎ | 252/345 [00:33<00:11, 8.04it/s]#015 73%|███████▎ | 253/345 [00:33<00:11, 8.05it/s]#015 74%|███████▎ | 254/345 [00:33<00:11, 8.05it/s]#015 74%|███████▍ | 255/345 [00:33<00:11, 8.05it/s]#015 74%|███████▍ | 256/345 [00:33<00:11, 8.05it/s]#015 74%|███████▍ | 257/345 [00:33<00:10, 8.05it/s]#015 75%|███████▍ | 258/345 [00:33<00:10, 8.05it/s]#015 75%|███████▌ | 259/345 [00:33<00:10, 8.04it/s]#015 75%|███████▌ | 260/345 [00:34<00:10, 8.04it/s]#015 76%|███████▌ | 261/345 [00:34<00:10, 7.98it/s]#015 76%|███████▌ | 262/345 [00:34<00:10, 7.99it/s]#015 76%|███████▌ | 263/345 [00:34<00:10, 8.00it/s]#015 77%|███████▋ | 264/345 [00:34<00:10, 8.01it/s]#015 77%|███████▋ | 265/345 [00:34<00:10, 7.91it/s]#015 77%|███████▋ | 266/345 [00:34<00:10, 7.88it/s]#015 77%|███████▋ | 267/345 [00:34<00:09, 7.94it/s]#015 78%|███████▊ | 268/345 [00:35<00:09, 7.99it/s]#015 78%|███████▊ | 269/345 [00:35<00:09, 7.96it/s]#015 78%|███████▊ | 270/345 [00:35<00:09, 8.00it/s]#015 79%|███████▊ | 271/345 [00:35<00:09, 8.02it/s]#015 79%|███████▉ | 272/345 [00:35<00:09, 8.03it/s]#015 79%|███████▉ | 273/345 [00:35<00:08, 8.05it/s]#015 79%|███████▉ | 274/345 [00:35<00:08, 8.07it/s]#015 80%|███████▉ | 275/345 [00:35<00:08, 8.09it/s]#015 80%|████████ | 276/345 [00:36<00:08, 8.11it/s]#015 80%|████████ | 277/345 [00:36<00:08, 8.11it/s]#015 81%|████████ | 278/345 [00:36<00:08, 8.09it/s]#015 81%|████████ | 279/345 [00:36<00:08, 8.10it/s]#015 81%|████████ | 280/345 [00:36<00:08, 8.09it/s]#015 81%|████████▏ | 281/345 [00:36<00:07, 8.09it/s]#015 82%|████ ████▏ | 282/345 [00:36<00:07, 8.09it/s]#015 82%|████████▏ | 283/345 [00:36<00:07, 8.10it/s]#015 82%|████████▏ | 284/345 [00:37<00:07, 8.11it/s]#015 83%|████████▎ | 285/345 [00:37<00:07, 8.11it/s]#015 83%|████████▎ | 286/345 [00:37<00:07, 8.11it/s]#015 83%|████████▎ | 287/345 [00:37<00:07, 8.12it/s]#015 83%|████████▎ | 288/345 [00:37<00:07, 8.11it/s]#015 84%|████████▍ | 289/345 [00:37<00:06, 8.11it/s]#015 84%|████████▍ | 290/345 [00:37<00:06, 8.12it/s]#015 84%|████████▍ | 291/345 [00:37<00:06, 8.11it/s]#015 85%|████████▍ | 292/345 [00:38<00:06, 8.11it/s]#015 85%|████████▍ | 293/345 [00:38<00:06, 8.12it/s]#015 85%|████████▌ | 294/345 [00:38<00:06, 8.10it/s]#015 86%|████████▌ | 295/345 [00:38<00:06, 8.10it/s]#015 86%|████████▌ | 296/345 [00:38<00:06, 8.10it/s]#015 86%|████████▌ | 297/345 [00:38<00:05, 8.11it/s]#015 86%|████████▋ | 298/345 
[00:38<00:05, 8.12it/s]#015 87%|████████▋ | 299/345 [00:38<00:05, 8.11it/s]#015 87%|████████▋ | 300/345 [00:39<00:05, 8.11it/s]#015 87%|████████▋ | 301/345 [00:39<00:05, 8.11it/s]#015 88%|████████▊ | 302/345 [00:39<00:05, 8.09it/s]#015 88%|████████▊ | 303/345 [00:39<00:05, 7.98it/s]#015 88%|████████▊ | 304/345 [00:39<00:05, 8.01it/s]#015 88%|████████▊ | 305/345 [00:39<00:04, 8.04it/s]#015 89%|████████▊ | 306/345 [00:39<00:04, 7.92it/s]#015 89%|████████▉ | 307/345 [00:39<00:04, 7.97it/s]#015 89%|████████▉ | 308/345 [00:40<00:04, 8.00it/s]#015 90%|████████▉ | 309/345 [00:40<00:04, 8.03it/s]#015 90%|████████▉ | 310/345 [00:40<00:04, 8.04it/s]#015 90%|█████████ | 311/345 [00:40<00:04, 8.05it/s]#015 90%|█████████ | 312/345 [00:40<00:04, 8.05it/s]#015 91%|█████████ | 313/345 [00:40<00:04, 7.98it/s]#015 91%|█████████ | 314/345 [00:40<00:03, 8.01it/s]#015 91%|█████████▏| 315/345 [00:40<00:03, 8.02it/s]#015 92%|█████████▏| 316/345 [00:41<00:03, 8.04it/s]#015 92%|█████████▏| 317/345 [00:41<00:03, 8.05it/s]#015 92%|█████████▏| 318/345 [00:41<00:03, 8.00it/s]#015 92%|█████████▏| 319/345 [00:41<00:03, 8.03it/s]#015 93%|█████████▎| 320/345 [00:41<00:03, 8.04it/s]#015 93%|█████████▎| 321/345 [00:41<00:02, 8.06it/s]#015 93%|█████████▎| 322/345 [00:41<00:02, 8.07it/s]#015 94%|█████████▎| 323/345 [00:41<00:02, 8.05it/s]#015 94%|█████████▍| 324/345 [00:42<00:02, 8.06it/s]#015 94%|█████████▍| 325/345 [00:42<00:02, 8.08it/s]#015 94%|█████████▍| 326/345 [00:42<00:02, 8.07it/s]#015 95%|█████████▍| 327/345 [00:42<00:02, 8.03it/s]#015 95%|█████████▌| 328/345 [00:42<00:02, 8.05it/s]#015 95%|█████████▌| 329/345 [00:42<00:01, 8.07it/s]#015 96%|█████████▌| 330/345 [00:42<00:01, 8.09it/s]#015 96%|█████████▌| 331/345 [00:42<00:01, 8.09it/s]#015 96%|█████████▌| 332/345 [00:43<00:01, 8.09it/s]#015 97%|█████████▋| 333/345 [00:43<00:01, 8.09it/s]#015 97%|█████████▋| 334/345 [00:43<00:01, 8.10it/s]#015 97%|█████████▋| 335/345 [00:43<00:01, 8.05it/s]#015 97%|█████████▋| 336/345 [00:43<00:01, 8.03it/s]#015 98%|█████████▊| 337/345 [00:43<00:00, 8.03it/s]#015 98%|█████████▊| 338/345 [00:43<00:00, 8.04it/s]#015 98%|█████████▊| 339/345 [00:43<00:00, 8.04it/s]#015 99%|█████████▊| 340/345 [00:44<00:00, 8.04it/s]#015 99%|█████████▉| 341/345 [00:44<00:00, 8.04it/s]#015 99%|█████████▉| 342/345 [00:44<00:00, 8.02it/s]#015 99%|█████████▉| 343/345 [00:44<00:00, 8.01it/s]#015100%|█████████▉| 344/345 [00:44<00:00, 8.01it/s][INFO|trainer.py:862] 2020-12-31 08:29:28,297 >> Training completed. Do not forget to share your model on huggingface.co/models =) #015 #015#015100%|██████████| 345/345 [00:44<00:00, 8.01it/s]#015100%|██████████| 345/345 [00:44<00:00, 7.73it/s] [INFO|trainer.py:1226] 2020-12-31 08:29:28,298 >> Saving model checkpoint to /opt/ml/model [INFO|configuration_utils.py:289] 2020-12-31 08:29:28,300 >> Configuration saved in /opt/ml/model/config.json [INFO|modeling_utils.py:814] 2020-12-31 08:29:28,950 >> Model weights saved in /opt/ml/model/pytorch_model.bin 12/31/2020 08:29:28 - INFO - __main__ - ***** Train results ***** 12/31/2020 08:29:28 - INFO - __main__ - global_step = 345 12/31/2020 08:29:28 - INFO - __main__ - training_loss = 0.4789575106855752 12/31/2020 08:29:28 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:388] 2020-12-31 08:29:28,986 >> The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence2, idx, sentence1. 
[INFO|trainer.py:1412] 2020-12-31 08:29:28,987 >> ***** Running Evaluation ***** [INFO|trainer.py:1413] 2020-12-31 08:29:28,987 >> Num examples = 408 [INFO|trainer.py:1414] 2020-12-31 08:29:28,987 >> Batch size = 8 #015 0%| | 0/51 [00:00<?, ?it/s]#015 18%|█▊ | 9/51 [00:00<00:00, 80.14it/s]#015 33%|███▎ | 17/51 [00:00<00:00, 77.98it/s]#015 49%|████▉ | 25/51 [00:00<00:00, 76.58it/s]#015 65%|██████▍ | 33/51 [00:00<00:00, 75.53it/s]#015 80%|████████ | 41/51 [00:00<00:00, 74.76it/s]#015 96%|█████████▌| 49/51 [00:00<00:00, 74.40it/s]12/31/2020 08:29:29 - INFO - /opt/conda/lib/python3.6/site-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow #015100%|██████████| 51/51 [00:00<00:00, 72.39it/s] 12/31/2020 08:29:29 - INFO - __main__ - ***** Eval results mrpc ***** 12/31/2020 08:29:29 - INFO - __main__ - epoch = 3.0 12/31/2020 08:29:29 - INFO - __main__ - eval_accuracy = 0.7892156862745098 12/31/2020 08:29:29 - INFO - __main__ - eval_combined_score = 0.8183667083854819 12/31/2020 08:29:29 - INFO - __main__ - eval_f1 = 0.847517730496454 12/31/2020 08:29:29 - INFO - __main__ - eval_loss = 0.4569968283176422 2020-12-31 08:29:40 Uploading - Uploading generated training model 2020-12-31 08:30:16 Completed - Training job completed Training seconds: 357 Billable seconds: 357
```

</details>

For local testing you can run this script. It sets all the required SageMaker environment variables for the training script (note: the original draft exported `M_CHANNEL_TEST`, which appears to be a typo for `SM_CHANNEL_TEST` and is corrected here).

```bash
export TASK_NAME=mrpc
export SM_CHANNELS=["test","train"]
export SM_OUTPUT_DATA_DIR=/opt/ml/output/data
export SM_MODEL_DIR=/opt/ml/model
export SM_CHANNEL_TEST=/opt/ml/input/data/test
export SM_CHANNEL_TRAIN=/opt/ml/input/data/train

python ../../transformers/examples/text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --do_train True \
  --do_eval True \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3
```

I would love to receive suggestions for improvement. If it looks okay to you, I would move `is_run_on_sagemaker()` to the correct path and we could merge it.

~~P.S. I also added a fix for the `train_result.metrics` https://discuss.huggingface.co/t/attributeerror-trainoutput-object-has-no-attribute-metrics-when-finetune-custom-dataset/2970~~ my mistake
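For reference, here is a minimal sketch of the `is_run_on_sagemaker()` / `sys.argv` adjustment described above. The helper names and the exact environment-variable checks are illustrative only (not the code from this PR); it assumes that the presence of SageMaker's `SM_*` variables (e.g. `SM_MODEL_DIR`, `SM_OUTPUT_DATA_DIR`, both visible in the log above) is a reliable signal for the SageMaker runtime.

```python
import os
import sys


def is_run_on_sagemaker() -> bool:
    # Heuristic: the sagemaker-training-toolkit exports SM_* variables
    # (SM_MODEL_DIR, SM_OUTPUT_DATA_DIR, ...) inside a training job,
    # so their presence signals the SageMaker runtime.
    return "SM_OUTPUT_DATA_DIR" in os.environ or "SM_MODEL_DIR" in os.environ


def adjust_sagemaker_argv(argv):
    # Drop the literal "True" values that SageMaker appends to boolean flags
    # and add --output_dir from SM_OUTPUT_DATA_DIR, which SageMaker does not pass.
    args = [arg for arg in argv if arg != "True"]
    if "--output_dir" not in args and "SM_OUTPUT DATA_DIR".replace(" ", "_") in os.environ:
        args += ["--output_dir", os.environ["SM_OUTPUT_DATA_DIR"]]
    return args


if is_run_on_sagemaker():
    sys.argv = adjust_sagemaker_argv(sys.argv)
```

The actual PR additionally maps the `SM_CHANNEL_*` variables to `train_file`/`validation_file`; that part is omitted here for brevity.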
12-31-2020 08:59:41
12-31-2020 08:59:41
@sgugger's 1st point is a good point and we should probably either:
- recommend that users `git clone` the same version of `transformers` as is installed in the DLC image,
- or even find a way to bundle the scripts themselves (or their capabilities) in the image itself, kind of like what I was suggesting before:

```python
estimator = HuggingFace(
    task_name="text-classification",
    dataset="imdb",
    from_model="distilbert-base-cased",
    publish_model="my-fine-tuned-model",
    huggingface_token="...",
)
```

(then there's not even a need to have a free-standing script. My question on whether this is SageMaker-idiomatic still stands)<|||||>That's true, both of you are right. We must be able to ensure that the correct script version is used for the correct transformers & datasets version within the container image. I would not bundle the scripts into the official DLC container, since there is always the need to have an `entry_point`. My idea is that we could still use `task_name="text-classification"` as the `entry_point` and, in the background, clone/fetch the correct script using the transformers version and the GitHub tags. So for this version, we could use the script from https://github.com/huggingface/transformers/tree/v4.1.1 (see the sketch below).<|||||>Closed by stale bot. If this shouldn't have been closed, let me know.
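To make the version-tag idea above a bit more concrete, here is a rough, untested sketch of how the matching example script could be fetched from the GitHub tag that corresponds to the installed `transformers` version. The helper name and the URL handling are illustrative only.

```python
import urllib.request

import transformers


def fetch_example_script(example_path: str, target: str) -> str:
    # Resolve the GitHub tag (e.g. "v4.1.1") from the installed transformers
    # version and download the matching example script, so the script and the
    # library inside the container stay in sync.
    # Note: for dev versions (e.g. "4.2.0dev0") no such tag exists, so a
    # fallback to the master branch would be needed.
    tag = f"v{transformers.__version__}"
    url = f"https://raw.githubusercontent.com/huggingface/transformers/{tag}/{example_path}"
    urllib.request.urlretrieve(url, target)
    return target


# e.g. fetch_example_script("examples/text-classification/run_glue.py", "run_glue.py")
```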
transformers
9,366
closed
How to implement seq2seq attention mask conveniently?
BERT's attention mask is square, GPT's attention mask is triangular. How can I conveniently implement a seq2seq attention mask with the transformers package, like the one that appears in UniLM: a triangle concatenated with a rectangle? ![unilm](https://user-images.githubusercontent.com/49787234/103397155-ff354180-4b71-11eb-8283-1c0f50f5b462.jpg)
12-31-2020 06:14:52
12-31-2020 06:14:52
The image is also available at this link: https://img-blog.csdnimg.cn/20191025102941935.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9qYWNrY3VpLmJsb2cuY3Nkbi5uZXQ=,size_16,color_FFFFFF,t_70<|||||>Hey @zhizeng8, in the future it would be nice if such questions were posted in the forum https://discuss.huggingface.co/, as this is not about a bug. To answer your question, I'd use something like the following:

```python
import torch

tgt_len = 5

# make causal mask
mask = torch.full((tgt_len, tgt_len), float("-inf"))
mask_cond = torch.arange(mask.size(-1))
mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)

# attend to encoder part
mask[:, :3] = 0
```

This mask, however, cannot simply be passed as an `attention_mask` to transformer models. But because BERT accepts 3D masks, with the 0-th index being the batch size, the above mask can be extended over all batches and passed to BERT:

```python
from transformers import BertModel

attention_mask_3d = mask[None, :, :]
bert = BertModel.from_pretrained(...)
bert(input_ids, attention_mask=attention_mask_3d)
```<|||||>Thank you very much! I think in my case the 3D attention mask is different for each instance in a batch, due to the different lengths of the source and target sequences.<|||||>I have been reading this article recently. BertModel accepts 3D masks of dimensions [batch_size, from_seq_length, to_seq_length].

```
text = 'I love you'
attention_mask = [[[1,0,0],[1,1,0],[1,1,1]]]
```

Maybe this can help you. I also have a question:

```
text = 'I love you'
attention_mask = [[[1,1,1],[1,1,1],[1,1,1]]]
```

I found that the tensors of the last word 'you' are not the same in the two cases, and I don't know the reason.<|||||>I figured it out. The last word's tensor in the first attention layer is the same, but there are 12 attention layers, so the hidden states may change.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
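For the per-example case mentioned above (different source/target lengths within one batch), here is a rough sketch of building a UniLM-style mask in the 0/1 convention with shape `[batch_size, from_seq_length, to_seq_length]`, which — as noted in the comment above — BertModel accepts as a 3D `attention_mask`. The helper name, the fixed `max_len`, and the example lengths are purely illustrative, and it assumes `src_len + tgt_len <= max_len` for every example.

```python
import torch


def seq2seq_attention_mask(src_lens, tgt_lens, max_len):
    # Build a [batch_size, max_len, max_len] mask of 0s/1s:
    # - source tokens attend to all source tokens (the rectangle)
    # - target tokens attend to the source plus earlier target tokens (the triangle)
    # - padding positions attend to nothing
    batch_size = len(src_lens)
    mask = torch.zeros(batch_size, max_len, max_len, dtype=torch.long)
    for i, (src_len, tgt_len) in enumerate(zip(src_lens, tgt_lens)):
        total = src_len + tgt_len  # assumes total <= max_len
        # every non-padding token can see the full source part
        mask[i, :total, :src_len] = 1
        # target tokens additionally see earlier (and current) target tokens
        causal = torch.tril(torch.ones(tgt_len, tgt_len, dtype=torch.long))
        mask[i, src_len:total, src_len:total] = causal
    return mask


# Example: batch of two sequences with (source, target) lengths (3, 2) and (2, 3)
mask = seq2seq_attention_mask([3, 2], [2, 3], max_len=6)
```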
transformers
9,365
closed
Multi-turn conversation with Blender Bot
# 🚀 Feature request

Hi there. Is there any way to predict a response using multi-turn dialog context with the Blender Bot model? From your [example](https://huggingface.co/transformers/model_doc/blenderbot.html) I saw that it only uses single-turn context. I tried to use a `</sep>` token to separate the human/bot turns, as in the following example:

```
Human: I am from Vietnam
Bot: I've never been there, but I've always wanted to go. How do you like it?
Human: pretty good actually , where you are from ?
```

Concatenated input: `I am from Vietnam</sep> I've never been there, but I've always wanted to go. How do you like it?</sep> pretty good actually , where you are from ?`

huggingface's model response: `I am from the United States. I have never been to Vietnam, but I have always wanted to go.`

Facebook ParlAI's model response: `I'm from the United States. I've heard it's a great place to visit, though.`
12-31-2020 01:14:11
12-31-2020 01:14:11
Hi @mailong25 for blenderbot the dialogs are separated by the newline `\n`. So the text should be `I am from Vietnam\nI've never been there, but I've always wanted to go. How do you like it?\npretty good actually , where you are from ?` which model are you using, 90M or 3B? Also, could you post the `parlai` command that you used?<|||||>Thanks for a quick response. I use the `facebook/blenderbot-1B-distill` model For parlai, I use the cmd: `python parlai/scripts/interactive.py -t blended_skill_talk -mf zoo:blender/blender_1Bdistill/model --include_personas=False`<|||||>I tried to use the `'\n'` separator with `blenderbot-1B` and `blender_1Bdistill` and the results are still the same with ` </sep>`, which are different than parlai version. Also, when I tried to move the model and the input sentence to "cuda", the following errors occur: ``` import os os.environ["TRANSFORMERS_CACHE"] = '/mnt/disks/blender/' from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration mname = 'facebook/blenderbot-3B' model = BlenderbotForConditionalGeneration.from_pretrained(mname) tokenizer = BlenderbotTokenizer.from_pretrained(mname) model.to('cuda') import torch with torch.no_grad(): UTTERANCE = [] UTTERANCE.append("I am from Vietnam") UTTERANCE.append("I've never been there, but I've always wanted to go. How do you like it?") UTTERANCE.append("pretty good actually , where you are from ?") UTTERANCE = '\n'.join(UTTERANCE) print(UTTERANCE) inputs = tokenizer([UTTERANCE], return_tensors='pt') reply_ids = model.generate(**inputs) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in reply_ids]) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-9-dff8c43ffc48> in <module> 10 print(UTTERANCE) 11 inputs = tokenizer([UTTERANCE], return_tensors='pt') ---> 12 reply_ids = model.generate(**inputs) 13 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in reply_ids]) ~/.local/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 ~/.local/lib/python3.7/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, **model_kwargs) 501 if self.config.is_encoder_decoder: 502 # add encoder_outputs to model_kwargs --> 503 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs) 504 505 # set input_ids as decoder_input_ids ~/.local/lib/python3.7/site-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs) 84 argument: value for argument, value in model_kwargs.items() if not argument.startswith("decoder_") 85 } ---> 86 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs) 87 return model_kwargs 88 ~/.local/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, 
**kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/.local/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 750 751 if inputs_embeds is None: --> 752 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale 753 754 embed_pos = self.embed_positions(input_shape) ~/.local/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/.local/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input) 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, --> 126 self.norm_type, self.scale_grad_by_freq, self.sparse) 127 128 def extra_repr(self) -> str: ~/.local/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1812 # remove once script supports set_grad_enabled 1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1815 1816 RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select ```<|||||>The error occurs because the `inputs` is not on GPU, putting `inputs` on GPU should fix the error. <|||||>Hi. I tried to use the blenderbot example code from huggingface. I just copy pasted it in colab and ran it. But it is showing me the following error : TypeError: forward() got an unexpected keyword argument 'token_type_ids' Please help me out. What can be done to solve this?<|||||>According to the Parlai [documentation](https://parl.ai/docs/tutorial_task.html), the Parlai format is to use a `<\t>` or 4 spaces as a separator of turns in a conversation. I ran your example using 4 spaces and got the Parlai response ``` from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer MODEL_ID = "facebook/blenderbot-400M-distill" model = BlenderbotForConditionalGeneration.from_pretrained(MODEL_ID) tokenizer = BlenderbotTokenizer.from_pretrained(MODEL_ID) text = ["I am from Vietnam I've never been there, but I've always wanted to go. How do you like it? pretty good actually , where you are from ?"] inputs = tokenizer(text, return_tensors='pt') res = model.generate(inputs['input_ids']) tokenizer.batch_decode(res) #["<s> I'm from the United States. I've heard it's a beautiful place to visit. </s>"] ``` <|||||>> According to the Parlai [documentation](https://parl.ai/docs/tutorial_task.html), the Parlai format is to use a `<\t>` or 4 spaces as a separator of turns in a conversation. > I ran your example using 4 spaces and got the Parlai response > > ``` > from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer > > MODEL_ID = "facebook/blenderbot-400M-distill" > model = BlenderbotForConditionalGeneration.from_pretrained(MODEL_ID) > tokenizer = BlenderbotTokenizer.from_pretrained(MODEL_ID) > > text = ["I am from Vietnam I've never been there, but I've always wanted to go. How do you like it? pretty good actually , where you are from ?"] > > inputs = tokenizer(text, return_tensors='pt') > res = model.generate(inputs['input_ids']) > tokenizer.batch_decode(res) > > #["<s> I'm from the United States. 
I've heard it's a beautiful place to visit. </s>"] > ``` I did a whole bunch of testing with different turn separation tokens, and the only one that consistently separated the turns and created outputs that made sense was 4 spaces. Everything else would sometimes separate a turn, but sometimes not (tab, new line, `</s> <s>`, two spaces) The documentation there mentioned that tabs are rendered as 4 spaces in the browser, but it still should be tab as a separator? Any new insights into why?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Was there a finalized way to construct inputs for Blenderbot? According to this https://github.com/huggingface/transformers/blob/v4.5.1/src/transformers/models/blenderbot/modeling_blenderbot.py#L499 should be using `</s> <s>` not sure if I should be following this or `\n` as indicated in parlai<|||||>I have tried four spaces, \n and \t. Four spaces gave the best results. <|||||>Should the first sentence in the encoder input be prepended with an extra space ' '? Because I note that the first token generated by the decoder has the space prefix (e.g., ' I' or ' yes').<|||||>I noticed the space prefix in the generation too. But I didn't check if adding extra space to encoder input gives better results.<|||||>Check out this example. This example is crafted in such a way that it could be a 1 turn or 2 turn depend on how you separate it. I am using the 3B model. ``` Using \n as seperator UTTERANCE = "I am from tokyo\nwhere you are from?" inputs = tokenizer([UTTERANCE], return_tensors="pt") reply_ids = model.generate(**inputs) print(tokenizer.batch_decode(reply_ids,skip_special_tokens=True)) #[' I was born and raised in Tokyo, the capital of Japan. How about you?'] Using four spaces as seperator UTTERANCE = "I am from tokyo where you are from?" inputs = tokenizer([UTTERANCE], return_tensors="pt") reply_ids = model.generate(**inputs) print(tokenizer.batch_decode(reply_ids,skip_special_tokens=True)) #[" I'm from Tokyo. It's the capital of Japan. It's a big city"] Using <t> as seperator UTTERANCE = "I am from tokyo<t>where you are from?" inputs = tokenizer([UTTERANCE], return_tensors="pt") reply_ids = model.generate(**inputs) print(tokenizer.batch_decode(reply_ids,skip_special_tokens=True)) #[" I'm from the United States. I've never been to Tokyo, but I've always wanted to go."] # Using </s> <s> as seperator ​UTTERANCE = "I am from tokyo</s> <s>where you are from?" inputs = tokenizer([UTTERANCE], return_tensors="pt") reply_ids = model.generate(**inputs) print(tokenizer.batch_decode(reply_ids,skip_special_tokens=True)) #[' I am also from Tokyo, the capital and most populous metropolitan area in Japan.'] ``` It looks like using ```\n``` and four spaces, the model interpret it as 2 turn but 1 turn for using ```\<t> and \</s> \<s>```
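For reference on the device error reported earlier in this thread, here is a minimal, hedged sketch of the fix: the tokenizer output has to be moved to the same device as the model before calling `generate`. The checkpoint and the 4-space turn separator follow the examples above.

```python
import torch
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

mname = 'facebook/blenderbot-3B'  # same checkpoint as in the failing snippet above
model = BlenderbotForConditionalGeneration.from_pretrained(mname).to('cuda')
tokenizer = BlenderbotTokenizer.from_pretrained(mname)

# 4-space turn separator, as discussed above
UTTERANCE = "I am from Vietnam    I've never been there, but I've always wanted to go. How do you like it?    pretty good actually , where you are from ?"

inputs = tokenizer([UTTERANCE], return_tensors='pt')
inputs = {k: v.to('cuda') for k, v in inputs.items()}  # move input_ids / attention_mask to the GPU as well

with torch.no_grad():
    reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```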
transformers
9,364
closed
Finetune mbart rouge score difference between training and evaluation part
### Environment info - transformers from source - google colab ### Information When I use the `sshleifer/student_cnn_12_6` model with `finetune_trainer.py` and then run the fine-tuned model with `run_eval.py`, I get high rouge scores that are close to each other. However, when I give `facebook/mbart-large-cc25` as the model and tokenizer to `finetune_trainer.py` (using the same `LID` for both eos and bos), the generated text and rouge scores produced by the `finetune_trainer.py` evaluation and prediction section are not good, and when I run the fine-tuned model with `run_eval.py` the rouge scores are very close to 0. What could be the reason for the low rouge scores, and particularly for the rouge score difference between training and evaluation, when using the mbart model? `Finetune_trainer.py` arguments >!python /content/transformers/examples/seq2seq/finetune_trainer.py --model_name_or_path facebook/mbart-large-cc25 \ --tokenizer_name facebook/mbart-large-cc25 \ --data_dir /content/transformers/cnn_dm_tr \ --output_dir finetuned_model --overwrite_output_dir \ --learning_rate=3e-5 \ --warmup_steps 500 --sortish_sampler \ --fp16 \ --n_val 500 \ --freeze_encoder --freeze_embeds \ --src_lang tr_TR --tgt_lang tr_TR \ --gradient_accumulation_steps=1 \ --per_device_train_batch_size=4 --per_device_eval_batch_size=4 \ --num_train_epochs=2 \ --save_steps 3000 --eval_steps 3000 \ --logging_first_step \ --max_target_length 56 --val_max_target_length 142 --test_max_target_length 142 \ --do_train --do_eval --do_predict \ --evaluation_strategy steps \ --predict_with_generate --sortish_sampler \ "$@" `Finetune_trainer.py` results >test_loss = 7.9716 test_rouge1 = 5.6445 test_rouge2 = 1.6458 test_rougeL = 4.8763 test_rougeLsum = 5.3712 >val_loss = 7.9894 val_rouge1 = 5.0368 val_rouge2 = 1.6249 val_rougeL = 4.1304 val_rougeLsum = 4.6041 `Run_eval.py` arguments >!python /content/transformers/examples/seq2seq/run_eval.py /content/finetuned_model \ /content/transformers/cnn_dm_tr/test.source \ dbart_cnn_12_6_test_gens.txt \ --reference_path /content/transformers/cnn_dm_tr/test.target \ --score_path dbart_cnn_12_6_test_rouge.json \ --n_obs 100 \ --task summarization --bs 2 --fp16 `Run_eval.py` results >{'rouge1': 0.1091, 'rouge2': 0.0, 'rougeL': 0.1091, 'rougeLsum': 0.1091, 'n_obs': 50, 'runtime': 601, 'seconds_per_sample': 12.02} ### Who can help @patil-suraj @sshleifer ### Expected behavior Be able to get high and close rouge scores for `run_eval.py` and `finetune_trainer.py` when I use `mbart-large-cc25` as the model and tokenizer.
12-30-2020 20:48:27
12-30-2020 20:48:27
Hi @Eymen3455 In the `run_eval` command I see that you are setting `n_obs` to 100 while the `finetune_trainer` uses all test examples, could you maybe run eval again with all test examples and see if you get close results?<|||||>First of all, thank you very much @patil-suraj for your reply. I did as you said and used the whole test dataset for `n_obs`, but the result remained unchanged. In addition, when we examine the texts produced by `finetune_trainer.py` and `run_eval.py`, the texts produced by `finetune_trainer.py` are acceptable, while the texts produced by `run_eval.py` are different and worse, and their rouge score is less than half of the scores obtained with `finetune_trainer.py`. Then, after seeing this issue (https://github.com/huggingface/transformers/issues/9236), I used the Xsum dataset instead and the rouge score increased in `run_eval.py`. Just changing the dataset was enough to get sensible text from `run_eval.py`, but I still cannot explain the difference between the rouge scores and generated texts of `finetune_trainer.py` and `run_eval.py`. I do not understand why the dataset has such a large effect. When I use the mbart model, why can't I get the same results from `finetune_trainer.py` and `run_eval.py`? What could be the reason for this? By the way, when I try the `sshleifer/student_cnn_12_6` model instead of the mbart model, I get exactly the same results from `finetune_trainer.py` and `run_eval.py`. I would appreciate it if you could help. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>I ran into the same problem and traced it to the `max_length` parameter of `Seq2SeqTrainer.evaluate()`: whether or not `max_length` is set leads to different results.<|||||>I have encountered the same issue: the val_rouge2 obtained during training is different from the scores I got with the `run_eval.py` script. Do you have any suggestions? @patil-suraj
transformers
9,363
closed
Make sure to use return dict for the encoder call inside RagTokenForGeneration
## What does this PR do? At some point, `return_dict` was set to be `False` by default inside BART. However, this created a type error inside `RagTokenForGeneration`, which was still written with the expectation that `return_dict=True` by default. This PR simply adds `return_dict=True` to the call to the BART encoder inside the `RagTokenForGeneration` code. ## Tests I did not create new tests because this change is very minor. I did run all the existing tests and they pass. ## Who can review? Anyone can review, but I tagged these two because this involves RAG and BART: @patrickvonplaten @lhoestq
12-30-2020 19:40:13
12-30-2020 19:40:13
Thanks!
transformers
9,362
closed
Jupyter Notebook Kernel crashes when tokenizing large dataset
## Environment info I am using 2 setups, my personal laptop and a cluster. My laptop has this environment : `transformers` version: 4.1.1 - Platform: Darwin-20.2.0-x86_64-i386-64bit - Python version: 3.7.6 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No The cluster has this : - `transformers` version: 4.1.1 - Platform: Linux-3.10.0-1160.6.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.1 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): DistilBERT The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) I followed [this example](https://huggingface.co/transformers/custom_datasets.html?highlight=custom%20datasets), but I modified the dataset part to include the one that I am using, which is described below. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) http://help.sentiment140.com/for-students ## To reproduce Steps to reproduce the behavior: 1. Use this script:

```python
from transformers import DistilBertTokenizerFast
from sklearn.model_selection import train_test_split
import pandas as pd

# Run these first :
# $ wget http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip
# $ unzip trainingandtestdata.zip -d ./data
# $ rm trainingandtestdata.zip

def get_data(path):
    # Read the dataset
    df = pd.read_csv(path, encoding='ISO-8859-1', header=None, nrows=None)
    # Keep only the label and the text, replace 4 with 1
    # Note: there are actually no neutral labels in the train dataset
    df = df[[0, 5]].replace(2, 1).replace(4, 1)
    # Rename
    df = df.rename(columns={0: "label", 5: "text"})
    return df

dftrain = get_data('data/training.1600000.processed.noemoticon.csv')
dftest = get_data('data/testdata.manual.2009.06.14.csv')

X_train = dftrain['text'].to_list()
y_train = dftrain['label'].to_list()
X_test = dftest['text'].to_list()
y_test = dftest['label'].to_list()

# Comment this to use full dataset
# _, X_train, _, y_train = train_test_split(X_train, y_train, test_size=0.05, random_state=1)

X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=1)

tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(X_train, truncation=True, padding=True)

# To check the memory used
with open('output.txt', 'w') as f:
    f.write(str(train_encodings))
```

2. Run `python tokenize_test.py`

## Description In total, I tried this script in 4 different settings:
1. Personal laptop as a python script
2. Personal laptop in a Jupyter Notebook
3. Cluster as a python script
4. Cluster in a Jupyter Notebook

In cases 2 and 4, the kernel died. I assume it is because of a memory error, since case 3 was killed as a process because I exceeded the memory usage. However, case 1 worked flawlessly. The training dataset is about 90 MB in total and weighs 1.6 GB after tokenization. My personal laptop has 16 GB of RAM and I reserve 4 GB of RAM in the cluster. The memory that ought to be used is clearly below the limit, yet I still get memory issues. Maybe there is a memory leak in a specific version somewhere?
12-30-2020 16:01:16
12-30-2020 16:01:16
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
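A possible workaround for the memory pressure described above (not suggested in the thread itself, just a sketch): tokenize lazily inside a torch `Dataset` so the full 1.6 GB of encodings is never materialized at once. The class name and `max_length` below are illustrative.

```python
import torch
from torch.utils.data import Dataset
from transformers import DistilBertTokenizerFast

class LazySentiment140(Dataset):
    """Tokenizes one example at a time in __getitem__, keeping only the raw texts in memory."""

    def __init__(self, texts, labels, tokenizer, max_length=128):
        self.texts = texts
        self.labels = labels
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        enc = self.tokenizer(
            self.texts[idx], truncation=True, padding="max_length", max_length=self.max_length
        )
        item = {k: torch.tensor(v) for k, v in enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
dataset = LazySentiment140(["pretty good actually"], [1], tokenizer)
print(dataset[0]["input_ids"].shape)  # torch.Size([128])
```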
transformers
9,361
closed
DeBERTa in TF (TFAutoModel): unrecognized configuration class
## Environment info - `transformers` version: 4.1.1 - Platform: Linux-3.10.0-957.21.3.el7.x86_64-x86_64-with-centos-7.6.1810-Core - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information Model I am using: DeBERTa The problem arises when using: * [ x ] my own modified scripts: (give details below) The tasks I am working on is: * [ x ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Use TFAutoModel to import deberta-large ```python from transformers import TFAutoModel model = TFAutoModel.from_pretrained("microsoft/deberta-large") ``` Error: ```python --------------------------------------------------------------------------- ValueError <ipython-input-2-416d7de4fc12> in <module> ----> 1 model = TFAutoModel.from_pretrained("microsoft/deberta-large") ~/miniconda3/envs/hate2/lib/python3.6/site-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 583 "Unrecognized configuration class {} for this kind of TFAutoModel: {}.\n" 584 "Model type should be one of {}.".format( --> 585 config.__class__, cls.__name__, ", ".join(c.__name__ for c in TF_MODEL_MAPPING.keys()) 586 ) 587 ) ValueError: Unrecognized configuration class <class 'transformers.models.deberta.configuration_deberta.DebertaConfig'> for this kind of TFAutoModel: TFAutoModel. Model type should be one of LxmertConfig, MT5Config, T5Config, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig, ElectraConfig, FunnelConfig, DPRConfig, MPNetConfig. ``` ## Expected behavior I should be able to import deberta-large and deberta-base using TFAutoModel, or the documentation should be updated to clarify that they are pytorch only. Thanks as always for the amazing software, and please let me know if I should provide any other details or otherwise help.
12-30-2020 15:25:01
12-30-2020 15:25:01
Hi @ck37, `DeBERTa` is currently PyTorch-only so it can't be loaded with `TFAutoModel`. The table on the doc's [homepage](https://huggingface.co/transformers/) shows whether the models have support in PyTorch, TensorFlow, and/or Flax.<|||||>Gotcha, thanks for the fast response. Do you think the TF side will be implemented at some point? It seems like there will be more interest in DeBERTa with it taking the lead in [SuperGLUE](https://super.gluebenchmark.com/leaderboard).<|||||>Yeah, it's pretty exciting! @patrickvonplaten might be able to give you an ETA for `TFDeberta`.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
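Since the thread above establishes that DeBERTa is PyTorch-only at this point, here is a minimal sketch of loading it with the PyTorch auto classes instead (output handling kept generic):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-large")
model = AutoModel.from_pretrained("microsoft/deberta-large")

inputs = tokenizer("DeBERTa is PyTorch-only for now.", return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs[0]  # (batch_size, sequence_length, hidden_size)
print(last_hidden_state.shape)
```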
transformers
9,360
closed
Loading a set of tokenized files for training
I have a directory of files that are already tokenized using a pretrained tokenizer. Each file is a pickle file containing a list of objects where each object corresponds to a text sequence containing input_ids and attention_masks. The directory has thousands of files. I'm looking for an efficient way to load the data for training using Trainer. Do I have to write my own Dataloader or do I create a custom dataset using Datasets? Thank you.
12-30-2020 14:54:23
12-30-2020 14:54:23
In this case, you could write a custom `Dataset` that reads your pickle files and returns the examples from the `__getitem__` method. If you are looking for an efficient way of pre-tokenizing the dataset, saving/caching it for future use, and loading it for training, then I would recommend taking a look at the [datasets](https://github.com/huggingface/datasets) library. It takes care of caching your pre-processed data and loading it efficiently (lazy loading, so memory won't blow up).<|||||>Thanks Suraj <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
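To illustrate the first suggestion above, here is a hedged sketch of such a custom dataset. The `.pkl` extension and the per-example layout are assumptions about the files described in the question, and this version eagerly loads every shard, so for thousands of large files the lazy, cached approach via the `datasets` library would be preferable.

```python
import glob
import pickle
from torch.utils.data import Dataset

class PickledEncodingsDataset(Dataset):
    """Reads pre-tokenized examples (input_ids / attention_mask) from a directory of pickle files."""

    def __init__(self, directory):
        self.examples = []
        for path in sorted(glob.glob(f"{directory}/*.pkl")):  # file extension is an assumption
            with open(path, "rb") as f:
                self.examples.extend(pickle.load(f))

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

# dataset = PickledEncodingsDataset("path/to/tokenized_files")
# trainer = Trainer(model=model, args=training_args, train_dataset=dataset, ...)
```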
transformers
9,359
closed
Training loss not getting logged
While training GPT2 using run_clm.py, I wanted to track the training loss as well, but could not find a way to do that with evaluation strategy = epoch. So I looked deeper into the code and found that maybe adding `control.should_log = True` after the line referenced below will start logging the training loss after every epoch. https://github.com/huggingface/transformers/blob/ae333d04b29a25be1a70eaccd6260c294c243c5b/src/transformers/trainer_callback.py#L422 Please correct me if I am wrong, and suggest how I should track the training loss per epoch. Thanks in advance.
12-30-2020 12:32:15
12-30-2020 12:32:15
@sgugger is best suited to answer this.<|||||>This option is not implemented in Trainer.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
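Although the option is not built into `Trainer` (as noted above), the `control.should_log` idea from the issue body can be expressed as a small custom callback. This is only a sketch using the public `TrainerCallback` hook, not an official feature:

```python
from transformers import TrainerCallback

class LogTrainLossAtEpochEnd(TrainerCallback):
    """Forces a logging step (which reports the running training loss) at the end of every epoch."""

    def on_epoch_end(self, args, state, control, **kwargs):
        control.should_log = True
        return control

# trainer = Trainer(model=model, args=training_args, callbacks=[LogTrainLossAtEpochEnd()], ...)
```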
transformers
9,358
closed
error while finetuning for Regression task.
Hi, I was trying to fine-tune for a regression task. Below is my network.

```
import tensorflow as tf
from transformers import TFBertForSequenceClassification

# model initialization
base_model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
base_model.bert.trainable = False

model = tf.keras.Sequential(base_model)
model.add(tf.keras.Input(shape=[720895, 7], name='Input_1'))
model.add(tf.keras.layers.Dense(1, activation='linear'))

optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-4)
model.compile(loss='mse',
              optimizer=optimizer,
              metrics=['mae', 'mse'])
model.fit(train_seq, train_labels, epochs=10)
```

The error is ` TypeError: Failed to convert 'TFSequenceClassifierOutput(loss=None, logits=TensorShape([None, 2]), hidden_states=None, attentions=None)' to a shape: ''logits''could not be converted to a dimension. A shape should either be single dimension (e.g. 10), or an iterable of dimensions (e.g. [1, 10, None]).` Can you please help me with this?
12-30-2020 11:33:23
12-30-2020 11:33:23
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>Was this solved somehow? <|||||>Ping @Rocketknight1 <|||||>Hi, the problem here is that our models have more than one output, and therefore don't work that well inside `Sequential`. You can do this with the [Keras functional API](https://keras.io/guides/functional_api/), or by [overriding `train_step`](https://keras.io/guides/customizing_what_happens_in_fit/) or just writing eager TF code. However, you might not need to do any of that, as our `SequenceClassification` models actually already support regression! If you set `num_labels=1`, we assume you want to do regression instead. So then the above code would just become: ``` from transformers import TFBertForSequenceClassification model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=1) optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-4) model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse']) model.fit(train_seq,train_labels,epochs=10) ``` You could also try replacing the optimizer with `tf.keras.optimizers.Adam(learning_rate=2e-5)`, as we find Adam usually works a bit better than RMSprop in practice on Transformer models.<|||||>Hi @Rocketknight1 I was preforming a classification task. The code is `model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=TOTAL_LABELS) for layer in model.layers: layer.trainable= True for layer in model.layers[:int(len(model.layers)*0.9) ]: layer.trainable= False` Saved the model as `tf.keras.models.save_model(model, PATH, overwrite=True, include_optimizer=True, save_format="tf")` And then got an error while loading `model= tf.keras.models.load_model(PATH)` Also can you please provide a link to the docs to set configs to mute the multiple outputs and get only logits as output from a BERT in tensorflow so that I can build a functional API and build layers on top of BERT. Thank you!<|||||>> Hi, the problem here is that our models have more than one output, and therefore don't work that well inside `Sequential`. You can do this with the [Keras functional API](https://keras.io/guides/functional_api/), or by [overriding `train_step`](https://keras.io/guides/customizing_what_happens_in_fit/) or just writing eager TF code. > > However, you might not need to do any of that, as our `SequenceClassification` models actually already support regression! If you set `num_labels=1`, we assume you want to do regression instead. So then the above code would just become: > > ``` > from transformers import TFBertForSequenceClassification > model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=1) > optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-4) > > model.compile(loss='mse', > optimizer=optimizer, > metrics=['mae', 'mse']) > model.fit(train_seq,train_labels,epochs=10) > ``` > > You could also try replacing the optimizer with `tf.keras.optimizers.Adam(learning_rate=2e-5)`, as we find Adam usually works a bit better than RMSprop in practice on Transformer models. 
I'm trying something very similar to this and getting: > ValueError: Failed to find data adapter that can handle input: <class 'transformers.tokenization_utils_base.BatchEncoding'>, (<class 'list'> containing values of types {"<class 'float'>"} Here is the code: ``` import tensorflow as tf from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, pipeline from sklearn.model_selection import train_test_split model_name = "bert-base-cased" model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1) model.compile(optimizer="adam", loss="mse") tokenizer = AutoTokenizer.from_pretrained(model_name) max_length = 64 X_train, X_test, y_train, y_test = train_test_split(df["Clean"].tolist(), df["Y"].tolist(), test_size=0.2) train_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length) valid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length) model.fit(train_encodings, y_train, epochs=3) ``` Any suggestions?<|||||>Hi @jhogg11 - the error happens because our tokenizers output `BatchEncoding` objects, not dicts, and Keras doesn't know what to do with them! It's also good practice to convert your labels to an array rather than passing them as a list. Try the following right before `model.fit()`: ``` X_train = dict(X_train) y_train = np.array(y_train) ```<|||||>> Hi @jhogg11 - the error happens because our tokenizers output `BatchEncoding` objects, not dicts, and Keras doesn't know what to do with them! It's also good practice to convert your labels to an array rather than passing them as a list. Try the following right before `model.fit()`: > > ``` > X_train = dict(X_train) > y_train = np.array(y_train) > ``` Trying `X_train = dict(X_train)` gives me this error (since X_train is a list): > ValueError: dictionary update sequence element #0 has length 117; 2 is required I thought you might have meant `dict(train_encodings)` so I tried that, but it gives a similar error as the previous example.<|||||>Ah, I'm sorry, you're right! I meant to type `train_encodings`. And now you mention it, I realize the problem is actually twofold. Try replacing this: ``` max_length = 64 X_train, X_test, y_train, y_test = train_test_split(df["Clean"].tolist(), df["Y"].tolist(), test_size=0.2) train_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length) valid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length) ``` with this: ``` max_length = 64 X_train, X_test, y_train, y_test = train_test_split(df["Clean"].tolist(), df["Y"].tolist(), test_size=0.2) train_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length, return_tensors="np") valid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length, return_tensors="np") train_encodings = dict(train_encodings) valid_encodings = dict(valid_encodings) ``` The cause of the problem is two things - firstly the `BatchEncoding` output by the tokenizer needs to be converted to a `dict`, and secondly the individual arrays output by the tokenizer need to be converted to an array format (either NumPy or TF) that Keras can understand. The `return_tensors` argument to the tokenizer will take care of that part.<|||||>Still not working. Here's the full code as of now: EDIT: updated this to be fully reproducible by pulling some random online text data into a dataframe. 
``` import pandas as pd import numpy as np import tensorflow as tf from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, pipeline from sklearn.model_selection import train_test_split import requests data = requests.get("https://example-files.online-convert.com/document/txt/example.txt") data = [d for d in data.text.split("\n") if d != ""] df = pd.DataFrame(data, columns=["Clean"]) df["Y"] = np.random.normal(0,1, df.shape[0]) model_name = "bert-base-cased" model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1) # loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer="adam", loss="mse") tokenizer = AutoTokenizer.from_pretrained(model_name) max_length = 64 X_train, X_test, y_train, y_test = train_test_split(df["Clean"].tolist(), df["Y"].tolist(), test_size=0.2) train_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length, return_tensors="np") valid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length, return_tensors="np") train_encodings = dict(train_encodings) valid_encodings = dict(valid_encodings) model.fit( train_encodings, y_train, epochs=3, ) ``` The error message is: ``` ValueError: Failed to find data adapter that can handle input: (<class 'dict'> containing {"<class 'str'>"} keys and {"<class 'numpy.ndarray'>"} values), (<class 'list'> containing values of types {"<class 'float'>"}) ``` I had also looked at: https://huggingface.co/docs/transformers/v4.27.2/en/quicktour#train-with-tensorflow and https://huggingface.co/docs/transformers/v4.27.2/en/training#prepare-a-dataset, but all of the examples that I could find involve preloaded datasets. Is there a way to efficiently go from a dataframe or list to a `Dataset` object?<|||||>@jhogg11 Thanks for sharing a fully reproducible example. I believe the issue is still arising as `y_train` being passed to the model is a list. Running the following should work: ```py model.fit(train_encodings, tf.convert_to_tensor(y_train), epochs=3) ```<|||||>@amyeroberts Getting basically the same error: ``` ValueError: Failed to find data adapter that can handle input: <class 'tensorflow.python.framework.ops.EagerTensor'>, (<class 'list'> containing values of types {"<class 'float'>"}) ```` Does the code work for you? I recently had to re-install Miniconda and I'm also on an M1 Mac, which can create difficulties, so I'm wondering if it's something on my end. However, I did test a basic TF model (using random numbers) just to make sure that everything is working and it trained without issue.<|||||>@jhogg11 Yes, it works for me. 
When I run: ```py import pandas as pd import numpy as np import tensorflow as tf from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, pipeline from sklearn.model_selection import train_test_split import requests data = requests.get("https://example-files.online-convert.com/document/txt/example.txt") data = [d for d in data.text.split("\n") if d != ""] df = pd.DataFrame(data, columns=["Clean"]) df["Y"] = np.random.normal(0,1, df.shape[0]) model_name = "bert-base-cased" model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1) # loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer="adam", loss="mse") tokenizer = AutoTokenizer.from_pretrained(model_name) max_length = 64 X_train, X_test, y_train, y_test = train_test_split(df["Clean"].tolist(), df["Y"].tolist(), test_size=0.2) train_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length, return_tensors="np") valid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length, return_tensors="np") train_encodings = dict(train_encodings) valid_encodings = dict(valid_encodings) model.fit(train_encodings, tf.convert_to_tensor(y_train), epochs=3) ``` This was running on an M1 with ``` transformers 4.28.0.dev0 tensorflow-macos 2.10.0 tensorflow-metal 0.6.0 ```` Which versions of transformers and tensorflow are you using? <|||||>@amyeroberts I just restarted the kernel and ran with your exact code and it worked! I think I might have hastily wrapped `train_encodings` in `tf.convert_to_tensor` rather than `y_train`. I really appreciate the help.
transformers
9,357
closed
Blenderbot-3B config seems to be a little wrong
## Environment info - `transformers` version: 4.1 - Platform: Linux - Python version: 3.8 - PyTorch version (GPU?):- - Tensorflow version (GPU?): - - Using GPU in script?: No - Using distributed or parallel set-up in script?: No

It seems the current config of `Blenderbot-3B` is a bit broken (`Blenderbot-90M` and the distilled versions seem fine).

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('facebook/blenderbot-90M')
tokenizer.decode(tokenizer.encode("Hey there"))
# 'hey there' so working fine

tokenizer = AutoTokenizer.from_pretrained('facebook/blenderbot-3B')
tokenizer.decode(tokenizer.encode("Hey there"))
# '<unk> y <unk> e' obvious error as the token 'ĠHey' exists in the vocab.
# Error is possibly linked to '@@' string terminator config

# ----
# Other example that's probably linked but that originally triggered the issue,
# so we need to make sure it's fixed too
nlp = pipeline('text-generation', model='blenderbot-3B')
nlp("Hey there")
# {"generated_text": "'ĠHi, Ġhow Ġare Ġyou Ġtoday? ĠI Ġjust Ġgot Ġback Ġfrom Ġa Ġwalk, Ġit Ġwas Ġnice."}
```

### Who can help @patrickvonplaten @patil-suraj ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## Expected behavior The tokenizer should encode correctly for 3B, and the pipeline should not output garbage Ġ everywhere.
12-30-2020 10:07:22
12-30-2020 10:07:22
This issue has been stale for 1 month.<|||||>Closing this, blenderbot 90M is very different in Arch as other variants, so it will receive less love (it's not that powerful compared to the others anyway). Also a lot of work was done here : https://github.com/huggingface/transformers/pull/10002
transformers
9,356
closed
[examples/language-modeling] Add dataset download instructions
I had to hunt for instructions to get the dataset used in this set of examples, so this PR proposes to add them to README.md. @patrickvonplaten
12-30-2020 06:49:53
12-30-2020 06:49:53
Isn’t this one simply on the [HuggingFace datasets hub](https://huggingface.co/datasets) by the way? On Wed, Dec 30, 2020 at 07:50, Stas Bekman <[email protected]> wrote: > I had to hunt for instructions to get the dataset used in this set of examples, so this PR proposes to add them to README.md.<|||||>Sure, let's have the equivalent instructions to retrieve that from HF `datasets`. It doesn't really matter where it comes from as long as it doesn't require the user to go and search for it. FWIW, I went to https://huggingface.co/datasets and: 1. couldn't find it. That is, I did find `wikitext`, but how do I know that it's the same as `wikitext-2-raw-v1` that the script expects - it seems to be very specific. 2. it gives me no instructions on how to download it in the format the script expects it in. p.s. it looks like Email replies do not support Markdown. <|||||>There is no need to download the data manually with the new scripts, it is done automatically by the datasets library. So this should not be added in my opinion.<|||||>oh, ok, I guess I didn't pay attention to the command line being changed and assumed that I needed to get the dataset first. I stand corrected. Thank you for your feedback.
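For reference, a minimal sketch of what the discussion above converges on: the updated example scripts pull the dataset through the `datasets` library (the config name matches the `--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1` flags), so no manual download is needed.

```python
from datasets import load_dataset

dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
print(dataset)                       # train / validation / test splits
print(dataset["train"][10]["text"])  # raw text lines, tokenized later by the example script
```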
transformers
9,355
closed
Fix typos in README and bugs in RAG example code for end-to-end evaluation and finetuning
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This PR fixes bugs in RAG example code for [end-to-end evaluation](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag#end-to-end-evaluation) and [finetuning](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag#finetuning). ## 1. Follow the file paths of reorganized examples Also, the file paths for example code in README are updated (`example/rag/` -> `example/research_projects/rag/`) ## 2. End-to-end evaluation ``` python examples/research_projects/rag/eval_rag.py \ --model_name_or_path facebook/rag-sequence-nq \ --model_type rag_sequence \ --evaluation_set path/to/dev.source \ --gold_data_path path/to/dev.gold_data \ # parsed `biencoder-nq-dev.json` following `qa` format --predictions_path path/to/e2e_preds.txt \ --eval_mode e2e \ --gold_data_mode qa \ --n_docs 5 \ # You can experiment with retrieving different number of documents at evaluation time --print_predictions \ --recalculate ``` With the above command, I encountered a few errors: 1. an unexpected keyword argument 'clean_up_tokenization' ``` Some weights of RagSequenceForGeneration were not initialized from the model checkpoint at facebook/rag-sequence-nq and are newly initialized: ['rag.generator.lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
initializing retrieval Loading index from https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/ loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/a481b3aaed56325cb8901610e03e76f93b47f4284a1392d85e2ba5ce5d40d174.a382b038f1ea97c4fbad3098cd4a881a7cd4c5f73902c093e0c560511655cc0b loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index_meta.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/bb9560964463bc761c682818cbdb4e1662e91d25a9407afb102970f00445678c.f8cbe3240b82ffaad54506b5c13c63d26ff873d5cfabbc30eef9ad668264bab4 7it [00:00, 54.03it/s] Traceback (most recent call last): File "examples/research_projects/rag/eval_rag.py", line 314, in <module> main(args) File "examples/research_projects/rag/eval_rag.py", line 300, in main answers = evaluate_batch_fn(args, model, questions) File "examples/research_projects/rag/eval_rag.py", line 134, in evaluate_batch_e2e print_docs=args.print_docs, File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/home/ubuntu/workspace/transformers/src/transformers/models/rag/modeling_rag.py", line 923, in generate **model_kwargs, File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/home/ubuntu/workspace/transformers/src/transformers/generation_utils.py", line 503, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs) File "/home/ubuntu/workspace/transformers/src/transformers/generation_utils.py", line 86, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs) File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'clean_up_tokenization' ``` 2. another unexpected keyword argument 'print_docs' ``` Some weights of RagSequenceForGeneration were not initialized from the model checkpoint at facebook/rag-sequence-nq and are newly initialized: ['rag.generator.lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
initializing retrieval Loading index from https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/ loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/a481b3aaed56325cb8901610e03e76f93b47f4284a1392d85e2ba5ce5d40d174.a382b038f1ea97c4fbad3098cd4a881a7cd4c5f73902c093e0c560511655cc0b loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index_meta.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/bb9560964463bc761c682818cbdb4e1662e91d25a9407afb102970f00445678c.f8cbe3240b82ffaad54506b5c13c63d26ff873d5cfabbc30eef9ad668264bab4 7it [00:00, 45.43it/s] Traceback (most recent call last): File "examples/research_projects/rag/eval_rag.py", line 314, in <module> main(args) File "examples/research_projects/rag/eval_rag.py", line 300, in main answers = evaluate_batch_fn(args, model, questions) File "examples/research_projects/rag/eval_rag.py", line 134, in evaluate_batch_e2e print_docs=args.print_docs, File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/home/ubuntu/workspace/transformers/src/transformers/models/rag/modeling_rag.py", line 923, in generate **model_kwargs, File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/home/ubuntu/workspace/transformers/src/transformers/generation_utils.py", line 503, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs) File "/home/ubuntu/workspace/transformers/src/transformers/generation_utils.py", line 86, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs) File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'print_docs' ``` ## 3. Finetuning ``` python examples/research_projects/rag/finetune_rag.py \ --data_dir $DATA_DIR \ --output_dir $OUTPUT_DIR \ --model_name_or_path $MODEL_NAME_OR_PATH \ --model_type rag_sequence \ --fp16 \ --gpus 8 ``` With the above command, I found two easy bugs to be fixed: 1. [missing `return parser`](https://github.com/huggingface/transformers/blob/8217d4e37fce48490a68af7e8ce902af16318132/examples/research_projects/rag/finetune_rag.py#L498) returns None to `parser` and crashes [here](https://github.com/huggingface/transformers/blob/8217d4e37fce48490a68af7e8ce902af16318132/examples/research_projects/rag/finetune_rag.py#L528-L531) 2. [duplicated argument with `num_retrieval_workers`](https://github.com/huggingface/transformers/blob/8217d4e37fce48490a68af7e8ce902af16318132/examples/research_projects/rag/finetune_rag.py#L490-L508) is also a blocker when using `finetune_rag.py` ## Environments - Ubuntu 18.04 LTS - Python 3.7.7 - transformers (I tried both 4.1.1 from pip and from repo https://github.com/huggingface/transformers/commit/912f6881d2b69f180522172a5283702bd8c41d9c) - torch: 1.7.1 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). 
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @patrickvonplaten @lhoestq
12-30-2020 05:25:13
12-30-2020 05:25:13
Hi @patrickvonplaten, Thank you for reviewing this PR! As commented above, the argument `num_retrieval_workers` in `add_ray_specific_args` is duplicated ([first defined in `add_retriever_specific_args`](https://github.com/huggingface/transformers/blob/8217d4e37fce48490a68af7e8ce902af16318132/examples/research_projects/rag/finetune_rag.py#L490-L508)) and causes an error.<|||||>Great work @yoshitomo-matsubara <|||||>Thank you for reviewing the PR @patrickvonplaten @lhoestq !
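As a small, self-contained illustration of bug 2 described in the PR (argparse refuses a second registration of `--num_retrieval_workers`), under the assumption that both `add_*_specific_args` helpers add it to the same parser; bug 1 is fixed simply by returning the parser from `add_ray_specific_args`.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--num_retrieval_workers", type=int, default=1)  # as in add_retriever_specific_args

try:
    # the duplicated definition, as in add_ray_specific_args before the fix
    parser.add_argument("--num_retrieval_workers", type=int, default=1)
except argparse.ArgumentError as err:
    print(f"argparse rejects the duplicate: {err}")
```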
transformers
9,354
closed
[test_model_parallelization] multiple fixes
There are multiple issues with the current common `test_model_parallelization` test. The main issue is that it uses `nvidia-smi` to take memory snapshots. This leads to 2 potential problems: 1. these tests must not be run with pytest distributed, as they rely on all the GPUs being unused - and with `-n 2` or higher it's likely to break, since `nvidia-smi` would be indiscriminately reporting memory used by other pytest workers. I fixed this problem first by creating a new `@require_no_pytest_distributed` decorator at https://gist.github.com/stas00/5d58c606dbdcb82e019d6b0674f8b42a - but once the 2nd problem was fixed it no longer was needed so I removed it. I don't think we currently have any tests that must be run without `pytest-xdist`, but if any come in the future we can merge that skip decorator too. 2. this implementation can easily return incorrect info if CUDA device order doesn't match nvidia-smi device order (my case and this test fails for me) - so one has to use `CUDA_VISIBLE_DEVICES` to match CUDA device order to nvidia-smi's for this test to pass. Switching to `torch.cuda.memory_allocated` fixes both problems as it measures memory usage for the current process only and in the correct order - i.e. `to(0)` always matches `memory_allocated(0)` device-wise. (the weird multi-line implementation has to do with https://github.com/pytorch/pytorch/issues/49952) BTW, I first thought of using `pynvml`, but it would have had the same issue.` nvidia-smi` is just another front-end to `nvml`. Other fixes: * removes hardcoded gpt2 config * adds `gc.collect`. One can't rely on exact memory measurements w/o manual `gc.collect` - since it gets triggered automatically at certain times as explained in its docs, which is often too late for what's being measured. Most of the time when you `del foo` it doesn't get reclaimed by `gc` right away. So the correct sequence when exact memory measurements are desired is: ``` del some_variable gc.collect() torch.cuda.empty_cache() # now can measure memory ``` * last sub-test adjusted to measure against the memory snapshot before that sub-test and not at the beginning of the whole test. `get_current_gpu_memory_use` might go into testing or benchmarking utils and perhaps need to change its name to match that it returns MBs, but it's good enough for now. @alexorona, please let me know if it's of interest to you for the tweaks I've been proposing - please let me know if you'd like me to tag you on these. @patrickvonplaten, @LysandreJik
12-30-2020 01:23:22
12-30-2020 01:23:22
Thank you for fixing!
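A hedged sketch of the per-process measurement approach the PR describes (the helper name and the MB convention follow the PR text; this is not the exact code that was merged):

```python
import gc
import torch

def get_current_gpu_memory_use():
    """Per-process allocated memory in MB for each visible CUDA device (order matches .to(i))."""
    return [torch.cuda.memory_allocated(i) >> 20 for i in range(torch.cuda.device_count())]

# the measurement sequence described above: release, collect, flush the cache, then read
# del some_model_part
gc.collect()
torch.cuda.empty_cache()
print(get_current_gpu_memory_use())
```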
transformers
9,353
closed
Fixes crash when `compute_metrics` is not passed to `Trainer` in run_mlm example
# What does this PR do? If I'm understanding the example run_mlm code correctly, the `metrics` attribute will not be present on the `train_result` object if `compute_metrics` is not passed to `Trainer`. This edit prevents the script from attempting to write the metrics to file if they don't exist. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @sgugger @stas00
12-30-2020 00:17:32
12-30-2020 00:17:32
In master `train` always returns `metrics`. Is it possible that you are running the script from master but loading a pre-installed `transformers` that is not master? These metrics were added just recently. Do you still get the error if you do:
```
git clone https://github.com/huggingface/transformers/
cd transformers
PYTHONPATH=src examples/language-modeling/run_mlm.py ...
```
This ensures that you're using the master version in the script. Alternatively, if you tend to use master a lot, install it with `pip install -e .[dev]`, which allows you to `git pull` without needing to reinstall anything. To verify that there is no problem in master I have just run:
```
python run_mlm.py --model_name_or_path roberta-base --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-mlm
```
and got:
```
INFO|trainer.py:1248] 2020-12-29 22:49:16,276 >> Saving model checkpoint to /tmp/test-mlm
[INFO|configuration_utils.py:289] 2020-12-29 22:49:16,277 >> Configuration saved in /tmp/test-mlm/config.json
[INFO|modeling_utils.py:814] 2020-12-29 22:49:16,818 >> Model weights saved in /tmp/test-mlm/pytorch_model.bin
12/29/2020 22:49:16 - INFO - __main__ - ***** Train results *****
12/29/2020 22:49:16 - INFO - __main__ -   epoch = 3.0
12/29/2020 22:49:16 - INFO - __main__ -   train_runtime = 383.9452
12/29/2020 22:49:16 - INFO - __main__ -   train_samples_per_second = 4.688
```
So all seems to be in order. <|||||>Thanks for taking a look @stas00 and for the examples! You are correct, I had transformers 4.1.1. Looks fine when run on master. I'll close this PR
transformers
9,352
closed
[trainer] parametrize default output_dir
This PR: * fixes trainer to have the logger agree with the actual default `output_dir`, by setting it in one place and passing it as an argument to both places. The current logger falsely informs the user that `output_dir` is the current path, while using `tmp_trainer` as the path. @patrickvonplaten, @sgugger
12-29-2020 23:00:49
12-29-2020 23:00:49
transformers
9,351
closed
XLNet evaluation on SQuAD
## Environment info - `transformers` version: 4.2.0dev0 - Platform: Linux-5.3.0-64-generic-x86_64-with-debian-buster-sid - Python version: 3.7.4 - PyTorch version (GPU?): 1.7.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help XLNet @LysandreJik ## Information Model I am using (Bert, XLNet ...): XLNet The problem arises when using: * [x] the official example scripts: **run_qa.py** * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: **squad v2** * [ ] my own task or dataset: (give details below) ## To reproduce I installed the transformer package from source, as required. When I try to evaluate XLNet on the SQUAD dataset, however, I get a problem. In particular, I run the official script as: ``` python run_qa.py \ --model_name_or_path xlnet-base-cased \ --dataset_name squad_v2 \ --do_eval \ --version_2_with_negative \ --learning_rate 1e-4 \ --per_device_eval_batch_size=1 \ --seed 1 \ --output_dir ../../../../squad_results ``` This is the whole output, most of which is probably non relevant, for reference (error in bold) 12/29/2020 22:41:21 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 2distributed training: False, 16-bits training: False 12/29/2020 22:41:21 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=../../../../squad_results, overwrite_output_dir=False, do_train=False, do_eval=True, do_predict=False, model_parallel=False, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=1, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=1e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Dec29_22-41-21_HLTNLP-GPU-B, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=1, fp16=False, fp16_opt_level=O1, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=../../../../squad_results, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, fp16_backend=auto, sharded_ddp=False, label_smoothing_factor=0.0, adafactor=False) Reusing dataset squad_v2 (/home/scasola/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/0e44b51f4035c15e218d53dc9eea5fe7123341982e524818b8500e4094fffb7b) loading configuration file https://huggingface.co/xlnet-base-cased/resolve/main/config.json from cache at /home/scasola/.cache/huggingface/transformers/06bdb0f5882dbb833618c81c3b4c996a0c79422fa2c95ffea3827f92fc2dba6b.da982e2e596ec73828dbae86525a1870e513bd63aae5a2dc773ccc840ac5c346 Model config XLNetConfig { "architectures": [ "XLNetLMHeadModel" ], "attn_type": "bi", "bi_data": false, "bos_token_id": 1, "clamp_len": -1, "d_head": 64, "d_inner": 3072, "d_model": 768, "dropout": 0.1, "end_n_top": 5, "eos_token_id": 2, "ff_activation": "gelu", "initializer_range": 0.02, "layer_norm_eps": 1e-12, "mem_len": null, "model_type": "xlnet", "n_head": 12, "n_layer": 12, "pad_token_id": 5, "reuse_len": null, "same_length": false, "start_n_top": 5, "summary_activation": "tanh", 
"summary_last_dropout": 0.1, "summary_type": "last", "summary_use_proj": true, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 250 } }, "untie_r": true, "use_mems_eval": true, "use_mems_train": false, "vocab_size": 32000 } loading configuration file https://huggingface.co/xlnet-base-cased/resolve/main/config.json from cache at /home/scasola/.cache/huggingface/transformers/06bdb0f5882dbb833618c81c3b4c996a0c79422fa2c95ffea3827f92fc2dba6b.da982e2e596ec73828dbae86525a1870e513bd63aae5a2dc773ccc840ac5c346 Model config XLNetConfig { "architectures": [ "XLNetLMHeadModel" ], "attn_type": "bi", "bi_data": false, "bos_token_id": 1, "clamp_len": -1, "d_head": 64, "d_inner": 3072, "d_model": 768, "dropout": 0.1, "end_n_top": 5, "eos_token_id": 2, "ff_activation": "gelu", "initializer_range": 0.02, "layer_norm_eps": 1e-12, "mem_len": null, "model_type": "xlnet", "n_head": 12, "n_layer": 12, "pad_token_id": 5, "reuse_len": null, "same_length": false, "start_n_top": 5, "summary_activation": "tanh", "summary_last_dropout": 0.1, "summary_type": "last", "summary_use_proj": true, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 250 } }, "untie_r": true, "use_mems_eval": true, "use_mems_train": false, "vocab_size": 32000 } loading file https://huggingface.co/xlnet-base-cased/resolve/main/spiece.model from cache at /home/scasola/.cache/huggingface/transformers/df73bc9f8d13bf2ea4dab95624895e45a550a0f0a825e41fc25440bf367ee3c8.d93497120e3a865e2970f26abdf7bf375896f97fde8b874b70909592a6c785c9 loading file https://huggingface.co/xlnet-base-cased/resolve/main/tokenizer.json from cache at /home/scasola/.cache/huggingface/transformers/46f47734f3dcaef7e236b9a3e887f27814e18836a8db7e6a49148000058a1a54.2a683f915238b4f560dab0c724066cf0a7de9a851e96b0fb3a1e7f0881552f53 loading weights file https://huggingface.co/xlnet-base-cased/resolve/main/pytorch_model.bin from cache at /home/scasola/.cache/huggingface/transformers/9461853998373b0b2f8ef8011a13b62a2c5f540b2c535ef3ea46ed8a062b16a9.3e214f11a50e9e03eb47535b58522fc3cc11ac67c120a9450f6276de151af987 Some weights of the model checkpoint at xlnet-base-cased were not used when initializing XLNetForQuestionAnsweringSimple: ['lm_loss.weight', 'lm_loss.bias'] - This IS expected if you are initializing XLNetForQuestionAnsweringSimple from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing XLNetForQuestionAnsweringSimple from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of XLNetForQuestionAnsweringSimple were not initialized from the model checkpoint at xlnet-base-cased and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Loading cached processed dataset at /home/scasola/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/0e44b51f4035c15e218d53dc9eea5fe7123341982e524818b8500e4094fffb7b/cache-c46fe459ef8061d5.arrow The following columns in the evaluation set don't have a corresponding argument in `XLNetForQuestionAnsweringSimple.forward` and have been ignored: example_id, offset_mapping. 
12/29/2020 22:41:30 - INFO - __main__ - *** Evaluate *** The following columns in the evaluation set don't have a corresponding argument in `XLNetForQuestionAnsweringSimple.forward` and have been ignored: example_id, offset_mapping. ***** Running Evaluation ***** Num examples = 12231 Batch size = 2 █████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6116/6116 [38:14<00:00, 3.32it/s]12/29/2020 23:19:57 - INFO - utils_qa - Post-processing 11873 example predictions split into 12231 features. 0%| | 0/11873 [00:00<?, ?it/s]**Traceback (most recent call last): | 0/11873 [00:00<?, ?it/s] File "run_qa.py", line 480, in <module> main() File "run_qa.py", line 461, in main results = trainer.evaluate() File "/home/scasola/survey/squad/xlnet/transformers/examples/question-answering/trainer_qa.py", line 62, in evaluate eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions) File "run_qa.py", line 407, in post_processing_function is_world_process_zero=trainer.is_world_process_zero(), File "/home/scasola/survey/squad/xlnet/transformers/examples/question-answering/utils_qa.py", line 195, in postprocess_qa_predictions while predictions[i]["text"] == "": IndexError: list index out of range** ## Expected behavior Evalaution of the model saved in the output dir
12-29-2020 22:25:01
12-29-2020 22:25:01
Pinging @sgugger here. Think he has more knowledge about the training script than I do.<|||||>This is linked to [this issue](https://github.com/huggingface/tokenizers/issues/552) in the tokenizers repo. Until this is solved, the script `run_qa` does not work properly with XLNet (the offset mappings computed are incorrect). You can use `run_qa_beam_search` with the XLNet model while waiting for the issue to be solved.<|||||>Hi @sgugger, thanks for your answer. However, I'm trying to do a (fair) comparison between models, so using beam search is not an option. I might install another package version that works well with XLNet on SQuAD (I've seen, for example, that v. 3.10 also has some problems in evaluation). Do you know if any previous version is ok, at the moment?<|||||>You can always use the [legacy script](https://github.com/huggingface/transformers/blob/master/examples/legacy/question-answering/run_squad.py) if you can't wait for the fix.<|||||>Thank you very much, I was unaware of legacy scripts. Do I need a particular transformers version to run them? When I run run_squad.py at the moment I get (errors in bolds) 01/05/2021 15:51:31 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False [INFO|configuration_utils.py:431] 2021-01-05 15:51:31,306 >> loading configuration file https://huggingface.co/xlnet-base-cased/resolve/main/config.json from cache at /home/scasola/.cache/huggingface/transformers/06bdb0f5882dbb833618c81c3b4c996a0c79422fa2c95ffea3827f92fc2dba6b.da982e2e596ec73828dbae86525a1870e513bd63aae5a2dc773ccc840ac5c346 [INFO|configuration_utils.py:467] 2021-01-05 15:51:31,307 >> Model config XLNetConfig { "architectures": [ "XLNetLMHeadModel" ], "attn_type": "bi", "bi_data": false, "bos_token_id": 1, "clamp_len": -1, "d_head": 64, "d_inner": 3072, "d_model": 768, "dropout": 0.1, "end_n_top": 5, "eos_token_id": 2, "ff_activation": "gelu", "initializer_range": 0.02, "layer_norm_eps": 1e-12, "mem_len": null, "model_type": "xlnet", "n_head": 12, "n_layer": 12, "pad_token_id": 5, "reuse_len": null, "same_length": false, "start_n_top": 5, "summary_activation": "tanh", "summary_last_dropout": 0.1, "summary_type": "last", "summary_use_proj": true, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 250 } }, "untie_r": true, "use_mems_eval": true, "use_mems_train": false, "vocab_size": 32000 } [INFO|configuration_utils.py:431] 2021-01-05 15:51:31,607 >> loading configuration file https://huggingface.co/xlnet-base-cased/resolve/main/config.json from cache at /home/scasola/.cache/huggingface/transformers/06bdb0f5882dbb833618c81c3b4c996a0c79422fa2c95ffea3827f92fc2dba6b.da982e2e596ec73828dbae86525a1870e513bd63aae5a2dc773ccc840ac5c346 [INFO|configuration_utils.py:467] 2021-01-05 15:51:31,608 >> Model config XLNetConfig { "architectures": [ "XLNetLMHeadModel" ], "attn_type": "bi", "bi_data": false, "bos_token_id": 1, "clamp_len": -1, "d_head": 64, "d_inner": 3072, "d_model": 768, "dropout": 0.1, "end_n_top": 5, "eos_token_id": 2, "ff_activation": "gelu", "initializer_range": 0.02, "layer_norm_eps": 1e-12, "mem_len": null, "model_type": "xlnet", "n_head": 12, "n_layer": 12, "pad_token_id": 5, "reuse_len": null, "same_length": false, "start_n_top": 5, "summary_activation": "tanh", "summary_last_dropout": 0.1, "summary_type": "last", "summary_use_proj": true, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 250 } }, "untie_r": true, "use_mems_eval": true, "use_mems_train": 
false, "vocab_size": 32000 } [INFO|tokenization_utils_base.py:1802] 2021-01-05 15:51:32,221 >> loading file https://huggingface.co/xlnet-base-cased/resolve/main/spiece.model from cache at /home/scasola/.cache/huggingface/transformers/df73bc9f8d13bf2ea4dab95624895e45a550a0f0a825e41fc25440bf367ee3c8.d93497120e3a865e2970f26abdf7bf375896f97fde8b874b70909592a6c785c9 [INFO|tokenization_utils_base.py:1802] 2021-01-05 15:51:32,222 >> loading file https://huggingface.co/xlnet-base-cased/resolve/main/tokenizer.json from cache at /home/scasola/.cache/huggingface/transformers/46f47734f3dcaef7e236b9a3e887f27814e18836a8db7e6a49148000058a1a54.2a683f915238b4f560dab0c724066cf0a7de9a851e96b0fb3a1e7f0881552f53 [INFO|modeling_utils.py:1024] 2021-01-05 15:51:32,564 >> loading weights file https://huggingface.co/xlnet-base-cased/resolve/main/pytorch_model.bin from cache at /home/scasola/.cache/huggingface/transformers/9461853998373b0b2f8ef8011a13b62a2c5f540b2c535ef3ea46ed8a062b16a9.3e214f11a50e9e03eb47535b58522fc3cc11ac67c120a9450f6276de151af987 [WARNING|modeling_utils.py:1132] 2021-01-05 15:51:35,070 >> Some weights of the model checkpoint at xlnet-base-cased were not used when initializing XLNetForQuestionAnsweringSimple: ['lm_loss.weight', 'lm_loss.bias'] ... 01/05/2021 15:51:37 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='../../../../../squad_data', device=device(type='cuda'), do_eval=True, do_lower_case=False, do_train=True, doc_stride=128, eval_all_checkpoints=True, evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=4, lang_id=0, learning_rate=0.001, local_rank=-1, logging_steps=500, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='xlnet-base-cased', model_type='xlnet', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=10.0, output_dir='../../../../squad_results/XLNet/1e-3/1', overwrite_cache=True, overwrite_output_dir=False, per_gpu_eval_batch_size=8, per_gpu_train_batch_size=8, predict_file=None, save_steps=4132, seed=1, server_ip='', server_port='', threads=1, tokenizer_name='', train_file=None, verbose_logging=False, version_2_with_negative=True, warmup_steps=4132, weight_decay=0.0) 01/05/2021 15:51:37 - INFO - __main__ - Creating features from dataset file at ../../../../../squad_data 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 442/442 [00:39<00:00, 11.33it/s]convert squad examples to features: 0%| | 0/130319 [00:00<?, ?it/s]multiprocessing.pool.RemoteTraceback: """ **Traceback (most recent call last):** File "/home/scasola/anaconda3/lib/python3.7/multiprocessing/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/home/scasola/anaconda3/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 189, in squad_convert_example_to_features return_token_type_ids=True, File "/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2462, in encode_plus **kwargs, File "/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/**tokenization_utils_fast.py**", line 465, in _encode_plus **kwargs, File 
"/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/**tokenization_utils_fast.py**", line 378, in _batch_encode_plus is_pretokenized=is_split_into_words, TypeError: TextInputSequence must be str """ **The above exception was the direct cause of the following exception: Traceback (most recent call last):** File "run_squad.py", line 833, in <module> main() File "run_squad.py", line 772, in main train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False) File "run_squad.py", line 461, in load_and_cache_examples threads=args.threads, File "/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 382, in squad_convert_examples_to_features disable=not tqdm_enabled, File "/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/home/scasola/anaconda3/lib/python3.7/multiprocessing/pool.py", line 325, in <genexpr> return (item for chunk in result for item in chunk) File "/home/scasola/anaconda3/lib/python3.7/multiprocessing/pool.py", line 748, in next raise value TypeError: TextInputSequence must be str This might be related to the tokenizer, as in #7735 . However, the used tokenizer should not be fast (see code snippet) even if it seems from the traceback that the fast tokenizer is actually called. Any workaround? ` tokenizer = AutoTokenizer.from_pretrained( args.tokenizer_name if args.tokenizer_name else args.model_name_or_path, do_lower_case=args.do_lower_case, cache_dir=args.cache_dir if args.cache_dir else None, use_fast=False, # SquadDataset is not compatible with Fast tokenizers which have a smarter overflow handeling )`<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>am having the same issue and a fix would be really nice...<|||||>Thank you for opening an issue - Unfortunately, we're limited on bandwidth and fixing QA for XLNet is quite low on our priority list. If you would like to go ahead and fix this issue, we would love to review a PR, but we won't find the time to get to it right away.
transformers
9,350
closed
[apex.normalizations.FusedLayerNorm] torch.cuda.is_available() is redundant as apex handles that internally
This PR is a follow up to https://github.com/huggingface/transformers/issues/9338 According to https://github.com/huggingface/transformers/issues/9338#issuecomment-752242098 we can just remove the `torch.cuda.is_available()` check before importing `apex.normalizations.FusedLayerNorm` and the multiprocess problem will go away. Fixes #9338 @patrickvonplaten
12-29-2020 21:47:23
12-29-2020 21:47:23
Thanks a lot for digging into this @stas00
transformers
9,349
closed
[prophetnet] wrong import
``` python -c "from apex.normalization import FusedProphetNetLayerNorm" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: cannot import name 'FusedProphetNetLayerNorm' from 'apex.normalization' (/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/apex/normalization/__init__.py) ``` It looks like this code has never been tested, so it silently fails inside try/except. Discovered this by accident in https://github.com/huggingface/transformers/issues/9338#issuecomment-752217708 @patrickvonplaten, @LysandreJik note, prophetnet is missing from .github/PULL_REQUEST_TEMPLATE.md, .github/ISSUE_TEMPLATE/bug-report.md
12-29-2020 19:53:50
12-29-2020 19:53:50
transformers
9,348
closed
Fix TF Longformer
# What does this PR do? This PR aims to fix the TF Longformer version in order to make it graph compliant. As discussed offline with @patrickvonplaten, `all_global_attentions` is now added to the output when `output_attentions=True`. The global attentions are filled with zeros in case `is_global_attn` is False (see line 897 in `TFLongformerSelfAttention`). # Fixes issue #9333
12-29-2020 19:44:08
12-29-2020 19:44:08
I have already run the slow tests as well and they all pass!
transformers
9,347
closed
[trainer] --model_parallel hasn't been implemented for most models
Apparently we unleashed `--model_parallel` in trainer w/o checking if the model supports MP (most don't). This PR: * [x] checks whether the model supports MP and asserts otherwise * [x] fixes the cl arg help to note that the flag will only work if the model supports MP As we are gradually starting to build MP-support a cleaner solution will be made in the future, but for now this is good enough to prevent misleading false expectations as reported in https://github.com/huggingface/transformers/issues/9336 (Also for the future, I'm not sure whether it'd be better to check `model.config.architectures`, which would be more precise than checking `model_type` since it's the `architectures` that may or may not support MP within the same `model_type` - but that's a different discussion). Fixes: #9336 @patrickvonplaten, @sgugger
12-29-2020 19:12:37
12-29-2020 19:12:37
@alexorona proposed to have the `model_parallel` method in `PreTrainedModel`, https://github.com/huggingface/transformers/pull/9323#issuecomment-752352280 which then would break this code as it'd be then present in all models. I see this PR as a quick band-aid since we released the new cl arg w/o checking that it always works. And then we will surely improve it as we generalize MP and not leave it this way. This is definitely not how it'll remain in the long run.<|||||>So should we merge this one as a hot-fix? ------------- An absolute yes to `PreTrainedModel.parallelizable` accessor - default `False`, then a `True` override for each specific model head that implements it - better than checking arch which doesn't guarantee that it'll have all heads parallelizable. And also what do you think about tests? Currently we hardcore a list of parallelizable models: https://github.com/huggingface/transformers/blob/086718ac6e20ca2e2cfa3aa0f6da9dc7ee34f6c6/tests/test_modeling_t5.py#L491 should it remain this way or should we automatically derive those from the model by iterating over `all_model_classes`: https://github.com/huggingface/transformers/blob/086718ac6e20ca2e2cfa3aa0f6da9dc7ee34f6c6/tests/test_modeling_t5.py#L489 and automatically deriving which are parallelizable. Less code to write in the future. <|||||>I'd rather merge as a hotfix the proper check and then worry about the tests in a follow up PR (I think we should have a combination of a flag (like for pruning) and checking the models having the attributes there).<|||||>It no longer will be hot, but yes, I will code that ;) thank you for the feedback, @sgugger > I think we should have a combination of a flag (like for pruning) and checking the models having the attributes there). I'm not sure what you mean here. An example would be helpful to understand what you propose.<|||||>The class `ModelTesterMixin` has a few attributes that control what common tests to apply. I just realized while reading it that it already has the `test_model_parallel` flag so this part is done already. All that is left is just to infer the models to test from the presence of the right attribute :-)<|||||>OK, I added `model.is_parallelizable` property - let me know if this looks good, or whether you prefer not using a property. if you prefer w/o `is_` or not have it a property please let me know.<|||||>> I'm fine with this design but it differs from what we were talking about, so we should check the others are fine with it too before merging. Yes, of course. that's why it is no longer a hotfix, but it seems to be fine - only one user has filed an issue about using a non-working `--model_parallel` so far.<|||||>So since the only change I proposed is from `parallelizable` to `is_parallelizable`, do you still think we ought to re-validate with @LysandreJik?<|||||>Yes, let's wait for him to review this tomorrow morning (he's on European time for the next month or so).
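For illustration, here is a rough sketch of the flag-based design discussed in this thread; the class and function names below just mirror the discussion and are not the exact code that was merged.

```python
import torch.nn as nn

class PreTrainedModel(nn.Module):
    # default: the model does not implement model parallelism
    is_parallelizable = False

class ParallelizableModel(PreTrainedModel):
    # a model that implements parallelize()/deparallelize() flips the class attribute
    is_parallelizable = True

def check_model_parallel(model: nn.Module, model_parallel: bool) -> None:
    # the Trainer-side guard then reduces to a single attribute check
    if model_parallel and not getattr(model, "is_parallelizable", False):
        raise ValueError("--model_parallel was passed, but this model does not support model parallelism.")
```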
transformers
9,346
closed
[Seq2Seq Templates] Add forgotten imports to templates
# What does this PR do? I accidentally forgot to add this import in #9342.
12-29-2020 18:20:42
12-29-2020 18:20:42
transformers
9,345
closed
Training of BART slow on TPU - aten ops investigation
Referencing: https://github.com/huggingface/transformers/issues/8339 Versions: * `transformers==4.0.1` * `pytorch==1.7.0` * `pytorch_xla==1.7.0` ### problem I've been trying to find out why training of `BartForConditionalGeneration` with `Trainer` is so **slow on TPU**. With slow I mean >30 min/batch (8000 samples) on 8 cores of TPUv3. I can very likely rule out slowdowns caused by the host machine. Following the [xla troubleshooting guide](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md), I made sure that the **input tensors have fixed shape**. Furthermore, I created a `metrics` report which reveals that many context switches occur between the XLA device and CPU due to: * aten::_local_scalar_dense * aten::isnan I tried to localize the culprits via **debugging** on 1 TPU core and printing the metrics report at each breakpoint. `aten::isnan` is obviously caused by `torch.isnan` in [L362 of the Bart encoder layer](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L362). I don't know any fix but I'd just turn the condition off for my training. ### questions 1. According to this [issue](https://github.com/pytorch/xla/issues/909), the counter for `aten::_local_scalar_dense` increases "every time the Python code switches from tensor context to Python scalar context". I couldn't really pinpoint, however, which line causes this. I'm aware that the printing of the metrics report at each breakpoint does so, but there are others. Any idea? 2. Just to be on the safe side: `pytorch_xla`'s wrapper `ParallelLoader` for the `DataLoader` shouldn't have anything to do with TPU-related slowdown or context switches in e.g. the `Dataset` instance, because the loading still takes place on the host machine and the tensors are put into TPU queues _afterwards_ as far as I read the code, correct? 3. Is training of `BART` (and probably similar models) running fast on TPU for anyone? AFAIU @patil-suraj knows more.
12-29-2020 17:42:32
12-29-2020 17:42:32
Hi @phtephanx Thank you for troubleshooting the problem. I haven't really dived into this problem yet. Maybe @LysandreJik (our TPU expert ) will be able to help here once he's back from vacation ;) Also cc @patrickvonplaten <|||||>Hey @phtephanx, Thanks for opening this issue! I'm actually very interested in solving this problem as well...did you try executing Bart on TPU with just a simple training loop to see if the slow-down persists? Maybe we can work together in a colab to solve this problem. Feel free to dive a bit more into the problem if you feel like it (I'd suggest on a google colab with TPU) and I'm happy to guide you along the way. Else I hope to find some time in mid-January to tackle the problem :-) <|||||>@patrickvonplaten Sorry for the late reply. > Thanks for opening this issue! I'm actually very interested in solving this problem as well...did you try executing Bart on TPU with just a simple training loop to see if the slow-down persists? Maybe we can work together in a colab to solve this problem. Feel free to dive a bit more into the problem if you feel like it (I'd suggest on a google colab with TPU) and I'm happy to guide you along the way. Else I hope to find some time in mid-January to tackle the problem :-) Would be great if we could cooperate on this because I'm stuck with this for a while! ;-) I created two minimalistic training notebooks: * **(I)** [BART on TPU w/o Trainer](https://colab.research.google.com/drive/10crQewhWImt9vHD1UJo-HnzzSgbwwFyA?usp=sharing) * **(II)** [BART on TPU w/ Trainer](https://colab.research.google.com/drive/1C_8EmDmnisYPLfIkwL7tu-_iVWoPuu1I?usp=sharingv) **Settings:** * `BART-base` * `batch_size=64` * `gradient_accumulation_steps=1` **Observations:** * **(I)** and **(II)** run almost equally fast. Their graph seems to stabilize after approx. 4 steps and subsequently runs at constant throughput of 0.43 [it/s] (~ 2.34 [s/it]) * **(I)** and **(II)** exhibit (almost) the same count of `aten::isnan` and `aten::_local_scalar_dense`. Thus, no significant number of ops (only 1) w/o XLA lowering is introduced by the `Trainer` **Conclusions:** * The throughput of both is really decent which indicates IMO that everything is probably ok with the TPU adaptation of `BART` and `Trainer` even if these two ops w/o XLA lowering occur * (I couldn't, however, reproduce this throughput on a private GCE VM, so far at all. If you're still interested, I'll take the exact same script and report!) BTW: I also tried out the training loop of (I) **without** wrapping it into a function and calling `xmp.spawn` on it. The throughput is very low and the graph never stabilizes which was suggested by a `CompileTime` of more than 20 [min].<|||||>**Reproducing (II) on GCE VM:** I conducted a run for **(II)** on **1** core of TPUv3. Apart from the `CompileTime` being considerably larger, which is expected because Colab somehow works instantaneously, the execution time is similar ([metrics_report.txt](https://github.com/huggingface/transformers/files/5762134/tpu_bart_w_trainer_1_core.txt)). It was actually a bit faster: 6 [s] on GCE VM vs. 12 [s] on Colab. Furthermore, I observed that the `CompileTime` is even noticeably larger when using **8** cores. (It might be that during my actual targeted training of `BART-large` on 8 cores, the stabilization phase of the graph simply takes much longer than for `BART-base` and I never arrived at a stable graph).<|||||>Hmm, so it works as expected on a google colab, but not on a private machine? 
The behavior you described for (I) and (II) seems reasonable to me. It's normal that compilation time is quite high for PyTorch/XLA IMO<|||||>I did some runs for `BART-large` on GCE TPU v3-8 with different settings using `Trainer`: | batch-size | lengths | num-cores | grad-acc | initial-speed [s/it] | final-speed [s/it] | final speed at step | |------------|---------|-----------|----------|----------------------|--------------------|---------------------| | 1 | 128 | 1 | 1 | 120 | 1 | ~7 | | 32 | 128 | 1 | 1 | 318 | 2.3 | ~6 | | 32 | 128 | 8 | 1 | 300 | 3 | ~12 | | 32 | 128 | 8 | 4 | 440 | 14.2 | ~20 | Extrapolating these numbers to the hparams used by the authors (batch size of 8000) results in the "slow" throughput due to which I opened the issue. I think, everything is fine - one just needs a bigger device like TPU v3-128. Out of scope for me. @patrickvonplaten Feel free to close unless there's something else we can discuss / tune.<|||||>Hey @phtephanx, Thanks a lot for posting this, it's very useful! Yeah, I think for now I don't see a big issue either<|||||>> Hey @phtephanx, > > Thanks a lot for posting this, it's very useful! Yeah, I think for now I don't see a big issue either You're welcome ;)<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
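For completeness, the `xmp.spawn`-wrapped loop mentioned in the notebooks above roughly follows the skeleton below (a sketch only: the model, data and hyperparameters are placeholders, not the BART fine-tuning setup itself).

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    device = xm.xla_device()
    # build model/optimizer inside the worker so each TPU core traces its own graph
    model = torch.nn.Linear(768, 768).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    for step in range(10):
        batch = torch.randn(64, 768, device=device)  # fixed shapes avoid recompilation
        loss = model(batch).pow(2).mean()
        loss.backward()
        # barrier=True marks the step, since no ParallelLoader is used in this toy loop
        xm.optimizer_step(optimizer, barrier=True)
        optimizer.zero_grad()

if __name__ == "__main__":
    xmp.spawn(_mp_fn, args=(), nprocs=8, start_method="fork")
```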
transformers
9,344
closed
MBart prepare_seq2seq_batch
- `transformers` version: 4.1.1 - mBART: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): mBART The problem arises when using: * [x] the official example scripts: (give details below) `example_english_phrase = "UN Chief Says There Is No Military Solution in Syria" expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria" batch = tokenizer.prepare_seq2seq_batch(example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian, return_tensors="pt") model(input_ids=batch['input_ids'], labels=batch['labels']) # forward pass` The encoded target sequence from "prepare_seq2seq_batch" is inconsistent with its description "[tgt_lang_code] X [eos]". Moreover, "labels" doesn't seem to be the appropriate param for the model input.
12-29-2020 16:31:48
12-29-2020 16:31:48
Hi @Chiyu-Song Not sure what exactly is the issue here. What do you mean by > encoded target sequence from "prepare_seq2seq_batch" is inconsistent with its description Also would be nice if the code snippet is formatted, bit hard to read :)<|||||>Thank you for your reply @patrickvonplaten, let me rephrase the issue a bit. ![image](https://user-images.githubusercontent.com/66252554/103333727-cd58a800-4aa9-11eb-8e6a-3d0921dbe94c.png) The screenshot above is from the [MBart official documentation](https://huggingface.co/transformers/model_doc/mbart.html), introducing how to use "prepare_seq2seq_batch()" to encode input sequences for fine-tuning. However, after running the example code on the screenshot, I got something unexpected: 1. "prepare_seq2seq_batch()" returns a dict with three keys "input_ids", "attention_mask" and "labels". The value of "labels" is actually the encoder_input_ids, so I think this key name is a bit confusing. 2. The returned "labels"(encoder_input_ids) has a format "X [eos, tgt_lang_code]", but according to the description on the screenshot, it supposes to be "[tgt_lang_code] X [eos]". 3. On the last line of the code snippet, "labels" doesn't seem to be the appropriate param for the model input, I believe "encoder_input_ids" should be used instead for fine-tuning. Opinions are my own, plz feel free to correct any of my misunderstandings.<|||||>1. `labels` is the correct name. `labels` are the tokens/output we expect the model (specifically the `decoder` to generate). The `MbartForConditionalGeneration` model prepares the `decoder_input_ids` (which are fed as input to the decoder) using the `labels`. It's a convention to use the name `labels` for the output of models. here `input_ids` is the input to the `encoder`. We don't use the name `encoder_input_ids` 2. As said above the model prepares the `decoder_input_ids` from `labels`. It does so by shifting the `labels` to the right and wrapping around the last token. so if the `labels` are `X [eos, tgt_lang_code]` then `decoder_input_ids` are prepared as follows `[tgt_lang_code] X [eos]` i.e shift to the right and wrap around the last token. which is the target format expected by the model. 3. `labels` is not the name for model input, it's the output name used by all library models. `input_ids` is the model input. And yes, you are right. The doc is a little confusing. In the doc target text actually refers to the decoder input which is prepared using `labels`. Feel free to raise a PR to fix the doc :) <|||||>Hi Suraj, First of all, for the sake of clarity, I'd use the name "tokenizer.labels" to represent the prepared decoder_input_ids returned by "prepare_seq2seq_batch()". And use "model.labels" to represent the input param for "BartForConditionalGeneration" model. According to the documentation, this param is used for computing the MLM loss and should has nothing to with the decoder_input_ids. I double checked the source code of Bart model, in "BartForConditionalGeneration" it has a line of code like this: `decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)` which means it gets decoder_input_ids by shifting model.labels. It indeed fixes the thrid point I mentioned in the previous comment, but also breaks the designed functionality of model.labels. To me, it seems like using a bug to cover another, and I really believe someone confused tokenizer.labels with model.labels during implementation. -Chiyu<|||||>> I'd use the name "tokenizer.labels It's already called `labels`. 
`prepare_seq2seq_batch` returns `input_ids`, `attention_mask`, `labels` and `decoder_attention_mask` > but also breaks the designed functionality of model.labels AFAIK it doesn't break any functionality. Could you show an example where it breaks? <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
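For readers of this thread, the wrap-around right shift described above can be sketched as follows; this is close to, but not guaranteed to be identical with, the library's `shift_tokens_right` at that version.

```python
import torch

def shift_tokens_right(labels: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Turn `X [eos, tgt_lang_code]` labels into `[tgt_lang_code] X [eos]` decoder inputs."""
    decoder_input_ids = labels.clone()
    # index of the last non-pad token (the target language code for MBart labels)
    index_of_last_token = (labels.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    # wrap that token around to position 0 ...
    decoder_input_ids[:, 0] = labels.gather(1, index_of_last_token).squeeze()
    # ... and shift everything else one position to the right
    decoder_input_ids[:, 1:] = labels[:, :-1]
    return decoder_input_ids
```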
transformers
9,343
closed
[PyTorch Bart] Split Bart into different models
# What does this PR do? This PR splits all Bart-like models into their own respective classes for PyTorch models only. This is more in line with the general philosophy of the library to have self-contained model files. As discussed with @jplu, the TF models will be separated in a future PR as there are still some issues and improvements (TF serving) blocking the separation - see https://github.com/huggingface/transformers/issues/9313. In short, after this PR all those "model-specific" config parameters are removed from all Bart-like configs: - `extra_pos_embeddings` - `normalize_embedding` - `add_final_layer_norm` - `normalize_before` - `do_blenderbot_90_layernorm` - `static_position_embeddings` - `add_bias_logits` - `force_bos_token_to_be_generated` (this one has to be kept for Bart though) and each "bart" model (Pegasus, Bart, MBart, Marian, Blenderbot, BlenderbotSmall) will get its own `modeling_....py` file. At the moment the models have the following configurations: | | `extra_pos_embeddings` | `normalize_before` | `add_final_layer_norm` | `do_blenderbot_90_layernorm` | `normalize_embedding` | `static_position_embeddings` | `add_bias_logits` | `force_bos_token_to_be_generated` | |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | `bart` | 2 | ❌ | ❌ | ❌ | ✔️ | ❌ | ❌ | ✔️ | | `mbart` | 2 | ✔️ | ✔️ | ❌ | ✔️ | ❌ | ❌ | ❌ | | `marian` | ❌ | ❌ | ❌ | ❌ | ❌ | ✔️ | ❌ | ❌ | | `pegasus` | ❌ | ✔️ | ✔️ | ❌ | ❌ | ✔️ | ❌ | ❌ | | `blenderbot90M (BlenderbotSmall)` | 0 | ❌ | ❌ | ✔️ | ✔️ | ❌ | ❌ | ❌ | | `blenderbot3B + rest (Blenderbot)` | 0 | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ❌ | ❌ | We can see that `add_bias_logits` is actually never used, so I think the best option is to just delete the functionality. Also, one can see that no two models have the exact same usage of the above params, so we'll make 6 different modeling_....py files. ## Resulting Improvements: - The model files are much more readable and should be much easier to navigate for the user. No difficult config parameters anymore where the user doesn't know what to set anyways, such as `normalize_before`. - All irrelevant Bart-like features for other models are removed. Those features are a) never mentioned in the paper, b) don't make any sense since the model wasn't trained with those features, so that the usage of those features leads to non-sense outputs. *E.g.* Marian was never supposed to be a "mask-filling" model, yet it has "mask-filling" functionality, when doing: ```python marian = MarianMTModel.from_pretrained(...) marian(input_ids) # no decoder_input_ids for mask filling like tasks such as in Bart # => output makes 0 sense ``` The big gain here is that users are better guided on how to use the model and wonder less about whether the model is used correctly & whether there is a bug in the model. - Docstrings are improved with more Model-specific examples and fewer comparisons to Bart. *E.g.* Pegasus, Marian, and Blenderbot never really mention BART in their paper and have no direct relation to BART IMO => these models should not be compared to BART in the docs -> it's confusing for the user - Some small improvements, memory is slightly improved for beam search and gradient checkpointing is added. - All previous tests are copied + some additional tests are added for each model ## Possible drawback - The drawback as expected is code duplication. This is remedied to some extent by using the # Copied from ... 
safety features - Some breaking changes as explained further below - Models might now diverge easier in the future which could make it harder to have the same API for training. This is however also prevented by some function signature tests that are already implemented. ## Breaking changes 🚨🚨 **Important: We cannot keep 100% backward compatibility here or the PR won't make much sense** 🚨🚨 - Since all models were packed into a single model file a lot of different model design are at the moment possible. E.g. Pegasus was only ever used with Sinusoidal position embeddings (as mentioned in the paper) but since it's merged into `modeling_bart.py`, one could theoretically use Pegasus with Learned position embeddings. This is not done in any config on the model hub however and will not be possible anymore after the PR. Also, Marian's model design has never normalized the word embeddings, but it could be possible with the current design. But again no config in the model hub does that, so this will also not be possible anymore after the PR. **In short: All model designs that were never foreseen in the original model and that are never used on the model hub at the moment won't be allowed anymore after the PR**. If we would not make this change, it would mean that we would have to keep all those `normalize_before` configs, which in return would mean that the modeling code of all Bart-like models would be the same again. - Blenderbot needs to be divided into two models IMO. Blenderbot 90M not only has a very different architecture (see table above), but also uses a different tokenizer. I created a new `BlenderbotSmallModel` class. Thus I need to update one Blenderbot config online, changing it's class. This means that from this PR onward the following is not supported anymore: ```python from transformers import BlenderbotForConditionalGeneration model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-90M") # => this is a wrong model. It should be model = BlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot-90M") ``` That's a big breaking change, but I don't see another way. If we keep the small blenderbot in the "normal" blenderbot, we have to keep the config params `normalize_before` which I really don't want to do.... I think the best option here is to add a warning (or even an error) by overwriting `from_pretrained(...)` in `BlenderbotForConditionalGeneration` so that ```python model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-90M") ``` will throw an error or give a warning. There are no fine-tuning blenderbot models on the hub and this is the only config. I think it's the right approach to separate the model here - Barthez has essentially a `mbart` architecture, but has `bart` defined as its `model_type` in the configs. Here I'd also like to change the configs online to make sure the correct model is loaded when using `AutoModelForSeq2SeqLM`. I should also contact the author here. - Bart allowed to automatically create `decoder_input_ids` by shifting the `input_ids` to the right. Thus, in Bart one can do the following: ```python bart = BartForConditionalGeneration(...) bart(input_ids) # not that no decoder_input_ids are passed here ``` This is a very special case and should only be used for Bart-like denoising pre-training or mask-filling. The only models that were trained in this fashion and thus can do mask-filling are Bart and MBart. 
All other models cannot do mask-filling, so `decoder_input_ids` should never be created from shifting `input_ids` => this feature is therefore removed from Pegasus, Marian, Blenderbot, and BlenderbotSmall. Those are all breaking changes. Blenderbot is the big one, the others should be fine. To be sure, I wrote some scripts that verify that no model on the model hub that contains one of the keywords `bart`, `mbart`, `pegasus`, `blenderbot`, `opus-mt`, `barthez` has incorrect/unexpected parameter settings after the PR. ## TODO: - [x] Create Bart model file & pass all tests - [x] Create MBart model file & pass all tests - [x] Create Pegasus model file & pass all tests - [x] Create Marian model file & pass all tests - [x] Create Blenderbot model file & pass all tests - [x] Create BlenderbotSmall model file & pass all tests - [x] Clean PR (delete all helper files) - [x] Clean docs - [x] Add #Copied From statements - [x] Do a very in-detail review of my own PR to make sure no hidden bugs were introduced. - [x] Correct configs of barthez online to be of type `mbart` instead of `bart`. - [x] Correct config of https://huggingface.co/facebook/blenderbot-90M online. ## Future TODO: - [ ] Communicate about this PR on the forum - [ ] Add Copied From statements to seq2seq bart model templates - [ ] Add Copied From statements to LED
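As a hedged illustration of the `from_pretrained` warning idea mentioned in the breaking-changes section above, a user-side helper could look like the sketch below; this is not the code added by the PR, and the warning text is made up for the example.

```python
import warnings
from transformers import BlenderbotForConditionalGeneration, BlenderbotSmallForConditionalGeneration

def load_blenderbot(checkpoint: str):
    # route the one small checkpoint to the new class and warn about the old usage
    if checkpoint == "facebook/blenderbot-90M":
        warnings.warn(
            "facebook/blenderbot-90M uses the BlenderbotSmall architecture; "
            "loading it with BlenderbotSmallForConditionalGeneration instead.",
            FutureWarning,
        )
        return BlenderbotSmallForConditionalGeneration.from_pretrained(checkpoint)
    return BlenderbotForConditionalGeneration.from_pretrained(checkpoint)
```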
12-29-2020 15:55:30
12-29-2020 15:55:30
One important comment I forgot to add to my review: I don't think we should adapt the `research_project` to the new structure as it has been pinned to an earlier version of transformers (3.5.1). So apart from the duplicate file deleted, the other changes should be reverted IMO.
transformers
9,342
closed
[Seq2Seq Templates] Add embedding scale to templates
# What does this PR do? The config.embed_scale parameter is too heavily used in Bart-like models to delete it in future leaner bart versions.
12-29-2020 15:46:09
transformers
9,341
closed
[PyTorch Bart] Split Bart
# What does this PR do?
12-29-2020 15:37:45
12-29-2020 15:37:45
transformers
9,340
closed
Possible bug in `train_batch_size`
## Environment info - `transformers` version: 4.1.1 - Platform: Linux-4.4.0-62-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help Trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [X ] my own modified scripts: (give details below) I'm running a model on a toy dataset with only 2 examples and a batch size of 2. In trainer, `num_examples` is 2, but `total_train_batch_size` is 12 even though I do not have the `model_parallel` flag set to `True` (Note I do have 6 GPUs available on the machine). This doesn't seem to impact my code because `train_dataset_is_sized=True`, but it seems strange. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X] my own task or dataset: (give details below) toy classification jsonl dataset with 2 examples ## To reproduce I think that [this line](https://github.com/huggingface/transformers/blob/64103fb6beac8cc865945d3956266fd80b44f18f/src/transformers/training_args.py#L454) has an unnecessary `not`. Should this be `if self.model_parallel` instead of `if not self.model_parallel`? Thanks!
12-29-2020 14:12:11
12-29-2020 14:12:11
Think @sgugger can best answer here when he's back from holiday :-) <|||||>You misunderstand the flag `model_parallel`, it's not there to enable the use of several GPUs as this is done automatically by the `Trainer` (you have to set `CUDA_VISIBLE_DEVICES` to just one GPU if you don't want the Trainer to use them all). That flag is there to split the model layers on the various GPUs available (only available for a few models).<|||||>Got it, I didn't realize that the Trainer automatically uses multiple GPUs if visible. Thanks!
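For completeness, a minimal sketch of the single-GPU restriction mentioned above (the shell form `CUDA_VISIBLE_DEVICES=0 python train.py ...` is equivalent; the script name is a placeholder):

```python
import os

# hide all but one GPU *before* anything initializes CUDA, so the Trainer sees a single device
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from transformers import Trainer, TrainingArguments  # imported after the environment variable is set
```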
transformers
9,339
closed
Arrow file is too large when saving vector data
I computed sentence embeddings for each sentence of the BookCorpus data using BERT base and saved them to disk. I used 20M sentences and the obtained Arrow file is about 59GB, while the original text file is only about 1.3GB. Are there any ways to reduce the size of the Arrow file?
12-29-2020 13:16:36
12-29-2020 13:16:36
I think it's expected because with `bert-base` each token will be embedded as a 768-dimensional vector. So if an example has n tokens then the size of the embedding will be `n*768`, and these are all 32-bit floating-point numbers.<|||||>Yes. I use `datasets`, and I think this is a question about `datasets`: how to save vector data in a compressed format to reduce the size of the file. So I'll close this issue.
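To make the arithmetic concrete and sketch one possible mitigation (assuming one 768-dimensional float32 vector per sentence; the file name and the small stand-in array are illustrative):

```python
import numpy as np

num_sentences, dim = 20_000_000, 768
print(num_sentences * dim * 4 / 1e9)  # ~61 GB of raw float32 values, so ~59GB on disk is expected

# one mitigation: store the vectors as float16, which roughly halves the footprint
embeddings = np.random.rand(1_000, dim).astype(np.float32)  # stand-in for real sentence vectors
np.save("sentence_embeddings_fp16.npy", embeddings.astype(np.float16))
```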
transformers
9,338
closed
Multiprocessing CUDA issues when importing transformers
## Environment info - `transformers` version: 4.2.0dev0 - Platform: Linux-4.15.0-128-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Partly ### Who can help @stas00 ## Information When using multiprocessing, importing the transformers package causes `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method` The problem arises when using: * `import transformers` is used in the main process. This is due to the [following line](https://github.com/huggingface/transformers/blob/master/src/transformers/models/fsmt/modeling_fsmt.py#L268) in `modeling_fsmt.py`, removing `torch.cuda.is_available()` call resolves the issue. ## To reproduce ``` import multiprocessing import transformers # NOQA # You can also call torch.cuda instead of import transformers to get the same error import torch.nn as nn class Net(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(10, 10) def forward(self, x): return self.linear(x) def to_cuda(i): net = Net().cuda() print(f'Called {i} process') try: cpus = multiprocessing.cpu_count() except NotImplementedError: cpus = 2 # arbitrary default pool = multiprocessing.Pool(processes=cpus) pool.map(to_cuda, range(10)) ``` ## Expected behavior The code snippet above runs without issues.
12-29-2020 11:17:37
12-29-2020 11:17:37
Hey @maxjeblick, We do need to be able to call `torch.cuda` in our model code. We cannot "forbid" calls to `torch.cuda` in order to allow quite specific use cases where `.cuda()` is mixed with multiprocessing. IMO, using multiprocessing in combination with `.cuda()` is quite an edge case. Are you trying to run your model on multiple GPUs? I'd suggest to fork the repo and delete this line if you really need this feature.<|||||>Hey @patrickvonplaten thanks for the fast response! I agree that using multiprocessing with `.cuda` calls is not that common, one usecase would be multi-gpu inference without DDP. The `torch.cuda.is_available()` call in `modeling_fsmt.py` is currently the only place which causes the `RuntimeError`; all other `.cuda` calls are either fine (e.g. `from torch.cuda.amp import autocast` in `trainer.py`) or not executed during the `import transformers` statement.<|||||>It affects bart as well. I see ProphetNetLayerNorm solved it by using a runtime wrapper. https://github.com/huggingface/transformers/blob/912f6881d2b69f180522172a5283702bd8c41d9c/src/transformers/models/prophetnet/modeling_prophetnet.py#L513-L521 So it's modeling_bart and modeling_fsmt are the only 2 that have this check at import time I wonder if we can just remove this check altogether. Won't `from apex.normalization import FusedLayerNorm` fail w/o cuda? And then we actually don't need that check. We need to verify that.<|||||>heh, it looks like `FusedProphetNetLayerNorm` importing is silently failing, since it doesn't exist in `apex` ;) looks like untested code. Fix proposed at https://github.com/huggingface/transformers/pull/9349 <|||||>Hmm, it works just fine without cuda: ``` CUDA_VISIBLE_DEVICES="" python -c "from apex.normalization import FusedLayerNorm; print(FusedLayerNorm(10))" FusedLayerNorm(torch.Size([10]), eps=1e-05, elementwise_affine=True) ``` Do you know why did we need that check in first place? The doc page https://nvidia.github.io/apex/layernorm.html doesn't say anything about needing cuda.<|||||>@t-vi figured it out - `apex.normalization.FusedLayerNorm` falls back on to non-cuda gracefully: https://github.com/NVIDIA/apex/blob/master/apex/normalization/fused_layer_norm.py#L154 so the `if torch.cuda.is_available()` check is not needed in first place.<|||||>> needed Nice, so we could actually remove this import statement then from all `FusedLayerNorm` classes no? <|||||>yes, working on this. - just 3 classes Done: https://github.com/huggingface/transformers/pull/9350<|||||>> so we could actually remove this import statement then from all FusedLayerNorm classes no? but, wait, what import statement are you referring to? and I guess `modeling_fsmt` is waiting for the refactoring, right? So since Bart I see has this import-time check removed already, so it'd follow suit anyway. <|||||>maxjeblick, while we are sorting out the nuances you can just remove that check so that you could move forward. No matter the outcome that particular call that was getting in your way won't be there once the dust settles. <|||||>Related: https://github.com/huggingface/transformers/issues/9227 I found that the changes proposed above weren't enough for cuda multiprocessing.<|||||>@jethrokuan - I didn't get a chance to try to reproduce/investigate this deeper so my commentary is as good as the discussions I read about it - I assume you tried the proposed by pytorch developers to switch to `torch.multiprocessing.set_start_method('spawn')` and it either didn't help or it works but you can't use it? 
https://github.com/pytorch/pytorch/issues/40403 FWIW, we started discussing postponing the loading of 3rd party modules here https://github.com/huggingface/transformers/issues/8733 and @sgugger came up with an Optuna-like solution here https://github.com/sgugger/lazy_init - perhaps it can be applied to everything.<|||||>The `spawn` method isn't supported by the web microframework we use, so that's not really an option. The option I went with for my use case was deferring the loading of `transformers` itself. My cursory investigation showed that `import transformers` already initializes cuda, which wasn't the case a few versions of transformers ago (3.1, I believe, was fine).<|||||>Thank you for clarifying that `spawn` is not an option, @jethrokuan. Perhaps `transformers` needs an option to defer its loading for such cases. I think @sgugger may have some insights when he is back next week, as he has invested time into looking into deferral in general.<|||||>Fixed by https://github.com/huggingface/transformers/commit/ae333d04b29a25be1a70eaccd6260c294c243c5b - thanks a lot!
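As a reference for readers, the import-time pattern settled on in this thread can be sketched roughly as follows. This is a simplified illustration, not the exact code merged in the linked PRs; the point is that the apex import itself handles the no-CUDA case, so no `torch.cuda.is_available()` call is needed at import time:

```python
# Simplified sketch of the "no CUDA query at import time" pattern discussed above.
import torch.nn as nn

try:
    # apex's FusedLayerNorm falls back gracefully when CUDA is unavailable.
    from apex.normalization import FusedLayerNorm as LayerNorm
except ImportError:
    # Plain PyTorch fallback when apex is not installed.
    LayerNorm = nn.LayerNorm

layer_norm = LayerNorm(768)  # CUDA is only touched once tensors/modules are explicitly moved to a GPU
```

Keeping all CUDA queries out of module import is what makes `import transformers` safe to combine with fork-based multiprocessing, which is the use case that started this thread.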
transformers
9,337
closed
[WIP][Research projects] Add folder for Music AI / Music Transformers
A new dir specifically for Music AI/Music Transformers. Created as suggested by Patrick von Platen. I am still figuring out PRs so please correct this PR if I've done something wrong. Thank you for your help/guidance and for the welcome to the Huggingface community :) Looking forward to contributing what I can :) GPT2: @LysandreJik, @patrickvonplaten Longformer, Reformer: @patrickvonplaten
12-29-2020 10:18:11
12-29-2020 10:18:11
Hey @asigalov61, The folder should be under `examples/research_projects` => so at `examples/research_projects/music_transformers`. There you are very free to add the files you want. If you want people to use your code, we suggest adding nice & readable code with a well-thought-out API, and a nice README.md that explains how to use your code and that also shows a nice use case.<|||||>Got it! Thank you for the advice 🙂 Do I have to use Huggingface transformers to post in that dir? Or can I also use my own implementations? I also mostly work in Google Colabs, so is it ok to post colabs? Is the API a requirement to post? I do not always convert colabs to Python, so I need to know this to figure out what to give priority to. Thank you.<|||||>> Do I have to use Huggingface transformers to post in that dir? Or can I also use my own implementations?

It should be based on Hugging Face transformers. 
If it's just a notebook - it might make more sense to just add it here: https://github.com/huggingface/transformers/tree/master/notebooks#community-notebooks<|||||>I was unable to translate my implementation to Huggingface transformers code because the docs and examples were unclear, so if someone can help me do that, I would really appreciate it. Thank you so much.<|||||>Just in case, here is the direct link to the Google Colab, because I am not sure how to PR it (lol)... https://github.com/asigalov61/transformers/blob/master/examples/music_transformers/Music_Reformer_TPU_Edition.ipynb<|||||>Hi! Could you let us know which docs/instructions were unclear? What were you trying to do, how can we help out? Thanks.<|||||>Certainly. First of all, I could not find any clear and exact info on how to work with custom text datasets. You mostly provide examples for datasets from your library but not much else. For example, I could not find info on how to do basic things like loading a custom txt file or how to easily tokenize it to be compatible with huggingface implementations. Another example would be the lack of complete notebooks/code. Like in the Reformer notebook by Peter, this one: notebooks/PyTorch_Reformer.ipynb at master · patrickvonplaten/notebooks (github.com)<https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb> - there is not a single word about SentencePiece, which was used to create the Crime and Punishment tokenizer model. Also, I could not find a single example for GPT2 models + text, also in a basic and easy to use/try format. Your docs/examples are probably ok if one invests enough time and effort (or if one has pre-existing knowledge and experience with your implementations), but without doing so, the examples/docs are not sufficient for beginners/newcomers due to a rather steep learning curve and the effort/time required. Friendly IMHO, as I do appreciate your work and your efforts regardless. To be very specific and relevant to the subject at hand: I need to know how to process and tokenize a simple line-by-line txt file that would work with Peter's Reformer example. And also, I wanted to ask if you guys support Google Colab TPUs in any way, as Reformer would train slower on GPUs? I hope this makes sense. Thank you for your help/time and understanding. Alex<|||||>Thank you for your feedback, there are indeed some aspects of the documentation which are lacking and on which we are actively working. Regarding what you mention, maybe these links can help you: > how to work with custom text datasets. We actually have an entire page dedicated to that aspect: [custom datasets](https://huggingface.co/transformers/custom_datasets.html?highlight=custom%20text) > Another example would be the lack of complete notebooks/code. 
Like in the Reformer notebook [...] there is not a single word about SentencePiece which was used to create Crime and Punishment tokenizer model. Indeed! If I may point you to other notebooks, the first of our official notebooks (hosted on this repository, see the `notebooks` folder at the root) is on training a tokenizer: [01-training-tokenizers](https://github.com/huggingface/transformers/blob/master/notebooks/01-training-tokenizers.ipynb). It is also visible from our [notebook documentation page](https://huggingface.co/transformers/notebooks.html). Training a tokenizer isn't done by the Transformers library in itself, but by the [Tokenizers library](https://github.com/huggingface/tokenizers). I invite you to check their [documentation](https://huggingface.co/docs/tokenizers/python/latest/quicktour.html) which contains a lot of information regarding training tokenizers. > Also, I could not find a single example for GPT2 models + text, also in a basic and easy to use/try format. If I may point you to the following parts of the documentation: - The [GPT-2 reference](https://huggingface.co/transformers/model_doc/gpt2.html) contains several snippets on how to to use GPT-2 models with text - The [generation utils](https://huggingface.co/transformers/internal/generation_utils.html#utilities-for-generation) showcase how to leverage GPT-2 to generate text. - Checking the documentation regarding the [generate](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#generation) method would probably be of interest as it has an identical API on all models - even if the code examples do not showcase GPT-2 directly. - Several [GPT-2-based notebooks](https://huggingface.co/transformers/notebooks.html) are available on the notebooks page in the documentation - Finally, we focus on having an identical API between all models. Checking a guide on how to generate text has very good chances of working for any other models. The quickstart on generating text [available here](https://huggingface.co/transformers/task_summary.html#text-generation) could be of use to you, as you simply need to replace the identifier of the model checkpoint by the GPT-2 checkpoint you're interested in. > To be very specific and relevant to the subject at hand, I need to know how to process and tokenize a simple line-by-line txt file that would work with Peter's Reformer's example? And also, I wanted to ask if you guys support Google Colab TPUs in any way as Reformer would train slower on GPUs? See below for some pointers on how you can achieve this: - If you need to train a tokenizer, I invite you to check out the first notebook I mention: [01-training-tokenizers](https://github.com/huggingface/transformers/blob/master/notebooks/01-training-tokenizers.ipynb) - Once you have your tokenizer you can train your model by either loading your dataset in `datasets` as it is shown in Patrick's notebook (and is simpler! in that case you may be interested in [loading a dataset from a local file](https://huggingface.co/docs/datasets/loading_datasets.html#loading-from-local-files)), or you can load a text file as it is shown in the [custom datasets](https://huggingface.co/transformers/custom_datasets.html?highlight=custom%20text) documentation. Regarding TPU training, @patrickvonplaten can chime in about the Reformer especially. Thanks once again for your feedback.<|||||>@LysandreJik Thank you very much for the most detailed and helpful info. 
Much appreciated and I will definitely check it all out in a little bit as you have suggested quite a lot. This will be very useful to me and I am really looking forward to contributing to the huggingface community however I can :) Most sincerely, Alex<|||||>Great! We're looking forward to your contributions :) Let us know if we can help down the road.<|||||>@LysandreJik Thank you for the welcome and offer to help. Much appreciated. You can indeed help as I have run into problems pretty quickly... So I have spent a few hours trying to make Peter's Reformer colab work with my dataset but to no avail, unfortunately... Here is the colab: https://colab.research.google.com/drive/1R8jkADMi0vRDwaNTEz_XGQkZUsg_tm_p?usp=sharing No matter what I do or try, I get errors on training execution... I think I have loaded the dataset correctly but I most certainly can be mistaken...I know that Peter's colab works with default CP setup but I can't make it work just yet.... I saved the output/errors in the colab so that you (or anyone else can take a look) + I am attaching my dataset for you to check out... Now, I know that my dataset may not be perfect/compatible with Peter's implementation due to encoding and because it is a music dataset...so I am aware that it may not be that all straightforward in this particular case... @LysandreJik if you can help/suggest something here, I will really appreciate it as I really want to make it work for you guys... [Efficient-Virtuoso-Music-TXT-Dataset.zip](https://github.com/huggingface/transformers/files/5882917/Efficient-Virtuoso-Music-TXT-Dataset.zip) P.S. Two questions: 1) Can you enable Discussions on your repo...its a new GitHub feature and I think it would be a much better place for this kind of discussion/help/support questions??? Or if there is a place already that you prefer, we can move there with this... 2) Any news on the Performer implementations??? It is the latest and the greatest from Google and I already tried it with other's people implementations because it may be more suitable for music than Reformer + its brand new (like 6 month old)... Thank you for your time and responses. Alex. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
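To make the pointers above a bit more concrete for the line-by-line text file question, here is a minimal sketch using the standalone `tokenizers` library; the file and directory names are placeholders and the hyperparameters are arbitrary:

```python
# Sketch: train a byte-level BPE tokenizer on a plain text corpus with one example per line.
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus.txt"],                                    # placeholder path to the line-by-line file
    vocab_size=5000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("my_tokenizer")                          # writes vocab.json and merges.txt
```

The resulting files can then be loaded by the corresponding fast tokenizer classes in `transformers`, and the tokenized text fed to whatever training notebook is being adapted.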
transformers
9,336
closed
"RuntimeError: Input, output and indices must be on the current device" when trying to finetune MBart
### Environment info - Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10 - Tried `transformers` versions 4.1.1 (installed with pip) and 4.2.2 (installed from master branch of the repository) - Python version: 3.7 - PyTorch version: 1.7 - Tensorflow version: 2.4 - Number of available GPU: 2 (GeForce RTX 2080 Ti, with ~11GB of memory each) ### Information Model I am using (Bert, XLNet ...): MBart -> facebook/mbart-large-cc25 The problem arises when using: the official example scripts: (details below) The tasks I am working on is: my own task or dataset: (details below) I am fine-tuning MBart using my own dataset, using the `examples/seq2seq/finetune.sh` script. When I run it on a single GPU, I get a memory error, as one GPU has not enough memory to load the MBart model. When I try to distribute the model on two GPUs, I get a RuntimeError: `RuntimeError: Input, output and indices must be on the current device` ### To reproduce I am running the script in the following way: `CUDA_VISIBLE_DEVICES=0,1 transformers/examples/seq2seq/finetune.sh --model_name_or_path "facebook/mbart-large-cc25" --output_dir output --data_dir data --overwrite_output_dir --model_parallel --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --freeze_encoder --freeze_embeds --tgt_lang "en"` I have also tried: `CUDA_VISIBLE_DEVICES=0,1 transformers/examples/seq2seq/finetune.sh --model_name_or_path "facebook/mbart-large-cc25" --output_dir output --data_dir data --overwrite_output_dir --model_parallel --tgt_lang "en"` I also tried limiting the length of source and target sentences by trying several values for `--max_target_length` and `--max_source_length'`. In addition, I tried using more GPUs (up to 4). If I run `wc -l` on my `data` directory, I get: ``` 3004 data/test.source 3004 data/test.target 686623 data/train.source 686623 data/train.target 2999 data/val.source 2999 data/val.target ```
12-29-2020 09:44:42
12-29-2020 09:44:42
Hey @mespla, Thanks for your issue! I'm afraid at the moment, we're really unsure whether we want to keep supporting all the bash scripts in `examples/seq2seq`. In a couple of weeks, we plan on having a single concise training script for seq2seq models. cc @sgugger Also tagging @stas00, @patil-suraj in case you know a quick fix to this problem or have encountered this before as well.<|||||>> When I run it on a single GPU, I get a memory error, as one GPU has not enough memory to load the MBart model. When I try to distribute the model on two GPUs, I get a RuntimeError: RuntimeError: Input, output and indices must be on the current device Are you implying you've changed modeling_bart.py to support Model Parallelism? Surely that would explain that error. You probably switched the layers to different devices but not the inputs/indices. I'm currently in the process of studying t5 MP we already have and about to do the same for Bart - i.e. add MP to Bart and its sub-classes (so MBART is included). If you mean something else by " I try to distribute the model on two GPUs" please clarify what you mean. If you're just trying to use 2 GPUs to solve the problem of not being able to load even one batch onto a single GPU, then just using 2 gpus won't do any good. In fact what you did (your command line) takes even more memory, since it activates DataParallel which is less memory efficient than DistributedDataParallel. See README.md in that folder for how to run DDP. But fear not, have a look at these 2 possible solutions for you not being able to fit the model onto a single GPU: https://github.com/huggingface/transformers/issues/9311#issuecomment-751378696 and another one will join soon once DeepSpeed has been integrated. <|||||>oh, wait a sec, I have only now noticed you used `--model_parallel`. This flag currently would work only for t5 and gpt2 - as the only 2 models that have been ported to support MP. So trainer should assert if this flag is used and arch isn't supporting MP. This PR https://github.com/huggingface/transformers/pull/9347 adds this assert. And hopefully Bart will support MP soon as well. Until then try my suggestions in the comment above.
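Independently of the `--model_parallel` question, the error string itself (`Input, output and indices must be on the current device`) usually just means the input tensors live on a different device than the model weights. A minimal, hypothetical illustration of that fix (assumes a machine with at least one GPU):

```python
# Hypothetical sketch: keep inputs on the same device as the model to avoid the RuntimeError above.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25").to("cuda:0")
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")

batch = tokenizer(["Hello world"], return_tensors="pt")    # tensors are created on the CPU
batch = {k: v.to(model.device) for k, v in batch.items()}  # move them to the model's device
outputs = model(**batch, labels=batch["input_ids"])        # no device-mismatch RuntimeError
```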
transformers
9,335
closed
Data Loading as a Service
# 🚀 Not a Feature request, what am I here for then? Well, mainly I'd just like to get some feedback from fellow software engineers. Some of the frustrations I've experienced might not have been big issues at all or there could have been easy ways to get around them which I've failed to notice. Getting roasted can be a good way for us to identify obvious flaws in our thought processes that aren't so obvious from our own often tunneled point of view. ## Motivation Data loading in PyTorch requires the user to define the Dataset, collator, as well as a sampling strategy. I found it rather hard to stick to the framework when I have to deal with - Extremely large datasets that do not fit in system memory - The previous point + training with multiple processes and nodes For Language Modeling and Machine Translation, - We often have to deal with large text/CSV files that are multiple times the size of system memory. - Also running training on a single GPU is often slow to the point of frustration. - Then we have to duplicate the extremely large dataset across each machine. - To format the data we have into a PyTorch Dataset, we simply just do a hack where we wrap an IO stream into a PyTorch Dataset - Or preprocess the entire dataset beforehand to speed up data loading. - Sometimes we would keep an index table to "lookup" the position of a data entry in a file which can be slow if not using SSD to perform random access. Instead of trying to write code that fits into the format expected by PyTorch. I simply threw everything out the window and just made data loading a standalone service instead... ## Your contribution Well, I've uploaded my work on https://github.com/mingruimingrui/data-loader-as-a-service-demo. If you have taken your time to read up to this point, I would like to give you my gratitude as it had made me quite happy (^o^) I would also like to ask for you to leave some comments for me which can be any of the following. 1. Can you relate to the problems I've faced? 2. Which part of the Data Loading as a Service do you like? 3. Which part of the Data Loading as a Service do you not like or have problems agreeing with? 5. Any other comments would also be appreciated.
12-29-2020 09:43:08
12-29-2020 09:43:08
@lhoestq - this might be interesting for you! Any good tips from your side?<|||||>Interesting ! Cool features could be reading from s3, gcp etc. Also maybe memory mapping can help speed up things a bit. Streaming datasets this way is something we'd like to add in the `datasets` library at one point since we're seeing more and more common crawl scale datasets.<|||||>> Interesting ! Cool features could be reading from s3, gcp etc. > Also maybe memory mapping can help speed up things a bit. > > Streaming datasets this way is something we'd like to add in the `datasets` library at one point since we're seeing more and more common crawl scale datasets. This... is something I'd enjoy working on, even for free. But if you already have plans to do it, please don't hesitate to start (*´∇`)<|||||>@mingruimingrui this fits actually well in a pretty cool larger community project we have. Wanna send me your email by DM/email/LinkedIn and I invite you on our slack to chat a bit more about it? I’ll probably make the project open in early January when it’s more solidly defined but I can give you early access.<|||||>@thomwolf I would like that very much (✿◕‿◕)<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
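For comparison with the service-based approach above, the "wrap an IO stream in a Dataset" baseline the author mentions can be written as a streaming `IterableDataset`; this is only a sketch with a placeholder file name:

```python
# Sketch: stream a text corpus that does not fit in memory, one line at a time.
from torch.utils.data import IterableDataset, DataLoader

class TextLineDataset(IterableDataset):
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                yield line.rstrip("\n")

loader = DataLoader(TextLineDataset("train.txt"), batch_size=32)  # "train.txt" is a placeholder
for batch in loader:   # each batch is a list of raw lines, ready for tokenization
    pass
```

This keeps memory flat, but it leaves shuffling, sharding across workers/nodes, and random access unsolved, which is exactly the gap the data-loading service is meant to fill.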
transformers
9,334
closed
[Seq2Seq Templates] Correct some TF-serving errors and add gradient checkpointing to PT by default.
# What does this PR do? This PR improves the Seq2Seq model templates. Notably: - a too model-specific test is removed from PyTorch - gradient checkpointing is added to PyTorch - some tf-serving incompatible statements are removed
12-28-2020 15:15:40
12-28-2020 15:15:40
transformers
9,333
closed
TF Longformer has some graph compilation/execution issue
TF longformer has the following issues to make it 100% graph compilation/execution compliant. I succeed to fix most of the issues but two still remains: 1. The first issue starts at line [1762](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_tf_longformer.py#L1762). The test to know if the inputs needs to be padded prevent the graph to be compiled because `input_ids`, `position_ids` and `input_embeds` can be `None` at the end of the main branch. As a solution I propose to export the padding process (from line 1769 to 1786) outside the `if` as if `padding_len == 0` the calls to `tf.pad(...)` and `tf.concat(...)` will have no effect on the different inputs. 2. The second issue is at line [1527](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_tf_longformer.py#L1527). Here `all_global_attentions` can be either a tuple or `None` in a same execution because `is_global_attn` is not defined globally but during the execution. I don't know how to solve this one. As a first test you can run: ``` from transformers import TFLongformerModel model = TFLongformerModel.from_pretrained("lysandre/tiny-longformer-random", output_attentions=True, output_hidden_states=True) model.save("path") ``` Ping @patrickvonplaten the Longformer expert :)
12-28-2020 14:09:10
12-28-2020 14:09:10
Can we define `is_global_attn` in the config?<|||||>Or can we assume that `is_global_attn == output_attentions` ? The main issue here is that we cannot build a returned value that depends of a tensor that is created during the execution (same issue we had with `output_attentions` and `output_hidden_states` before we decide to take the config values in graph mode)<|||||>I think Longformer has an inherent design problem with TF serving. The variable `is_global_attn` is decided by the user at execution time and depends on the **values** (not just the shape) of `global_attention_mask`. `is_global_attn` is not a boolean to indicate whether the user wants to output the `attentions`, but whether the model will make use of `global_attention`. If TF serving only works when `is_global_attn` has to be known before execution time, then I guess the best option is to add a `config.use_global_attn_tf` that would default to `False`. Could we then add an assert statement that `is_global_attn == config.use_global_attn_tf` with a nice error message saying the in TF serving `config.use_global_attn_tf` has to be set according to the use case? For some more information on the logic, see https://huggingface.co/transformers/model_doc/longformer.html#longformer-self-attention <|||||>Regarding the 1. case: `input_ids`, `position_ids` and `input_embeds` can only be `None` if they have been `None` before entering the function. I don't fully understand your proposed solution, but I think this would be easy to discuss in a PR.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
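Regarding the first issue (moving the padding logic out of the `if`), the underlying claim is that padding by zero elements is a no-op, so the `tf.pad`/`tf.concat` calls can run unconditionally. A tiny stand-alone check, as an illustration only and not the library code:

```python
# Zero-length padding leaves the tensor unchanged, so it is safe to run it unconditionally.
import tensorflow as tf

input_ids = tf.constant([[1, 2, 3, 4]])
padding_len = 0
padded = tf.pad(input_ids, paddings=[[0, 0], [0, padding_len]], constant_values=0)
print(bool(tf.reduce_all(padded == input_ids)))  # True
```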
transformers
9,332
closed
block sparse bert
I got following error while running example usage from https://huggingface.co/madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1 Do I need specific torch or transformers setup? Thanks in advance! Some weights of BertModel were not initialized from the model checkpoint at madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1 and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Traceback (most recent call last): File "test.py", line 6, in <module> tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1" File "/workspace/transformers/src/transformers/pipelines.py", line 3231, in pipeline framework = framework or get_framework(model) File "/workspace/transformers/src/transformers/pipelines.py", line 107, in get_framework model = AutoModel.from_pretrained(model, revision=revision) File "/workspace/transformers/src/transformers/models/auto/modeling_auto.py", line 698, in from_pretrained pretrained_model_name_or_path, *model_args, config=config, **kwargs File "/workspace/transformers/src/transformers/modeling_utils.py", line 1156, in from_pretrained model.__class__.__name__, "\n\t".join(error_msgs) RuntimeError: Error(s) in loading state_dict for BertModel: size mismatch for bert.encoder.layer.0.attention.self.query.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([256, 768]). size mismatch for bert.encoder.layer.0.attention.self.query.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for bert.encoder.layer.0.attention.self.key.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([256, 768]). size mismatch for bert.encoder.layer.0.attention.self.key.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for bert.encoder.layer.0.attention.self.value.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([256, 768]).
12-28-2020 13:14:52
12-28-2020 13:14:52
I can reproduce your error. @madlag, running the following code:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1",
    tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1"
)

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})
print(predictions)
```

results in the above error. Any ideas on how to fix it?<|||||>It looks like there is a bug with the "ampere optimized" models I uploaded; thank you for your feedback, I will check what is happening. Right now I would advise you to use the non-ampere ones (like madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1), the ampere version is not really good at this time. I am working on this; there should be some new, better and faster models soon, non-ampere optimized ones first, then ampere optimized a bit later. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
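Until the ampere-optimized checkpoints are fixed, the workaround suggested above is simply to point the same pipeline at one of the non-ampere checkpoints, for example:

```python
from transformers import pipeline

# Same example as above, using the non-ampere checkpoint recommended in this thread.
qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1",
    tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1",
)
predictions = qa_pipeline({
    "context": "Frédéric François Chopin (1810-1849) was a Polish composer and virtuoso pianist.",
    "question": "Who was Frédéric Chopin?",
})
print(predictions)
```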
transformers
9,331
closed
[WIP] Temp work on pipelines.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
12-28-2020 13:04:27
12-28-2020 13:04:27
transformers
9,330
closed
Fail to reload tokenizer from save_pretrained method
Hi, to reproduce: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') tokenizer.save_pretrained(".") tokenizer = AutoTokenizer.from_pretrained(".") ``` with error msg: ``` file ./config.json not found Traceback (most recent call last): File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/configuration_utils.py", line 389, in get_config_dict local_files_only=local_files_only, File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/file_utils.py", line 1015, in cached_path raise EnvironmentError("file {} not found".format(url_or_filename)) OSError: file ./config.json not found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/models/auto/tokenization_auto.py", line 337, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/configuration_utils.py", line 401, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for '.'. Make sure that: - '.' is a correct model identifier listed on 'https://huggingface.co/models' - or '.' is the correct path to a directory containing a config.json file ``` Thanks. transformers:4.1.0 tokenizers: @mfuntowicz
12-28-2020 12:24:23
12-28-2020 12:24:23
Hi @jc-hou, The `Auto*` classes require the `config.json` (which is saved when you save the model) file to find the correct model/tokenizer class for loading the model/tokenizer. To directly load the tokenizer without the model use the specific tokenizer class, in this case, `BertTokenizer`.<|||||>Hi, thanks. I understand.
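For completeness, both ways around this follow directly from the explanation above; a small sketch (the directory path is a placeholder):

```python
from transformers import AutoTokenizer, BertTokenizer, BertConfig

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("./saved_tokenizer")

# Option 1: skip the Auto* lookup and use the concrete tokenizer class directly.
tokenizer = BertTokenizer.from_pretrained("./saved_tokenizer")

# Option 2: also save a config.json next to the tokenizer files so AutoTokenizer can resolve the class.
BertConfig.from_pretrained("bert-base-uncased").save_pretrained("./saved_tokenizer")
tokenizer = AutoTokenizer.from_pretrained("./saved_tokenizer")
```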
transformers
9,329
closed
how to checkpoint all the validation scores in huggingface trainer
Hi, I want to find the best model according to the evaluation score. Could you please give me more info on how I can checkpoint all evaluation scores at each step of training in order to find the best model? Thanks.
12-28-2020 09:42:26
12-28-2020 09:42:26
Interested in this as well. I did not find a solution in the blog post ["How to monitor both train and validation metrics at the same step?"](https://discuss.huggingface.co/t/how-to-monitor-both-train-and-validation-metrics-at-the-same-step/1301).<|||||>> Hi, I want to find the best model according to the evaluation score. Could you please give me more info on how I can checkpoint all evaluation scores at each step of training in order to find the best model? Thanks.

I think I figured it out:
```diff
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=100,
++  evaluation_strategy='steps',
)
```<|||||>> Interested in this as well. I did not find a solution in the blog post ["How to monitor both train and validation metrics at the same step?"](https://discuss.huggingface.co/t/how-to-monitor-both-train-and-validation-metrics-at-the-same-step/1301).

> Hi, I want to find the best model according to the evaluation score. Could you please give me more info on how I can checkpoint all evaluation scores at each step of training in order to find the best model? Thanks.

Correspondingly, I put:
```diff
# https://huggingface.co/transformers/training.html
#metric = load_metric('glue', 'mrpc')
def compute_metrics(p):#: EvalPrediction
# def compute_metrics(p):
    preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
    # preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1)
    preds = np.argmax(preds, axis=1)
    # if data_args.task_name is not None:
    #     result = metric.compute(predictions=preds, references=p.label_ids)
    #     if len(result) > 1:
    #         result["combined_score"] = np.mean(list(result.values())).item()
    #     return result
    # elif is_regression:
    #     return {"mse": ((preds - p.label_ids) ** 2).mean().item()}
    # else:
    return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}

trainer = Trainer(
    model=model,                    # the instantiated 🤗 Transformers model to be trained
    args=training_args,             # training arguments, defined above
    train_dataset=train_dataset,    # training dataset
    eval_dataset=val_dataset,       # evaluation dataset
++  compute_metrics = compute_metrics
)

trainer.train()
```
and the resulting output is:
![image](https://user-images.githubusercontent.com/16505983/103236031-a23b5080-4911-11eb-9e38-7da8aaa86c5a.png)
<|||||> - However, in terms of `Accuracy`, I am not sure whether it is computed on the training dataset or the validation dataset.<|||||>`Accuracy` here is for the validation dataset.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
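Building on the snippets above, recent versions of `Trainer` can also track and reload the best checkpoint automatically; a sketch of the relevant arguments (names as in transformers 4.x):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="steps",       # run evaluation (and compute_metrics) every eval_steps
    eval_steps=100,
    save_steps=100,
    load_best_model_at_end=True,       # reload the checkpoint with the best metric when training ends
    metric_for_best_model="accuracy",  # must match a key returned by compute_metrics
    greater_is_better=True,
)
```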
transformers
9,328
closed
expected str, bytes or os.PathLike object, not NoneType
## Environment info - `transformers` version: 4.1.1 - Platform: Darwin-18.6.0-x86_64-i386-64bit - Python version: 3.7.7 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help tokenizers: @mfuntowicz Trainer: @sgugger --> ## Information Model I am using (Bert, XLNet ...): I don't know The problem arises when using: * [ ] the official example scripts: (give details below) The tasks I am working on is: * [ ] my own task or dataset: (give details below) I was trying to use this for further transfer learning. ## To reproduce Steps to reproduce the behavior(the snippet I used): ``` import deepchem as dc import tensorflow as tf from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("seyonec/SMILES_tokenized_PubChem_shard00_160k") model = AutoModelForMaskedLM.from_pretrained("seyonec/SMILES_tokenized_PubChem_shard00_160k") ``` Then I got this error message: "TypeError: expected str, bytes or os.PathLike object, not NoneType". I appreciate any help/suggestions! Thanks very much.
12-28-2020 08:57:17
12-28-2020 08:57:17
Hey @hjzhang1018, Thanks for your bug report. Could you try to run your command again - I think it should be fixed now: https://huggingface.co/seyonec/SMILES_tokenized_PubChem_shard00_160k/commit/7ef67531cfe96d0e2aa3ae913352c8e9a8c1df4f<|||||>@seyonechithrananda I added some files to your repo here: https://huggingface.co/seyonec/SMILES_tokenized_PubChem_shard00_160k/commit/7ef67531cfe96d0e2aa3ae913352c8e9a8c1df4f - feel free to take a look and see whether this is OK for you. It might be possible that other models require this change as well. <|||||>@hjzhang1018 @patrickvonplaten Hi all, thanks for the fix! I believe the reason this may be happening is because the tokenizer we use is custom (a subclass of BertTokenizer) and thus we run into the issue of fitting directly with AutoTokenizer. We have a PR for a tutorial in the DeepChem [library](https://github.com/deepchem/deepchem/pull/2302), which demonstrates how to call our subclass instead of using AutoTokenizer. If you refer to Part 23 hopefully that is of use! Docs: https://deepchem.readthedocs.io/en/latest/api_reference/tokenizers.html<|||||>Link to SmilesTokenizer class which these models utilize: https://github.com/deepchem/deepchem/blob/master/deepchem/feat/smiles_tokenizer.py#L39-L282<|||||>Just read the changes, it looks like @patrickvonplaten directly converted the vocab.txt file sufficient for BertTokenizer into the vocab.json format necessary for RoBERTa tokenizers, which should run smoothly. I will try this out once I get more time but this fix should work. Thanks a lot for the quick fix!<|||||>Is there a way to add this change to the other models with the 'SmilesTokenizer', @patrickvonplaten? Thanks again for the support.<|||||>@patrickvonplaten Thank you so much for your help! Now this worked! @seyonechithrananda Thank you for the explanation. I'll read the tutorials carefully. Very useful!<|||||>> Is there a way to add this change to the other models with the 'SmilesTokenizer', @patrickvonplaten? Thanks again for the support. Yes on simply has to create an empty 'merges.txt' file and create `vocab.json` from vocab.txt<|||||>Hi, I have the same issues here, I have a custom roberta model and I am using https://github.com/UKPLab/sentence-transformers. Here is the full detail of my problem: https://github.com/UKPLab/sentence-transformers/issues/658 The output after training from this sentence transformers yield files that doesn't contain vocab.json or vocab.txt. But I have a file called `unigram.json` and it looks something like this: ![DC144136-AEAA-48CA-8CC0-D919D37FD6FF_4_5005_c](https://user-images.githubusercontent.com/75713031/103332724-1c044300-4aa6-11eb-860d-799ddd794139.jpeg) ``` { "unk_id": 0, "vocab": [ [ "<unk>", 0.0 ], [ "<sep>", 0.0 ], [ "<pad>", 0.0 ], .... ] } ``` I also faced this TypeError, the same as the title of this issue, when trying to use AutoTokenizer<|||||>In short every Roberta-like Tokenizer requires two files: 1) One merges.txt file. This file describes the BPE algorithm (which letters are merged in which order) 2) One vocab.json file. This file describes the vocabulary. 
To get an idea of how the format of these files should be, I'd recommend taking a look at some of the files in `roberta-base` here: https://huggingface.co/roberta-base/tree/main @seyonechithrananda, I think in your case you don't need a merges.txt file because of the small vocabulary and because there are no words, just tokens. @hjzhang1018 If the other library (UKPLab/sentence-transformers) uses the same format for loading/saving files that we do, then the file should be renamed to `vocab.json` and should have a different format (check out the format here: https://huggingface.co/roberta-base/blob/main/vocab.json).<|||||>So after I call the `sentence-transformers` save, which gives me back 6 files:
1. I will need to rename `unigram.json` to `vocab.json`
2. Change the format of `unigram.json` to follow the `vocab.json` structure
3. Create an empty `merges.txt` file

And currently my `unigram.json` contains a word weight:
```
{
  "unk_id": 0,
  "vocab": [
    [ "<unk>", 0.0 ],
    [ "<sep>", 0.0 ],
    [ "<pad>", 0.0 ],
    [ "<cls>", 0.0 ],
    [ "<mask>", 0.0 ],
    [ ",", -3.1215689182281494 ],
    [ ".", -3.642984628677368 ],
    [ "a", -4.921720027923584 ],
    ......
  ]
}
```
Do I just ignore all the weights and create a new file `vocab.json` with this format?
```
{
  "<unk>": 0,
  "<sep>": 1,
  "<pad>": 2,
  "<cls>": 3,
  "<mask>": 4,
  ",": 5,
  ".": 6,
  "a": 7,
  .......
}
```<|||||>I started getting this error only about an hour ago, without any changes on my side, in my old Colab notebook. There must be some version change in the Colab env that triggers this error. Any ideas what it might be? <|||||>@lenyabloko could you open a new issue with the issue you're facing and how to reproduce it? Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
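For other models hitting the same issue, the vocab.txt to vocab.json conversion described above is mechanical; a rough sketch, using the file names discussed in this thread:

```python
# Convert a BERT-style vocab.txt into the vocab.json + empty merges.txt pair expected here.
import json

with open("vocab.txt", encoding="utf-8") as f:
    vocab = {token.rstrip("\n"): idx for idx, token in enumerate(f)}

with open("vocab.json", "w", encoding="utf-8") as f:
    json.dump(vocab, f, ensure_ascii=False)

open("merges.txt", "w").close()  # no BPE merges: each vocabulary entry is already a full token
```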
transformers
9,327
closed
No module named 'transformers.modeling_albert'
- Platform: Colab - Python version: - PyTorch version (GPU?):GPU - Tensorflow version (GPU?):GPU - Using GPU in script?:Yes examples/token-classification: @stefan-it --> The problem arises when using: * [ ] the official example scripts: I'm following the tutorial "23_Transfer_Learning_With_ChemBERTa_Transformers_Pt_2.ipynb" to reproduce the results. However I got this error message " No module named 'transformers.modeling_albert". I cannot figure out the reason. The following is the snippet I used: ``` !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem import deepchem deepchem.__version__ from rdkit import Chem !git clone https://github.com/NVIDIA/apex !cd /content/apex !pip install -v --no-cache-dir /content/apex !pip install transformers !pip install git+https://github.com/seyonechithrananda/simpletransformers.git@pip !pip install wandb !cd .. !git clone https://github.com/seyonechithrananda/bert-loves-chemistry.git %cd /content/bert-loves-chemistry import os import numpy as np import pandas as pd from typing import List # import molnet loaders from deepchem from deepchem.molnet import load_bbbp, load_clearance, load_clintox, load_delaney, load_hiv, load_qm7, load_tox21 from rdkit import Chem # import MolNet dataloder from bert-loves-chemistry fork from utils.molnet_dataloader import load_molnet_dataset, write_molnet_dataset_for_chemprop tasks, (train_df, valid_df, test_df), transformers = load_molnet_dataset("clintox", tasks_wanted=None) from simpletransformers.classification import ClassificationModel import logging logging.basicConfig(level=logging.INFO) transformers_logger = logging.getLogger("transformers") transformers_logger.setLevel(logging.WARNING) ``` Any suggestions and help are appreciated! Thank you!
12-28-2020 07:23:06
12-28-2020 07:23:06
Hey @hjzhang1018, This does not seem to be a bug in Transformers, but rather in `seyonechithrananda/simpletransformers.git` so I'm not sure here is the correct place to post the issue. I think one has to change the line `from transformers.modeling_albert import ....` to `from transformers.models.albert.modeling_albert import ...` in the respective repo.
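Concretely, the import-path change mentioned above looks like this (using `AlbertModel` only as an example class name):

```python
# transformers 3.x style (module path removed in 4.x):
#   from transformers.modeling_albert import AlbertModel
# transformers 4.x style:
from transformers.models.albert.modeling_albert import AlbertModel
# or, more simply, via the top-level namespace:
from transformers import AlbertModel
```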
transformers
9,326
closed
Issue with 'char_to_token()' function of DistilBertTokenizerFast
## Environment info

- `transformers` version: 4.0.1
- Platform: Google Colab
- Python version: 3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: NA

### Who can help:
**tokenizers: @mfuntowicz**

## Information

I am using DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') to tokenize the SQuAD 2.0 train and validation datasets. The problem arises when using the code snippet below to add token positions (start and end position), taken from https://huggingface.co/transformers/custom_datasets.html:

```python
def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))
        # if None, the answer passage has been truncated
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        if end_positions[-1] is None:
            end_positions[-1] = tokenizer.model_max_length
    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})

add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)
```

The task I am working on is:
* Training a model on SQuAD 2.0 using the code given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0

## To reproduce

Steps to reproduce the behavior:
1. Follow the steps given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0 and then verify the start and end position outcome using the code snippet shown under Expected behavior below.

## Expected behavior:

- Start and end positions are defined using the above code snippet and provided as training/validation data to the model, but the end position is not derived correctly due to some issue with the char_to_token() function, which is used to find the end position.
- Please find below a snippet showing that the answer obtained from the start and end positions after tokenization does not match the actual answer.
- So the training data which is being fed to the model after tokenization is incorrect.

```python
idx=8
print(f'Actual context: {train_contexts[idx]}')
print(f'Actual question: {train_questions[idx]}')
print(f"Actual answer: {train_answers[idx]['text']}")
start_position=train_encodings.char_to_token(idx,train_answers[idx]['answer_start'])
end_position =train_encodings.char_to_token(idx,train_answers[idx]['answer_end'])
print(f"Answer after tokenization: {tokenizer.convert_ids_to_tokens(train_encodings['input_ids'][idx][start_position:end_position])}")
```

OUTPUT:

**Actual context:** Beyoncé Giselle Knowles-Carter (/biːˈjɒnseɪ/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress.
Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father, Mathew Knowles, the group became one of the world's best-selling girl groups of all time. Their hiatus saw the release of Beyoncé's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles "Crazy in Love" and "Baby Boy".

**Actual question:** When did Beyoncé rise to fame?

**Actual answer:** late 1990s

**Answer after tokenization:** ['late', '1990s', 'as', 'lead', 'singer', 'of', 'r', '&', 'b', 'girl', '-', 'group', 'destiny', "'", 's', 'child', '.', 'managed', 'by', 'her', 'father', ',', 'mathew', 'knowles', ',', 'the', 'group', 'became', 'one', 'of', 'the', 'world', "'", 's', 'best', '-', 'selling', 'girl', 'groups', 'of', 'all', 'time', '.', 'their', 'hiatus', 'saw', 'the', 'release', 'of', 'beyonce', "'", 's', 'debut', 'album', ',', 'dangerously', 'in', 'love', '(', '2003', ')', ',', 'which', 'established', 'her', 'as', 'a', 'solo', 'artist', 'worldwide', ',', 'earned', 'five', 'grammy', 'awards', 'and', 'featured', 'the', 'billboard', 'hot', '100', 'number', '-', 'one', 'singles', '"', 'crazy', 'in', 'love', '"', 'and', '"', 'baby', 'boy', '"', '.', '[SEP]', 'when', 'did', 'beyonce', 'rise', 'to', 'fame', '?', '[SEP]', followed only by '[PAD]' tokens up to the padded sequence length]
12-28-2020 06:53:50
12-28-2020 06:53:50
Hey @PremalMatalia, Could you please provide a copy/paste ready code-snippet that can be used to reproduce the error. By copy/past ready code snippet I mean something like: ```python from transformers import DistilBertTokenizerFast tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') # ... add all the necessary code here to be able to reproduce your error ``` . Thanks! <|||||>Hello Patrick, Please find entire code starting from SQuAD 2.0 training data download to encoding to adding start and end position as below: ```python !pip install wget !pip install transformers==4.0.1 import wget import json from pathlib import Path import os import json from transformers import DistilBertTokenizerFast,TFDistilBertForQuestionAnswering import tensorflow as tf # Import training data !mkdir squad train_source = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json' train_dest = 'squad/train-v2.0.json' wget.download(train_source,train_dest) ## Function to extract context, questions and answers def read_squad(path,dataset='train'): contexts = [] questions = [] question_ids = [] answers = [] blank_counter = 0 append_flag = False with open(path) as f: data = json.load(f) ## Loop over entire dataset for article_id in range(len(data['data'])): paragraphs = data['data'][article_id]['paragraphs'] ## Loop over all the paragraphs for paragraph in paragraphs: context = paragraph['context'] qas = paragraph['qas'] ## Loop over Questions and Answers for qa in qas: append_flag=False question = qa['question'] question_id = qa['id'] ## Select 1st answer if answers are available if qa['answers']: answer = qa['answers'][0] append_flag = True ## Append contexts and questions in a list and answers in a list as dictionary contexts.append(context) questions.append(question) question_ids.append(question_id) answers.append(answer) return contexts, questions, question_ids, answers train_contexts, train_questions,_, train_answers = read_squad('squad/train-v2.0.json') ## Function to update answer_start and answer_end def add_end_idx(answers, contexts): ''' Description: This function is to find out character position at which the answer ends in the passage. 
Also corrects answer start and end position if the SQuAD answers are off by one or two characters Input: List of all answers, List of all contexts Output: Updated list with answer end position ''' for answer, context in zip(answers, contexts): # Your code here if answer['answer_start'] is None: answer['answer_end'] = None else: answer_text = answer['text'] answer_start = answer['answer_start'] answer_end = len(answer_text) + answer_start #Sometimes answers are off by a character or two if context[answer_start:answer_end] == answer['text']: answer['answer_end'] = answer_end # If the answer text is off by 1 character elif context[answer_start-1:answer_end-1] == answer_text: answer['answer_start'] = answer_start - 1 answer['answer_end'] = answer_end - 1 # If the answer text is off by 2 characters elif context[answer_start-2:answer_end-2] == answer_text: answer['answer_start'] = answer_start - 2 answer['answer_end'] = answer_end - 2 add_end_idx(train_answers, train_contexts) ## Tokenize training data tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True, return_offsets_mapping=True, return_overflowing_tokens=True) ## Find out start_position and end_position in encoded dataset def add_token_positions(encodings, answers): start_positions = [] end_positions = [] for idx in range(len(answers)): start_positions.append(encodings.char_to_token(idx, answers[idx]['answer_start'])) if answers[idx]['answer_end'] is None: end_positions.append(encodings.char_to_token(idx, answers[idx]['answer_end'])) else: end_positions.append(encodings.char_to_token(idx, answers[idx]['answer_end'] - 1)) #if None, the answer passage has been truncated due to words > 512 so setting last position as 511 if start_positions[-1] is None: start_positions[-1] = tokenizer.model_max_length-1 if end_positions[-1] is None: end_positions[-1] = tokenizer.model_max_length-1 encodings.update({'start_positions': start_positions, 'end_positions': end_positions}) add_token_positions(train_encodings, train_answers) ## Validate answers based on start_position and end_position with actual answer for some random index idx=8 print(f'Actual context: {train_contexts[idx]}') print(f'Actual question: {train_questions[idx]}') print(f"Actual answer: {train_answers[idx]['text']}") start_position=train_encodings.char_to_token(idx,train_answers[idx]['answer_start']) end_position =train_encodings.char_to_token(idx,train_answers[idx]['answer_end']) ## ******This shows how start_position and end_position derived by using char_to_token() function is not correct****** print(f"Answer after tokenization: {tokenizer.convert_ids_to_tokens(train_encodings['input_ids'][idx][start_position:end_position])}") ```<|||||>Completely agree with @PremalMatalia. The problem is with - start_positions.append(encodings.char_to_token(i, answers[i]['answer_start'])) We are getting None where as we should have got start token position. char_to_token is not able to convert from string position to token position.<|||||>Thanks @PremalMatalia - I think I can reproduce. The PR attached below should fix the problem. Can you check it again with the proposed fix?<|||||>Thanks @patrickvonplaten for quick action. If I understand correctly, fix has been merged to original char_to_token() function? If yes, we can directly use the same function without any changes in code from myside. 
Is that correct?<|||||>No, the `char_to_token()` function was always correct (it's actually a Rust function from the tokenizers library, used through Python bindings). The function was simply being used incorrectly, so I updated the docs.<|||||>Since PyTorch removed SAVE_STATE_WARNING, installing transformers==4.0.1 (as pinned in the snippet above) now raises an error. I use transformers>=4.5 and it works
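For reference, a minimal sketch of the usage described by the documentation update: `char_to_token` expects a character index that falls *inside* the answer (so the end uses `answer_end - 1`), and the returned end token index is inclusive. The example sentence and variable names below are illustrative, not taken from the thread.

```python
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

context = "Beyonce rose to fame in the late 1990s as lead singer of Destiny's Child."
question = "When did Beyonce rise to fame?"
answer_text = "late 1990s"
answer_start = context.index(answer_text)        # first character of the answer
answer_end = answer_start + len(answer_text)     # exclusive character end

# Context first, question second -- matching the snippet above.
enc = tokenizer([context], [question], truncation=True, padding=True)

# Map character positions to token positions; the last answer character is at
# answer_end - 1, and the returned end token index is inclusive.
start_tok = enc.char_to_token(0, answer_start)
end_tok = enc.char_to_token(0, answer_end - 1)

print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0][start_tok : end_tok + 1]))
# -> ['late', '1990s']
```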
transformers
9,325
closed
Add FAVOR+ / Performer attention
## What does this PR do? Adds support for the Performer / FAVOR+ attention mechanism, as described in the paper "Rethinking Attention with Performers" by Choromanski et al., 2020. Fixes #7675. ## How is it implemented? Since Performer attention can be an unbiased estimator of traditional softmax attention, and pretrained models can be finetuned to work with it, the general consensus in the discussion on #7675 was that it should not be implemented as a single separate Transformer model. Ideally, we want all or most models in the transformers library to be able to use Performer attention. In view of this, I've implemented the feature by creating three new classes: `PerformerAttention`, `TFPerformerAttention`, and `PerformerAttentionConfig`. These are implemented in the files `modeling_performer_attention.py`, `modeling_tf_performer_attention.py`, and `configuration_performer_attention.py` respectively in `src/transformers`. Models are marked as supporting Performer attention by adding the `@supports_performer_attention` class decorator to the corresponding config class. This decorator adds the `attention_type: str` and `performer_attention_config: Optional[Union[dict, PerformerAttentionConfig]]` attributes to the config class, and also adds some boilerplate code to the class's `to_dict()` method to make sure JSON serialization works properly. It also registers the class so that the user can get a full list of Performer attention-supporting models with the function `performer_supporting_models_and_configs()`. This isn't quite enough to get Performer attention to work for a new model, though. Adding Performer support to a model is inherently a somewhat tedious process, but I've tried to make it less tedious by implementing a `@init_performer_attention()` function decorator which can be added to the `__init__` method on the immediate parent of an attention module within a model— this will initialize either the model's own softmax attention module, or a `PerformerAttention` module, depending on how `attention_type` is set. You can see how this is implemented in `performer_attention_utils.py`. This is all that you need to do for some models, although others will need a bit of extra work due to idiosyncracies in their implementation. I've already added Performer support to the following models: DistilBERT, BERT, RoBERTa, ELECTRA, LayoutLM, and TAPAS (in both PyTorch and TensorFlow). My hope is that other contributors will add support to other models relatively quickly. Unit tests can be found in `test_performer_attention.py`. They do an exhaustive grid search of the enum and boolean config options and make sure that none of the 4.6k+ combinations causes a crash or a shape mismatch, and also make sure that the PyTorch and TensorFlow implementations have the same output, within numerical error, under all configurations. ## Rough edges While I added extensive docstrings to `PerformerAttention` and `PerformerAttentionConfig`, which can be used to generate documentation, I haven't actually made the documentation files themselves. That will have to be left to another contributor, or to my future self— although honestly I've put quite a lot of time into this PR and would like to get on to other projects, so I would really appreciate it if someone else did it. 
`PerformerAttention` supports using a custom CUDA kernel from the `fast_transformers` library to implement causally masked attention, although I have never actually been able to test this functionality because I don't have root access to the GPU server I use and therefore can't install NVCC. I'm hoping a reviewer could do that— it's a relatively straightforward feature so if there are any bugs in it it should be pretty easy to fix. Also, the current code throws some odd linter errors which I haven't been able to figure out how to resolve and which don't seem to be consequential. Something about the code in RoBERTA, LayoutLM, etc. that is marked as being copied from BERT not matching the BERT code exactly. If a reviewer could figure out how to silence that error that would be greatly appreciated. ## Who can review? @patrickvonplaten commented on #7675 and seemed excited about the PR, so I think he would be a good reviewer for this.
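To make the mechanism concrete, here is a minimal, self-contained sketch of the FAVOR+ softmax-kernel estimator from the Choromanski et al. paper (bidirectional case, no masking). It is written independently of the `PerformerAttention` class above, so names, shapes and the handling of the random projection are illustrative rather than the exact code in this PR, and it omits the stabilization tricks and kernel options the class exposes.

```python
import torch

def softmax_kernel(x, projection, eps=1e-6):
    """Positive random features phi(x) approximating the softmax kernel (FAVOR+)."""
    d = x.shape[-1]
    m = projection.shape[0]
    x = x * d ** -0.25                              # absorb the 1/sqrt(d) attention temperature
    wx = x @ projection.T                           # (..., L, m)
    x_sq = (x ** 2).sum(dim=-1, keepdim=True) / 2
    return torch.exp(wx - x_sq) / m ** 0.5 + eps    # phi(q)^T phi(k) ~ exp(q.k / sqrt(d))

def favor_attention(q, k, v, projection):
    """Bidirectional linear attention: O(L * m * d) time/memory instead of O(L^2 * d)."""
    q_p, k_p = softmax_kernel(q, projection), softmax_kernel(k, projection)
    kv = k_p.transpose(-2, -1) @ v                                      # (..., m, d_v)
    normalizer = q_p @ k_p.sum(dim=-2, keepdim=True).transpose(-2, -1)  # (..., L, 1), softmax denominator
    return (q_p @ kv) / normalizer

# toy usage: one head, L = 128 tokens, head dim 64, m = 256 random features
q, k, v = (torch.randn(1, 128, 64) for _ in range(3))
projection = torch.randn(256, 64)   # the paper draws orthogonal rows and redraws them periodically
out = favor_attention(q, k, v, projection)   # (1, 128, 64)
```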
12-28-2020 06:13:18
12-28-2020 06:13:18
@norabelrose Great job! Thanks for letting me know about this :)<|||||>@norabelrose - you've done an amazing job here! Having Performer in PyTorch is a huge contribution. I completely understand that you've already invested a lot of time in making this PR and we're happy to complete your PR!<|||||>also pinging @TevenLeScao here. In terms of next steps, I think we should do the following (We're happy to take over those tasks @norabelrose :-)) : - Check that the added PyTorch and Tensorflow Performer self-attention yields identical results as the flax version: Compare Bert model to this Performer BertFlax model: https://github.com/huggingface/transformers/pull/8358 - Fine-tune some pre-trained weighs to be compatible with Performer attention (ideally Bert or DistilBert)<|||||>Now, the painful discussion on how to integrate Performer. **Context**: Performer attention is special in the sense that it is fully compatible with a pre-existing model architecture and does not require any weights to be different from normal attention. This means that a `bert-base-cased` model does not require any changes in its architecture to use Performer's attention. The only change will be how the weights are used to compute the self-attention layer output. One can easily see this on the original code base: https://github.com/google-research/google-research/tree/master/performer/fast_attention/jax#jax-variant-of-favor where only the attention function has to be changed to `make_fast_softmax_attention` with no changes required to the parameters dict. This is a huge argument for simply making Performer's self-attention available to all models by changing their respective `modeling_....py` file. **My opinion on the integration into Transformers** Nevertheless, I'm in favor of implementing Performer **only** in a stand-alone file (at least at first), a.k.a. `PerformerModel` or maye in this case `PerformerBertModel`, which is **different** from the current version of the PR. I've the following arguments: - It's the standard in Transformers to add a new model for a new attention function. We've done the same for Longformer even though the Longformer attention could have been added for each model. It's easier for users to navigate between models, *e.g.* Performer will have its own model page vs. some docstring in utils. - It's actually not that easy to convert an existing BERT-like model to a Performer-BERT model. E.g. le'ts say we integrate Performer attention into `modeling_bert.py`. If one wants to convert the model to performer attention, the user would have to manually copy the positional embeddings (which are limited to 512 in Bert) to be as long as 64K+. We could write a convert function for this, but this convert function would probably be different for each model. - Performer cannot support all the functionalities of Bert. This means if we integrate Performer into Bert, a ```BertModel.from_pretrained(...., is_performer=True)``` model will not have all the functionalities that a Bert model will have, such as `output_attentions=True`, `is_decoder=True`, `is_encoder_decoder=True` -> Performer never creates the complete attention_mask so the `ouput_attentions=True` functionality gets lost, Performer does not support Encoder-Decoder out-of-the-box without requiring more if-else clauses. This will necessarily lead to many issues and some `if self.is_performer` code in BERT which I don't want to do. - Performer is still a very novel feature that is still somewhat experimental IMO. 
If Performer really takes off, we can always integrate the Performer attention more deeply into the library as proposed in this PR. The `modeling_bert.py` code is now used by 100K+ people, so I want to be very very careful with changes to this code especially. It's just a safer option to have a standalone Performer model in the beginning IMO. - I don't really think that users are interested to be able to use Performer Attention for all models. I think the models of interest will be `Bert` (`DistilBERT`), `GPT2`, `T5`, and `Bart`. Some models will never be used with Performer, such as Reformer, XLNet, Transfo-XL, Longformer, ConvBert, Routing Transformer, LED. I'd be thrilled you hear your opinions here @norabelrose, @sgugger, @LysandreJik, and @thomwolf<|||||>Thanks for this amazing contribution! I think long document classification and summarisation tasks are an important use case for this so having performer attention for some representative models in those scenarios would be fantastic. Personally I am looking forward to using performer attention with Roberta sequence and token classification models, but I understand not every model can get performer support right away so it would be great to also have a few examples on how we could add performer attention to other models ourselves, if possible. Really excited about this, thanks so much!<|||||>@patrickvonplaten Thank you for the thoughtful feedback. I understand your concerns about building Performer attention right into existing models like BERT. On the other hand, as @onclue mentioned, having only one model that supports Performer attention would really restrict the usefulness of the feature. It seems like there should be some "compromise" option here. What if we just added simple Performer-supporting subclasses to a few different models, something like this: ``` @supports_performer_attention class PerformerBertConfig(BertConfig): pass ``` and have `PerformerBertModel` be a subclass of `BertModel` that uses the following module for its attention mechanism: ``` class PerformerBertAttention(BertAttention): @init_performer_attention_bertlike(BertSelfAttention) def __init__(self, config): super().__init__(self, config) ``` And then the same process could be done for RoBERTa, DistilBERT, GPT-2, etc. I recognize that trying to add Performer support to "all" models is sort of silly and wouldn't work, but there are quite a few models that would benefit from it. It would also be nice if `PerformerAttention` and `PerformerAttentionConfig` remained public APIs, as they are in this PR, so that users could just take the attention mechanism and drop it into whatever custom model they want.<|||||>Thanks for your answer @norabelrose! I understand your point. The goal should definitely to support all the "highly" used models: DistilBERT, BERT, RoBERTa, T5, Bart I think we need to dive a bit deeper into the PR and play around with Performer to see how to best integrate the model, but in general the philosophy of the library has always been: - Model files should be kept as independent from each other as possible - Readability is more important than the drawback of duplicated code -> so we don't mind it too much if we duplicate Performer code across 5 or so model files - We try to minimize "magic" internal functionalities that are hard to understand when first seeing the code to a minimum meaning that we're not a huge fan of function decorators for important functionalities in general. 
But we'll have to dive deeper into the PR to get a better understanding here - sorry for being so slow here! Also, is there already a model that has successfully been fine-tuned to "long" inputs?<|||||>@norabelrose Where do the keys, queries & values come from when calling the Performer attention? def call(self, query, key, value, mask=None, head_mask=None, output_attentions=False): The call method of the Bert attention its replacing (i guess) takes the hidden states instead and then calculates q,k,v; How does it work from a coding perspective that it can have different call inputs? & Should i just feed in the hidden states 3x for q, k, v when using performer attention? <|||||>To get the TFPerformerAttention working I had to had to apply three fixes: - Swap all shape calls for shape_list - Add mask = tf.reshape(mask, shape=(shape_list(k_prime)[0], shape_list(k_prime)[2])) in compute_attention_with_projected_queries_and_keys due to problems with the extended attn mask - Remove the reshapes in _finalize_attention_output, as we need the shape to stay in [..., num_heads, dim_per_head]-like shape perhaps it helps sb else // @norabelrose can correct me if im doing sth wrong<|||||>Hello, Amazing work @norabelrose! I have been trying your performer implementation. I have copied your attention implementation ```PerformerAttention``` and have replaced that attention with the normal self-attention in Mobilebert. I have tracked some metrics with respect to other implementations. I have seen that the memory consumption on 512 tokens long it consume the same memory that the normal self attention. And it is also the same fast. I have logged the metrics with Wandb: https://wandb.ai/gaceladri/new_berts/reports/Memory-and-speed-comparison--Vmlldzo0NDA4MTI Does that makes sense? I have seen in Long Range Arena https://arxiv.org/abs/2011.04006 that it is 1.2x faster with 1k tokens but I have not tried with that long. The point where I am confused is with the memory consumption. At shorter values, the attention mechanism, being linear with respect to sequence length, not should be consuming less memory?<|||||>@norabelrose I tried the `TFPerformerAttention` with some minor adaptions and it works fine during training. I must say it is a very nice implementation 👍 However, when I train my model, I save the weights at each checkpoint, and I quantize it into `model.pb` as well as into a `TF-Lite` model. When loading all the models again (from saved weights, quantized and tensorflow lite), the output of the model with loaded weights differ from the rest. Any idea why this is the case? <|||||>@gcuder Would you mind sharing your code? I have been getting speed & memory improvements but the TFPerformer doesn't really converge... <|||||>What are the plans for this MR @patrickvonplaten ?<|||||>Based on @norabelrose great work, I set up a fork with the performer as a separate model at [https://github.com/Muennighoff/transformers](https://github.com/Muennighoff/transformers). I removed the decorators but kept the separate performer attention config. For now I only added Distilbert, the question being whether we should add new performer_xyz folders for each model or fit them in one performer folder. It can just be used as `from transformers import DistilBertPerformerModel, DistilBertPerformerConfig` `configuration = DistilBertPerformerConfig()` `model = DistilBertPerformerModel(configuration)` Here's an example notebook comparing the distilbert perf/trans performance on seq. 
classification: [https://colab.research.google.com/drive/1o8ioYUIvvIol7PXrDguftQtHyCpyDSJL#scrollTo=JGxH15LIN66M](https://colab.research.google.com/drive/1o8ioYUIvvIol7PXrDguftQtHyCpyDSJL#scrollTo=JGxH15LIN66M) perhaps it can help us bring this forward? @marrrcin @patrickvonplaten <|||||>Hey guys, sorry I don't really have the bandwidth to take a closer look here at the moment, but it's definitely on my ToDo List! One thing that would be extremely useful would be to have a script that shows how a pretrained model such as `distilbert` can be extended to a its performer version and subsequently be fine-tuned for long-range sequence modeling. @TevenLeScao ran some initial experiments and didn't find the fine-tuning to be that easy...<|||||>Can it be made compatible with T5? As far as I know Performer and relative attention together is an open research question.<|||||>How about this? If I understand correctly Performer calculates `Q' * (K' * V)` instead of `softmax(Q * K) * V` (Q: queries, K: keys, V: values, *: matmul). T5 calculates `softmax(Q * K + B) * V` (B: relative positional biases). A new kind of model that initializes most of its weights from T5 could calculate `(softmax(Q * K) + B') * V = (Q' * K' + B') * V = Q' * (K' * V) + B' * V`. This way at least the first term can be calculated with FAVOR+ and the second term is much smaller/faster to calculate even if its complexity is quadratic. `B'` could be initialized in a way that on average the activations in the training set remain unchanged. We loose backward compatibility so more finetuning is necessary.<|||||>> How about this? If I understand correctly Performer calculates `Q' * (K' * V)` instead of `softmax(Q * K) * V` (Q: queries, K: keys, V: values, *: matmul). T5 calculates `softmax(Q * K + B) * V` (B: relative positional biases). I could calculate `(Q' * K' + B') * V` but then I would not gain much from using FAVOR+. But if I calculate `Q' * (K' * V) + B' * V` then at least the first term can be calculated with FAVOR+ and the second term is much smaller/faster to calculate even if it's quadratic complexity. `B'` can be initialized with `B` and finetuned. that's an interesting idea! I will try to add T5 to the repo I set up so we can experiment with that; Currently for some reason only distilbert converges, while bert doesn't (https://colab.research.google.com/drive/1o8ioYUIvvIol7PXrDguftQtHyCpyDSJL#scrollTo=9pom5i196Bwg), so i need to figure that out first;; if anybody got bert to work let me know!<|||||>I refactored a lot of code & now works like a charm for me Here's a large notebook with TF & Torch comparisons for Perf/Trans on BERT, DBERT, T5 on short sequences (SST2 Dataset): https://colab.research.google.com/drive/1A9reiUZbA7DELuJ8keTo73sIQ4dJJVoT?usp=sharing To add a new model all one needs to do is: - Copy over the config, tf/torch modeling file of the model you want favor attention for to its own folder [here](https://github.com/Muennighoff/transformers/tree/master/src/transformers/models/performer) - Add a modelXPerformer Config (should be about the same as the current modelXperformer configs) - Init favor attention directly in the self attention module (i.e. 
one level lower than in the implementation of this PR) - this is preferrable as it scales better to models with different attention modules / linear layers - feed q,k,v,mask after their linear layers through the favor attention to get back the final attention output - Remove all the softmax business😃 - Last thing to adapt is the (extended) attention mask -- We want the shape to be (bs, 1, seq_len, 1) instead of (bs, 1, 1, seq_len) & we don't want to fill it with the -infs, i.e. just leave it as 1's & 0's, as we multiply it not add it - If you want to import it with `from transformers import ModelXPerformer` like the current performer models, rename the models & add them to the `__init__.py` in the performer folder & transformer parent folder Reg. T5: - Based on @marton-avrios proposal, I added T5 - it got a bit more complex due to the attn mask so I compute: - `Q' @ ((K' * M) @ V) + (B * M) @ V` & it converges🎉 - However, B is a matrix of (bs, n_heads, L, L) where L is the seq len so it scales quadratically with seq len, the exact problem performers try to solve ;_; - Removing the Rel Pos Encoding entirely surprisingly has about the same performance and is much faster (i got a 30% speedup for 1000 seq_len) - Still need to test it for EncDec model; Another option is just using bert's abs. pos. embeddings.; The pretrained model shouldnt be affected much Decoders: - Only worked with T5Encoder so far; The code for doing causal perf. attention should be there, so adding EncDec / GPT-2 like models should be pretty simple; If somebody wants to try it let me know!; We just need to be careful with the attn mask when we implement it A couple pointers if you dont get the desired performance: - The approximation error propagates through the layers, i.e. the more layers the worse it may get (A 6-layer bert performer gives me about as good a performance as a 12-layer one) - Try increasing the random features & the feature drawing interval - The more random features the better the softmax approximation, though it also gets more expensive - Make sure the masking is correct!<|||||>I'm sorry I haven't responded to mentions on this PR recently— I've been quite busy with an unrelated project. Thank you @Muennighoff for all your hard work extending/refining the PR! I just merged your changes.<|||||>wow, great work @Muennighoff ! Regarding T5: - despite still being quadratic complexity have you measured any speed/memory improvements compared to vanilla T5? Or significantly worse (better?) performance? In vanilla T5 there are 2 computations of quadratic complexity: `QK` and `B` but the calculation of `QK` plays a much bigger role in the overall speed of T5. Also it is calculated (and stored) in every layer while `B` is only calculated (and stored) in the first layer. - when you mention 30% speedup and same performance by removing relative positional attention is it a PerformerT5 compared to a PerformerT5 without it or a vanilla T5 compared to a vanilla T5 without it? Because if it is a PerformerT5 comparison then I think it means that it cannot learn meaningful weights for `B` anyway.<|||||>> wow, great work @Muennighoff ! > > Regarding T5: > > * despite still being quadratic complexity have you measured any speed/memory improvements compared to vanilla T5? Or significantly worse (better?) performance? In vanilla T5 there are 2 computations of quadratic complexity: `QK` and `B` but the calculation of `QK` plays a much bigger role in the overall speed of T5. 
Also it is calculated (and stored) in every layer while `B` is only calculated (and stored) in the first layer. > * when you mention 30% speedup and same performance by removing relative positional attention is it a PerformerT5 compared to a PerformerT5 without it or a vanilla T5 compared to a vanilla T5 without it? Because if it is a PerformerT5 comparison then I think it means that it cannot learn meaningful weights for `B` anyway. Check out the T5 Tensorflow experiments here: https://colab.research.google.com/drive/1A9reiUZbA7DELuJ8keTo73sIQ4dJJVoT?usp=sharing For each configuration (performer/transformer /// raw/pretrained) i ran it w/ & w/o pos bias, but only on short seq task of sst-2. For the 1000 seq len task i mentioned, i ran only performer encoders (w/ & w/o pos bias) on byte-text level classification from the LRA paper & they had the same performance within +- 1% accuracy. i'm not yet sure what to make of it; I think we could confirm that they are of no use after training a full t5 enc-dec model in performer mode & benchmarking that<|||||>I realized a mistake in my formulation which would explain why PerformerT5 could not make use of `B'`. Vanilla T5 calculates this: `inverse(D) * exp(Q * t(K) + B) * V` - ...which is equivalent to `softmax(Q * t(K) + B) * V` - ...where `D = diag(exp(Q * t(K) + B) * 1L)`, `t()` is the transpose function and `1L` is the all 1 vector of length L. I propose to calculate this: `inverse(D) * (exp(Q * t(K)) + B') * V` - ...which is equivalent to `inverse(D) * exp(Q * t(K)) * V + inverse(D) * B' * V` - ...where `D = diag(exp(Q * t(K)) * 1L + B' * 1L)` - ...and when finetuning `B'` should not be initialized from `B` but randomly instead. I propose to approximate it with: `inverse(D') * Q' * t(K') * V + inverse(D') * B' * V` - ...where `D' = diag(Q' * t(K') * 1L + B' * 1L)`. My previous (incorrect) approximation was: `inverse(D') * Q' * t(K') * V + B' * V` - ...which approximates `inverse(D) * exp(Q * t(K)) * V + B' * V` - ...and NOT `inverse(D) * exp(Q * t(K)) * V + inverse(D) * B' * V`.<|||||>> * inverse > I realized a mistake in my formulation which would explain why PerformerT5 could not make use of `B'`. > > Vanilla T5 calculates this: `inverse(D) * exp(Q * t(K) + B) * V` > > * ...which is equivalent to `softmax(Q * t(K) + B) * V` > * ...where `D = diag(exp(Q * t(K) + B) * 1L)`, `t()` is the transpose function and `1L` is the all 1 vector of length L. > > I propose to calculate this: `inverse(D) * (exp(Q * t(K)) + B') * V` > > * ...which is equivalent to `inverse(D) * exp(Q * t(K)) * V + inverse(D) * B' * V` > * ...where `D = diag(exp(Q * t(K)) * 1L + B' * 1L)` > * ...and when finetuning `B'` should not be initialized from `B` but randomly instead. > > I propose to approximate it with: `inverse(D') * Q' * t(K') * V + inverse(D') * B' * V` > > * ...where `D' = diag(Q' * t(K') * 1L + B' * 1L)`. > > My previous (incorrect) approximation was: `inverse(D') * Q' * t(K') * V + B' * V` > > * ...which approximates `inverse(D) * exp(Q * t(K)) * V + B' * V` > * ...and NOT `inverse(D) * exp(Q * t(K)) * V + inverse(D) * B' * V`. Yeah you're right the previous approximation wasn't correct; I also forgot to include it in D when doing the code experiments; We could try the approximation you propose. 
Another angle could be: Since `exp(Q @ t(K) + B) = exp(Q @ t(K)) * exp(B)` and `exp(Q @ t(K)) ~ ϕ(Q) @ t(ϕ(K))` i think we can do `exp(Q @ t(K) + B) ~ ϕ(Q) @ t(ϕ(K)) * exp(B) ` but rearranging `(ϕ(Q) @ t(ϕ(K)) * exp(B)) @ V ` to avoid calculating Q @ K first is a pain <|||||>Is this close? My teammates and I want to use performers in T5<|||||>I was just looking through the code and this is stuff of legends! Great work. In the T5 implementation, I noticed that performer attention forward method is called with position bias, yet it is not currently a valid parameter. Is that residual from the conversation about the above position bias conversations? EDIT: Ignore the above, I was looking at the wrong implementation of `PerformerAttention`<|||||>> I was just looking through the code and this is stuff of legends! Great work. > > In the T5 implementation, I noticed that performer attention forward method is called with position bias, yet it is not currently a valid parameter. Is that residual from the conversation about the above position bias conversations? > > EDIT: Ignore the above, I was looking at the wrong implementation of `PerformerAttention` I removed the position bias temporarily, as not using it at all worked best. I havn't tried @marton-avrios most recent idea though, so perhaps somebody might want to try it and report back. If you only need an Encoder T5, you should be able to work with what's there -- For Encoder-Decoder, The causal decoder is currently still prohibitively expensive due to the for loop & cumsum operation (@mymusise and me are working on it [here](https://github.com/mymusise/gpt2-quickly/issues/5)). Let us know if you get the decoder to perform! <|||||>I'm thinking about the position bias, and it doesn't seem like there's a good way to include it. What's been mentioned above seems correct, that the mathematical starting point is (Q'K'^T * B')V, where B' = e^B (elementwise) But, this can't be computed without computing Q'K'^T first, defeating the purpose. The alternative is to add some position encoding into each of Q' and K' (a la 'Attention Is All You Need'). I think this is the only / best way to do position bias in this context. That said, it would be getting kind of wonky / outside the spirit of the performers paper, so I'm not sure position bias should even be allowed in this PR. Do you all agree with this?<|||||>Hi, I have been trying to run finetuning with `T5PerformerForConditionalGeneration` using this pull request branch, and I have got few minor issues or questions I wanted to ask about. 1. Merge conflict comments were still left under `/src/transformers/__init__.py`, which is not a serious issue. 2. After getting the attention output from `PerformerAttention`, I had to add `unshape` call it to concat the head attentions and multiply to `W0` in `forward()` of `T5Attention`. I found original call to unshape was commented out since it included matmul of `V`. 3. Both in encoder and decoder, I was getting matrix multiplication exception by wrong dimension on the line when multiplying(in `PerformerAttention`) `mask` to `k_prime`. Was this the reason why @norabelrose mentioned T5 Decoders is not fully working yet? I am trying to fix this attention mask issue for the decoder, but for encoder case, is transposing the attention mask the right way to fix? > Decoders: > > Only worked with T5Encoder so far; The code for doing causal perf. 
attention should be there, so adding EncDec / GPT-2 like models should be pretty simple; If somebody wants to try it let me know!; We just need to be careful with the attn mask when we implement it<|||||>> Hi, I have been trying to run finetuning with `T5PerformerForConditionalGeneration` using this pull request branch, and I have got few minor issues or questions I wanted to ask about. > > 1. Merge conflict comments were still left under `/src/transformers/__init__.py`, which is not a serious issue. > 2. After getting the attention output from `PerformerAttention`, I had to add `unshape` call it to concat the head attentions and multiply to `W0` in `forward()` of `T5Attention`. I found original call to unshape was commented out since it included matmul of `V`. > 3. Both in encoder and decoder, I was getting matrix multiplication exception by wrong dimension on the line when multiplying(in `PerformerAttention`) `mask` to `k_prime`. Was this the reason why @norabelrose mentioned T5 Decoders is not fully working yet? I am trying to fix this attention mask issue for the decoder, but for encoder case, is transposing the attention mask the right way to fix? > > > Decoders: > > Only worked with T5Encoder so far; The code for doing causal perf. attention should be there, so adding EncDec / GPT-2 like models should be pretty simple; If somebody wants to try it let me know!; We just need to be careful with the attn mask when we implement it I think 1,2 & 3 are all fixed here: https://github.com/Muennighoff/transformers ; The masking for the decoder however may not yet work <|||||>> > Hi, I have been trying to run finetuning with `T5PerformerForConditionalGeneration` using this pull request branch, and I have got few minor issues or questions I wanted to ask about. > > > > 1. Merge conflict comments were still left under `/src/transformers/__init__.py`, which is not a serious issue. > > 2. After getting the attention output from `PerformerAttention`, I had to add `unshape` call it to concat the head attentions and multiply to `W0` in `forward()` of `T5Attention`. I found original call to unshape was commented out since it included matmul of `V`. > > 3. Both in encoder and decoder, I was getting matrix multiplication exception by wrong dimension on the line when multiplying(in `PerformerAttention`) `mask` to `k_prime`. Was this the reason why @norabelrose mentioned T5 Decoders is not fully working yet? I am trying to fix this attention mask issue for the decoder, but for encoder case, is transposing the attention mask the right way to fix? > > > > > Decoders: > > > Only worked with T5Encoder so far; The code for doing causal perf. attention should be there, so adding EncDec / GPT-2 like models should be pretty simple; If somebody wants to try it let me know!; We just need to be careful with the attn mask when we implement it > > I think 1,2 & 3 are all fixed here: https://github.com/Muennighoff/transformers ; > The masking for the decoder however may not yet work Thanks @Muennighoff, I ended up using your forked project and currently tyring to test `T5PerformerForConditionalGeneration`. I think finetuning seems to run fine with the `CausalDotProduct` from `pytorch-fast-transformer`, although I did not check the loss or verified anything yet. When you mentioned that the docoder might not yet work, did you mean that the loss or performance of the model is not there yet even though we can run it? 
Also, is there any reason why further fixes have not been merged into this pull request?<|||||>I've been in [this issue](https://github.com/mymusise/gpt2-quickly/issues/5) the past few days, and it seems like everything works on the Pytorch end now that `CausalDotProduct` is used (like @ice-americano was saying). Is the only remaining issue doing causal dot product in tensorflow?<|||||>@JamesDeAntonis Causal dot product in TensorFlow would require rewriting the CUDA C++ code on the fast-transformers end. That would be quite a bit of work and would probably require actually creating an entirely separate library, since the fast-transformers library is purely PyTorch. I don't think the code would belong in huggingface/transformers since my sense is that there is a general policy against including custom CUDA kernels directly in the transformers library itself. That's why I originally coded it so that `PerformerAttention` would just detect if fast-transformers is installed, and use it if it is. To be honest I'm hesitant to recommend merging this PR in its current form, mainly because of all the other fast linear attention algorithms that have come out in the past few months, such as the [Nystromformer](https://www.youtube.com/watch?v=m-zrcmRd7E4) and this [ostensibly improved variation](https://arxiv.org/abs/2103.02143) on the Performer random feature approach that came out a couple weeks ago. I'm all in favor of getting fast attention mechanisms out there in an easy to use format, it seems wrong to create multiple different copies of the big models (BERT, T5, GPT2, etc.), one for each new fast attention mechanism that comes out. It's not at all clear to me that the Performer will end up becoming the dominant fast attention mechanism— it does well on [the benchmarks](https://github.com/google-research/long-range-arena), but it's not the best, and the fact that it includes non-trainable orthogonal random matrices which you have to routinely redraw in order for the model to converge is definitely sub-optimal. My ideal solution, I think, would be to do broader refactoring of the huggingface/transformers codebase so that you could more easily plug in different attention mechanisms and reuse code, but that seems to directly go against the philosophy of the maintainers. So idk what the best solution is. It might be best to make a fork of the fast-transformers library itself, which was actually built to be an extensible framework for different types of fast attention/transformers, and just copy and paste most of the BERT/T5/GPT2/whatever implementations from this repo into that fork. Or maybe the fast-attention maintainers would actually be open to including implementations of those big models in their repo— I haven't asked.<|||||>>(1) That's why I originally coded it so that PerformerAttention would just detect if fast-transformers is installed, and use it if it is. I'm fine with this because my team uses pytorch. 
That said, I think this would make TF performer really slow (potentially not a big deal because it could still be memory efficient) >(2) because of all the other fast linear attention algorithms that have come out in the past few months, such as the Nystromformer and this ostensibly improved variation on the Performer random feature approach that came out a couple weeks ago >(3) more easily plug in different attention mechanisms and reuse code I agree that the ideal scenario for implementing efficient attn in huggingface/transformers is to refactor the codebase so that the attention computation always bottles up in one place. Then, we could have all attn algos in one place, implemented only once for plug-and-play on any model. I share your sense that they wouldn't go for that. (@patrickvonplaten is that right?) >(4) It might be best to make a fork of the fast-transformers library itself, which was actually built to be an extensible framework for different types of fast attention/transformers, and just copy and paste most of the BERT/T5/GPT2/whatever implementations from this repo into that fork. I might be on board, but wouldn't this make things equally messy? e.g. you would still have the issue of having to rewrite a whole bunch of code. There would be an improvement in having attention be bottled up in one place for easy efficient transformer plug-and-play, but also at a further annoyance of having all of it outside of hf<|||||>@JamesDeAntonis It looks like the BigBird linear attention model [just got merged](https://github.com/huggingface/transformers/pull/10183) and will be included in the next release. BigBird actually outperforms the Performer on the Long Range Arena benchmark, and it doesn't require repeatedly sampling orthogonal random features during training. Ironically, as the author of this pull request, I'd like to close the PR. I don't think this library should be cluttered with another linear attention mechanism that has no obvious benefits over BigBird. That said, I'm glad to see that others found my code useful and were able to fork it and use it for their own projects.<|||||>Thanks a lot for your efforts, @norabelrose . I am a bit sad that Performers won't make it in after all this work, but I understand the reasoning. Cheers!<|||||>Is it possible to use sequences > 512 tokens with pretrained Bert to see the effective improvement of performer?
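As a footnote to the decoder discussion above: the reason the causal case needs either a loop/`cumsum` or a custom kernel such as fast-transformers' `CausalDotProduct` is that each position may only attend to its prefix, so the shared `K'ᵀV` summary has to become a running prefix sum. A naive sketch follows (illustrative only; it materializes an `(L, m, d)` tensor, which is exactly the memory cost the custom CUDA kernel avoids):

```python
import torch

def causal_linear_attention(q_prime, k_prime, v, eps=1e-6):
    # q_prime, k_prime: (batch, heads, seq, m) feature maps; v: (batch, heads, seq, d)
    # Prefix sums over the sequence dimension play the role of the causal mask.
    kv = torch.einsum("bhsm,bhsd->bhsmd", k_prime, v).cumsum(dim=2)  # running sum of phi(k_j) v_j^T
    z = k_prime.cumsum(dim=2)                                        # running sum of phi(k_j)
    num = torch.einsum("bhsm,bhsmd->bhsd", q_prime, kv)
    den = torch.einsum("bhsm,bhsm->bhs", q_prime, z).unsqueeze(-1)
    return num / den.clamp(min=eps)

# e.g. q_prime = k_prime = torch.rand(2, 8, 128, 64); v = torch.randn(2, 8, 128, 32)
```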
transformers
9,324
closed
Music Transformers
# 🚀 Feature request Hello guys! Thanks for your amazing work on Transformers! This is very needed and appreciated :) I wanted to ask if it is possible to add a section/transformers dedicated specifically to Music. I searched GitHub and your models repo but I could not find even a single model/solution that would be suitable for music. NLP models are most capable when it comes to Music AI and I think it would be a great feature/section/branch to investigate/cover. ## Motivation OpenAI MuseNet and Google Music Transformer. Enough said I think. If you have never tried either, you have been really missing out. AFAIK MuseNet is built on a custom GPT2-like model/architecture. And Google used XLNet I think. ## Your contribution I was able to create a decent model/code/implementation of Music AI based on the GPT2 model/architecture. You are welcome to check it out here: https://github.com/asigalov61/Intelligent-VIRTUOSO I used the minGPT implementation to do it and it turned out quite capable and nice :) However, I do want to ask the following: 1) What is the best Hugging Face model/architecture you can recommend for Music AI applications? Please be specific and please give me a simple example of how to try it. This will be very much appreciated. I want to use Hugging Face Transformers, so whatever works, please let me know. I am attaching a sample text file so that you can see my encoding, but I can adjust easily to any specs/needs of Hugging Face Transformers. I have heard that BERT would be best at something like this but I may be mistaken... 2) Is there a nice Google Colab to try? I would prefer a simple working example to Python repos... 3) What would be the most optimal settings/hyperparameters you can recommend for GPT2 (right now I follow minGPT guidelines) and also what can you recommend to try for the most suitable Hugging Face Transformer? I really hope to hear constructive suggestions/advice because I want to learn and improve my skills and knowledge. Plus I love music almost as much as I love computers so I am quite passionate about both and would love to connect with others who are into Music and AI, if you guys exist... Thank you very much in advance for your time and responses. [TMIDI-TXT-Composition (13).txt](https://github.com/huggingface/transformers/files/5745869/TMIDI-TXT-Composition.13.txt)
12-28-2020 04:40:28
12-28-2020 04:40:28
Hey @asigalov61, I think applying `Transformers` to Music is a super cool idea! Regarding the best model to use for music composition, IMO it depends strongly on: - What is the input to the model? Do you input tokens or float vectors? - How long is the input? *e.g.* how many float vectors or tokens? GPT2 is limited to 1024 tokens / float vectors -> is this too short? - For generation (composition), I think only our `autoregressive models` make sense: https://huggingface.co/transformers/model_summary.html#autoregressive-models so mostly GPT2. For "classification" it would mostly be BERT. If you need very long inputs, it would be interesting to check-out ReformerLM: https://huggingface.co/google/reformer-enwik8 I bet people would be very interested in Transformers + Music. We've created a new examples folder structure for such projects, so feel free to open a PR to add a dir "music_transformers" here: https://github.com/huggingface/transformers/tree/master/examples/research_projects<|||||>Hey Patrick, Thank you for your help/guidance and for the welcome 🙂 I think I have created the proper PR for the new dir as you have suggested. Please check it and let me know if it is ok. I am new to PRs so it's still kinda difficult to do it right sometimes 🙂 What can I add there? Can I add my GPT2 implementation there? I should put it in a separate dirs there? Right? Regarding your questions for me: 1) I am sorta working with what is available so atm I just use existing implementations. So I usually use implementation's way of feeding the model. I.e. for my GPT2 implementation, I use the minGPT char-based approach which is painfully slow and inefficient. minGPT does not have BPE yet so I can't really improve it and do it properly as it is very complex for me and difficult. This is why I was very interested in your work cuz you guys provide a standardized and easy way to do it. So basically in my GPT2 implementation I simply feed it the text char tokens. I have attached the example of input/output in my original post. Check it out if you can, please. 2) I figured that GPT2 is most capable (OpenAI did the same thing with MuseNet). So I was wondering if you guys have a nice GPT2 version that is tuned to the limit. This would really help. Also I need better tokenizer but I do not know how to do it. So if you can help/give me specific pointers, I will really appreciate it. 3) I most certainly heard about the Reformer. And it would be super cool to try it. But again, I have no idea how to make it compatible with the text input/text tokens I use. So if you can help, this also will be much appreciated. Again, thank you for your advice. Most sincerely, Alex
<|||||>Hello! I'd be willing to contribute work in this space if anyone would like to collaborate. In my previous life I was a professional audio engineer, now I'm an enterprise AI systems architect. https://www.paulprae.com/<|||||>@praeducer Hey Paul! Thank you for responding to this thread. I would love to collab and create something based on hugginface implementations so if you can help, I would really appreciate it. Basically, huggingface docs are very convoluted and unclear to me atm so if you can create a working collab with GPT2 hugginface implementation, I can take it from there and add music parts to it. I need something similar to my own GPT2 implementation but based on huggingface so that we can add it here and contribute to their repo/library. This is what I have and this is what I need: https://github.com/asigalov61/Optimus-VIRTUOSO And my attempt to use huggingface implementation is posted above in the thread so check it out also. Thanks a lot. Looking forward to working together with like-minded people. Alex. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
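For the question about a simple working example: a minimal sketch of running one of the library's autoregressive models over a text-encoded score, in the spirit of the suggestion above. The prompt string and sampling settings are placeholders (not the actual TMIDI-TXT syntax), and for meaningful music output the checkpoint would first have to be fine-tuned on the chosen text encoding.

```python
from transformers import GPT2TokenizerFast, GPT2LMHeadModel

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")   # replace with a checkpoint fine-tuned on the music encoding

# Placeholder prompt in a simple text-based note encoding.
prompt = "note p60 d8 v90 note p64 d8 v90 note p67 d8 v90"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_length=256,                       # GPT-2's context is capped at 1024 tokens
    do_sample=True,
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output_ids[0]))
```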
transformers
9,323
closed
[T5 model parallel] implement input auto-relocation + lots of refactoring/cleanup
As I commented on in another incarnation of generalizing t5 model parallelism https://github.com/huggingface/transformers/pull/9316 so that it could be easily ported to other models I realized that it's quite unnecessary to try and remap inputs to specific devices where they will be needed in the future ahead of time. Since we have `forward` where we have access to the device of the parameters of that layer - we can completely automate the relocation of inputs to the correct devices just before `forward` is called. So this PR builds upon https://github.com/huggingface/transformers/pull/9316 and: * [x] creates `@model_parallel_inputs_to_device` decorator used for `forward`, which automatically takes any inputs and puts them on the same device as the parameters of that layer. This allowed a complete removal of most of the `.to()` juggling logic for inputs, which was quite complex and noisy. * [x] a lot of refactoring to make the MP as little invasive and noisy as possible, and fixing some small issues on the way. I have tested this with: ``` pyt -sv tests/test_modeling_t5.py -k parallel ``` Which I'm not sure covers all bases, but the above tests pass. @alexorona, please let me know what you think. And if you have real applications besides the great tests you wrote please see if it still works correctly. (It was so awesome having those tests in place! Thank you!) If it looks good and others support this proposal we can then look at doing the same for gpt2 and meanwhile I will look at bart. @patrickvonplaten, @LysandreJik
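For readers skimming the PR, a rough sketch of what such a `forward` decorator can look like; the actual implementation in this branch may differ in details (e.g. handling of nested containers and non-tensor arguments):

```python
import functools
import torch

def model_parallel_inputs_to_device(forward):
    """Move any tensor inputs of forward() to the device of this layer's parameters."""
    @functools.wraps(forward)
    def wrapper(self, *args, **kwargs):
        try:
            device = next(self.parameters(recurse=True)).device
        except StopIteration:
            return forward(self, *args, **kwargs)  # layer has no parameters, nothing to relocate
        args = tuple(a.to(device) if torch.is_tensor(a) else a for a in args)
        kwargs = {k: (v.to(device) if torch.is_tensor(v) else v) for k, v in kwargs.items()}
        return forward(self, *args, **kwargs)
    return wrapper
```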
12-28-2020 03:02:15
12-28-2020 03:02:15
I don't have an in-depth knowledge of our model parallelism features, so it would be great if @LysandreJik can take a look here as well. I think in general, I'm in favor of this PR. However, I'm not sure if a function decorator is better than just having two lines of ``` if self.is_parallel: # call map to device function ``` in the respective forward function. We've decided against using function decorators in Pytorch at multiple points (gradient checkpointing e.g.), so I'm not convinced it's the better option to do it here. Function decorators do reduce code readability quite a lot IMO.<|||||>I'm not sure how your suggestion would work since it needs to be generic, and once inside `forward` the function args are no longer generic. Remember, I'm trying to build a generic functionality that can work out of the box in any `transformers` model and not specific to t5. The other approach that doesn't need a decorator is to override `self.__call__` via `self.parallelize` to set to a variation of this wrapper. ``` def parallelize(self, device_map=None): $self.__call__ = model_parallel__call__ [...] def deparallelize(self): $self.__call__ = nn.Module.__call__ [...] ``` and: ``` def model_parallel__call__(self, *input, **kwargs): # get device of any of the params of this layer try: device = next(self.parameters(recurse=True)).device except StopIteration: device = None if device is not None: input = list(input) for i, v in enumerate(input): if v is not None: input[i] = v.to(device) input = tuple(input) for k in kwargs.keys(): if kwargs[k] is not None and torch.is_tensor(kwargs[k]): kwargs[k] = kwargs[k].to(device) return nn.Module.__call__(self, *input, **kwargs) ``` (or could save the original `self.__call__` to be more flexible and to allow for others to override this too) this in fact is even better since it will have 0 impact on non-MP functionality as this wrapper will be called only under MP.<|||||>This is great progress, @stas00! From my perspective, to create a general way of doing model parallelism, we need four things: * a format for `device_map` that can be used on any model * `device_map` and `model_parallel` need to be attributes on all models, probably by assigning them to `PreTrainedModel` * `parallelize()` and `deparallelize()` should be on all models, again probably by assigning them to `PreTrainedModel` * changes to the forward methods need to be abstracted if at all possible (this is by far the most challenging) This PR makes a lot of progress, the strongest of which is a potential abstraction/simplification of the changes to the forward method. Not sure if a decorator is the solution. @LysandreJik will have that insight when he's back. I like the suggestion by @patrickvonplaten that it's instead a two line implementation `if self.model_parallel` instead of a decorator. But the BIG thing is if most or all of the code in the forward method can be replaced with with something like: ``` if self.model_parallel: hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, head_mask, past_key_value = _call__mp(hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, head_mask, past_key_value) ``` If we can get that right, it might turn model parallelism from a day or weekend project per model into something that takes a few minutes. Much more scalable and sustainable. 
Supporting non-sequential GPUs could be more trouble than its worth -- not entirely sure on this, it's just my instincts. With the billion + parameter models that we're dealing with -- and all indications are that it's only getting bigger going forward -- it's pretty fair to say that most workflows in enterprise and research will be: 1. develop locally on a machine with one or maybe two GPUs on a small sized version of a model, and 2. train a final model on a cloud instance or cluster with multiple identical GPUs. Sequential hand-offs between GPUs will be the norm in cases like that, which I think are going to be most of them. The other thing I worry about is a challenge with PyTorch 1.5 and 1.6 model parallelism behavior. The seemingly redundant clauses and `set_device` statements are there to prevent PyTorch's inferential logic from moving modules or inputs around after `.to()` assignments have been called. It's very annoying. I don't know if it's fixed in 1.7. You'll notice that output layers like the `lm_head` are always on the first device instead of the last device. A more logical workflow would have the embedding layers on the first device and the output layers on the last device. I got that to work just fine in forward passes, but I must've tried 10 different ways to get it to behave in backprop before conceding that for whatever reason PyTorch's quantum device superposition just wouldn't allow it. So the output layers are on the same device as the embedding layers. You'd think that matters for load balance between GPUs, and it does -- for gpt2-xl. But since we're practically limited in most situations to PyTorch's 8 GPU per machine preference (inherited from CUDA), by the time you're at 3 billion parameters the embedding and `lm_head` layers are so small in comparison to the attention blocks that it doesn't matter that they're both on the first device, and a custom `device_map` solves the problem for cases where that matters. The implementation implies that there is an extra hand-off or two of a large tensor between GPUs, but I don't think having a perfectly optimal setup will save even 10% on training time. Happy to be proven wrong on this though. What will save a TON of time and $$$ though is deepspeed integration. I got t5-11b with 1024 tokens to train quickly on the new p4 instance AWS released last month with its epic 320 GB of GPU memory so I was like "ok fine whatever... that's pretty good". <|||||>That's awesome, @alexorona. Do continue to share your insights from the frontier! Let's wait for @LysandreJik to come back to plan ahead and meanwhile I will experiment with Bart. > The seemingly redundant clauses and set_device statements are there to prevent PyTorch's inferential logic from moving modules or inputs around after .to() assignments have been called. It's very annoying. I don't know if it's fixed in 1.7. Oh, so glad you flagged that. Would it be enough to run the existing parallel tests with pt-1.5, pt-1.6 to detect these failures? I'm developing on pt-nightly since rtx-30* work only there (well 1.7.1 should be usable too, but mainly waiting for cuda-11.2 support in pytorch, which is again pt-nightly - won't be in 1.7.x). But it means I can't use it with older pt versions. But since we have to support pt-1.4, I will then put `set_device` back as you had them originally. But this time let's add specific comments why there are there, otherwise someone like myself will think they are some left-overs from earlier experimentation and swipe them away. 
Actually, I think we should have a design document where we explain why this or that is done. Rather than make a lot of noise in the model files. A developer-oriented doc. The `set_device` was just one thing, right? Or have I naively nuked any other essentials? Thanks again!<|||||>@alexorona, one more question. If pt-1.7+ removes the need for jumping through hoops, as you're suggesting older versions have all kinds of issues, perhaps it'd be a reasonable approach to make MP in `transformers` require pt-1.7? If and when you get some time could you please test if what wasn't working in pt < 1.7 works in pt-1.7? And if not - perhaps we need to file some Issues with pytorch if there are bugs to be solved. Thank you.<|||||>@stas00 Will try to do so, but in the middle of moving so I don't think I'll get to this until the end of January at the soonest. The team would have to make the call about only support model parallelism for PyTorch >= 1.7.0 if it won't work on earlier versions. I would be very tempted to support that idea, but don't have enough usage information to know what the impact would be.<|||||>I guess once everybody is back next week we can start having some discussion with the HF team. Have an easy move!<|||||>@stas00 Yeah, should be able to get some input when everyone is back. In the meantime, I'm still not sure on the final form of the `device_map`. There are two issues left to work out: 1. Some models don't have decoder architectures 2. No ability to map embeddings and output layers (always on first device), which _might_ be just fine. I think most output layers and embeddings are going to be comparatively small to attention blocks going forward, but we should confirm that. We are allowing people to create custom a `device_map` that should enable them to get around any potential situations where the first device is becoming overloaded. To confirm, this looks good for decoder architectures: ``` device_map = { encoder: { 0: [0, 1, 2, 3, 4, 5], 1: [6, 7, 8, 9, 10, 11] }, decoder: { 2: [0, 1, 2], 3: [3, 4, 5] } } ``` Maybe we use the keys to map to the attribute? In gpt2, `self.h` contains the attention blocks, so: ``` device_map = { h: { 0: [0, 1, 2, 3, 4, 5], 1: [6, 7, 8, 9, 10, 11] } } ``` In trying to generalize `parallelize()`, we still need access to list of all modules. For example, in `GPT2LMHeadModel`, we would need to know: `self.lm_head`, `self.transformer.h`, `self.transformer.wte`, `self.transformer.wpe` and `self.transformer.ln_f`. <|||||>I haven't looked into gpt2, yet. t5 and bart are very similar structure-wise. We probably need to map out all the different archs `transformers` has and then generalize. What is emerging so far is that the device map might have various keys, none required, and each model architecture will have: 1. its required keys 2. its own default map generator - so that the user doesn't have to provide one and overtime it can be improved to have smarts to create a balanced map based on the "insider" information. So if some architectures need to explicitly manage the mapping of non-block/layers, rather than just assigning them by default on the "`main_device`", because they are significantly big, they could do that too. Otherwise, leave the `main_device` to all the "smallish-fish" and use the other devices for "the big fish" if that makes sense. The main advantage of this "lazy" approach is that there is less device-hopping and less code needed to match the hopping.<|||||>Yes, that's right. 
So it turns out the `self._modules` attribute has all of the modules. To move `parallelize()` to `PreTrainedModel`, I think all we need is a per-model `module_map` object to map between the `device_map` and the model placements. With a little work, we might be able to reduce making a model parallel to: 1. Adding a few lines of code in the forward method per your work 2. Modifying the validation function to check for errors in a custom `device_map` 3. Creating a `module_map` dictionary for that model and adding it to the `get_module_map()` function We can embed special placement rules where non-attention block modules need to be on the same device as another module by creating a tuple in `module_map['dependent_modules']`: ``` # Device map for GPT2LMHead. T5 would have 'encoder', 'decoder' as keys instead of 'h' and validate_device_map would # check to see if the device_map has the right keys. device_map = { 'h': { 0: [0, 1, 3, 4], 1: [5, 6, 7, 8], 2: [9, 10, 11, 12] } } class PreTrainedModel(): ... # Probably use get_model_map(), but just to make it simple: self.module_map = { 'h': self._modules['transformer'].h, 'embeddings': [ self._modules['transformer'].wte, self._modules['transformer'].wpe ], 'dependent_modules': [ ( self._modules['transformer'].ln_f], model._modules['transformer'].h[-1], ), ( self._modules['lm_head'], self._modules['transformer'].wte ) ] } def parallelize(self, device_map = None): self.device_map = device_map # validate_device_map extended to check for valid keys for model ... # Set all embeddings to first device if 'embeddings' in self.module_map: for layer in self.module_map['embeddings'].items(): layer.to(self.first_device) # Assign attention blocks to the appropriate device. for module_group, group_map in self.device_map.items(): for device, layers in group_map.items(): for layer in layers: self.module_map[module_group][layer].block_parallelize(f"cuda:{device}") # Some modules should always be on the same device as another module. We can express # this as a tuple pair where tuple[0] needs to be on tuple[1] if 'dependent_modules' in self.module_map: for i in self.module_map['dependent_modules']: i[0].to(i[1].device) ``` <|||||>All, awesome suggestions that should be looked at next once the current work has been merged. I'm going to wait implementing anything new, since there are already too many partial PRs that need to be carefully merged and rebased and once that is done we can do another round of generalization integrating your suggestions.<|||||>So if there is no objection, I will merge this one, and then start integrating with https://github.com/huggingface/transformers/pull/9384, which is ahead functionality-wise - so I want to sync the two, switching t5 to the improved version of MP backend. I will implement the suggestions in that new PR.<|||||>As we have discovered the original PR didn't make t5 work with trainer. I have just fixed that in the last commit here, bringing some goodies over from the Bart MP PR. 
So this now works: ``` export BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 200 --n_val 200 --n_test 200 --fp16 --save_steps 2 --model_parallel ``` But! while it's fine in the training stage, it's 10x slower on eval than w/o `--model_parallel`<|||||>Hello @stas00 kudos to all the hard work you do, especially around continuing the ambitious work around supporting parallelism. Interested in doing some inference with the t5-11b model variant. Can you provide some insights on how many gpus would be needed to achieve that? I tried this branch with 8xV100 (16gb) on GCE. All good while I created the model and called parallelize, but got a out of memory error on inference step when moving inputs to the first gpu device. Let me know if I have a wrong mental model about achieving this. Thanks again! <|||||>Thank you for the kind words, @kznmft! Please have a look at https://github.com/huggingface/transformers/pull/9765 which implements a very inefficient in my opinion but nevertheless working pipeline parallelism on t5, which should be superior to this naive implementation, speed-wise but it's not quite there yet. Please read the first post carefully for all the details. and you can see the follow up comments with the experiments that have been done. So 4x40gb A100s gpus weren't enough for t5-11b in initial experiments. But 5-6 of those probably should be enough. I finally got access to a machine with 4 gpus just now, so I'm going to start looking at implementing 2D parallelism - using Pipeline with DeepSpeed ZeRO-DP, so I will post news once I get something working. Subscribe to watch https://github.com/huggingface/transformers/pull/9765 and I most likely will update that PR with new info or a link to a new PR once I have something working. ----- > I tried this branch with 8xV100 (16gb) on GCE. > All good while I created the model and called parallelize, but got a out of memory error on inference step when moving inputs to the first gpu device. But you're not telling me the device map you were using. You need to spread out the layers over the 8 gpus, have you done it? unless you were relying on the default map which should spread things out. The problem is that it doesn't take into an account that gpu 0 is always overtaxed, so I'd always try a few layers less on the first gpu 0. And then watch nvidia-smi (and later we will have better tools) to see that you get each GPU getting a somewhat equal memory allocation. But if 4x40 couldn't fit it, I doubt that 8x16 will. Remember in t5-11b you have 45GB of params, plus optimizer states plus gradients. Also probably need to try to use a more lean optimizer, say Adam instead of AdamW which needs more memory. <|||||>too long. closing.
transformers
9,322
closed
Conda dependencies conflict with pip dependencies
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1, 3.3.1 - Platform: Windows 10, Anaconda - Python version: 3.8 ## Information I'm installing a package built on top of `transformers` v3 in an Anaconda environment. The package is not available on Anaconda Cloud so I have to install it via `pip`. According to [the best practice](https://www.anaconda.com/blog/using-pip-in-a-conda-environment), I try to install as many requirements as possible with `conda`, including `transformers`. However, it turns out that the conda dependencies conflict with pip dependencies for `transformers` so that pip would try to downgrade the conda-installed `tokenizers` package, which `transformers` depends on. The dependency information is as follows: <table style="width:100%"> <tr> <th>`transformers` version</th> <th>conda dependency</th> <th>pip dependency</th> </tr> <tr> <td>3.5.1</td> <td>tokenizers==0.9.4</td> <td>tokenizers==0.9.3</td> </tr> <tr> <td>3.3.1</td> <td>tokenizers==0.9.3</td> <td>tokenizers==0.8.1rc2</td> </tr> </table> I can't seem to find any resolution other than leaving the `transformers` installation to pip completely. Is there any other possible resolution? ### Who can help Maybe @mfuntowicz can help ## To reproduce Steps to reproduce the behavior: 1. Run "conda install transformers=3.5.1" or "conda install transformers=3.3.1" 2. Run "pip check" ## Expected behavior Make conda dependencies compatible with pip dependencies.
12-27-2020 23:59:30
12-27-2020 23:59:30
Hey @ZOUG, Thanks for the issue. @LysandreJik is on holiday at the moment, but I'm sure he's more than happy to take a look when he's back :-) <|||||>Hello! We have started officially maintaining the anaconda packages in version v4.0.0. Installing a version anterior to that one would result in you using the `transformers` version from another channel (such as `conda-forge`), which we do not maintain. Do you get the same error when installing `transformers` from our channel (on a more recent version)?<|||||>> Hello! We have started officially maintaining the anaconda packages in version v4.0.0. Installing a version anterior to that one would result in you using the `transformers` version from another channel (such as `conda-forge`), which we do not maintain. > > Do you get the same error when installing `transformers` from our channel (on a more recent version)? No, the error does not occur on the most recent version. The problem is that packages dependent on `transformers` may not be compatible with v4.x at the moment so that the error will still arise. It might be better to provide a pip installation package for v3.5.1 that is compatible with the `conda-forge` dependencies. In my case, I got lucky that the package that I need just released a new version today that is compatible with transformers v4.x.<|||||>I believe this is still the case in Docker-based environments (ex. Kaggle). I removed existing transformers and tokenizers, installed new ones (transformers 4.2.1 and tokenizers 0.9.4). In the code, it goes back to conda and complains about tokenizers being 0.9.3 ``` /opt/conda/lib/python3.7/site-packages/transformers/__init__.py in <module> 41 42 # Check the dependencies satisfy the minimal versions required. ---> 43 from . import dependency_versions_check 44 from .file_utils import ( 45 _BaseLazyModule, /opt/conda/lib/python3.7/site-packages/transformers/dependency_versions_check.py in <module> 39 continue # not required, check version only if installed 40 ---> 41 require_version_core(deps[pkg]) 42 else: 43 raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py") /opt/conda/lib/python3.7/site-packages/transformers/utils/versions.py in require_version_core(requirement) 92 """ require_version wrapper which emits a core-specific hint on failure """ 93 hint = "Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master" ---> 94 return require_version(requirement, hint) 95 96 /opt/conda/lib/python3.7/site-packages/transformers/utils/versions.py in require_version(requirement, hint) 85 if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)): 86 raise pkg_resources.VersionConflict( ---> 87 f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}" 88 ) 89 VersionConflict: tokenizers==0.9.4 is required for a normal functioning of this module, but found tokenizers==0.9.3. ``` Edit: found a work around to re-import modules: ``` import importlib, pkg_resources, tokenizers importlib.reload(pkg_resources) importlib.reload(tokenizers) ``` tqdm may also complain if 4.50 or later.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Issue still persists --------------------------------------------------------------------------- VersionConflict Traceback (most recent call last) <ipython-input-24-3b738e6ed358> in <module> ----> 1 from transformers import PreTrainedTokenizerFast /app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_) /app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _load_unlocked(spec) /app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _load_backward_compatible(spec) /app1/anaconda3/envs/praveen_tfu/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/__init__.py in <module> 41 42 # Check the dependencies satisfy the minimal versions required. ---> 43 from . import dependency_versions_check 44 from .file_utils import ( 45 _BaseLazyModule, /app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_) /app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _load_unlocked(spec) /app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _load_backward_compatible(spec) /app1/anaconda3/envs/praveen_tfu/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/dependency_versions_check.py in <module> 39 continue # not required, check version only if installed 40 ---> 41 require_version_core(deps[pkg]) 42 else: 43 raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py") /app1/anaconda3/envs/praveen_tfu/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/utils/versions.py in require_version_core(requirement) 92 """ require_version wrapper which emits a core-specific hint on failure """ 93 hint = "Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master" ---> 94 return require_version(requirement, hint) 95 96 /app1/anaconda3/envs/praveen_tfu/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/utils/versions.py in require_version(requirement, hint) 85 if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)): 86 raise pkg_resources.VersionConflict( ---> 87 f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}" 88 ) 89 VersionConflict: tokenizers==0.9.4 is required for a normal functioning of this module, but found tokenizers==0.11.6. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master
transformers
9,321
closed
Splitting texts longer than `tokenizer.max_length` into blocks of the same size
## Environment info `transformers-cli env` raises a ModuleNotFoundError, though I don't think it is relevant to my problem. - `transformers` version: 4.0.0 - Platform: Arch Linux x86_64 - Python version: 3.9.1 - CPU only ### Who can help It's probably a trivial tokenizer problem: @mfuntowicz using a pretrained bert: @LysandreJik ## Information I'm running the following successfully (exemplary for several models): ``` tokenizer = AutoTokenizer.from_pretrained("oliverguhr/german-sentiment-bert") model = AutoModelForSequenceClassification.from_pretrained("oliverguhr/german-sentiment-bert") inputs = tokenizer(text, return_tensors = "pt") proOrCon = self.model(**inputs) ``` Now I have several `text`s that produce more than 512 tokens. I tried to split the `inputs` manually by copying and modifying them, as well as by creating a dict in the same format, but apparently the class object stores additional information that is required and not easily accessible. I also tried built-in functions from the tokenizer: ``` inputs = tokenizer(text, return_tensors = "pt", max_length=512, stride=0, return_overflowing_tokens=True, truncation=True, padding=True) mapping = inputs.pop('overflow_to_sample_mapping') ``` But I don't understand how to use the mapping for the next iteration: it's just a tensor with as many entries as tokens, counting up from 0. I've looked at the documentation (@sgugger) here https://huggingface.co/transformers/internal/tokenization_utils.html and here https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer but the output format does not exactly match my results, since I don't get the overflowing tokens, just the mapping. I also looked at the flair library, since it already implements something similar for transformers, but their approach seems to be for another data format ( https://github.com/flairNLP/flair/blob/4d1bfec296ae8000268f8bbf62d71042e3714ace/flair/embeddings/token.py#L949 ). Can someone tell me what I am doing wrong? I just want to split the tokens into sizes that a bert model can handle (512), either as blocks or with a sliding window - I will have to test what works best. I didn't think it would be that hard, but I have already spent a whole day on this.
12-27-2020 23:35:48
12-27-2020 23:35:48
I think this notebook could help you: https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb you should check out the `def tokenizer(...)`, `def group_texts(...)` functions. I think they should help at what you want to achieve. <|||||>Thank you for the fast response @patrickvonplaten. I reviewed your link only to find out it was an input problem on my side that I did not see before. Sorry to bother you for that. Just in case anyone comes across a similar issue here is the solution I found to be working for me. ``` class german_bert_sentiment: """ Sentiment analyzer module based on a range of sources including twitter, facebook, product reviews https://huggingface.co/oliverguhr/german-sentiment-bert?text = Du+Arsch%21 """ def __init__(self, truncate=False): self.tokenizer = AutoTokenizer.from_pretrained("oliverguhr/german-sentiment-bert") self.model = AutoModelForSequenceClassification.from_pretrained("oliverguhr/german-sentiment-bert") self.truncate=truncate self.max_length=512 def analyze(self, text): averages=[] errors=[] inputs = self.tokenizer(text, return_tensors = "pt")#, max_length=512, stride=0, return_overflowing_tokens=True, truncation=True, padding=True) length=len(inputs['input_ids'][0]) while length>0: if length>self.max_length: next_inputs={k: (i[0][self.max_length:]).reshape(1,len(i[0][self.max_length:])) for k, i in inputs.items()} inputs={k: (i[0][:self.max_length]).reshape(1,len(i[0][:self.max_length])) for k, i in inputs.items()} else: next_inputs=False proOrCon = self.model(**inputs) weights = proOrCon[0].detach().numpy()[0] weights[2], weights[1] = weights[1], weights[2] weights = softmax(weights) average=np.average(np.linspace(1, -1, 3), weights = weights) averages.append(average) errors.append( np.sqrt(np.average(np.array(np.linspace(1, -1, 3)-average)**2, weights = weights)) ) #from IPython import embed; embed() if self.truncate: break if next_inputs: inputs=next_inputs else: break length=len(inputs['input_ids'][0]) average = np.average(averages, weights = 1./np.array(errors)**2) error = np.sqrt(1./np.sum(1./np.array(errors)**2)) return [average, error] ```
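For reference, a minimal sketch of the built-in chunking route the question originally asked about, assuming a fast tokenizer; the sample text and the final averaging of per-chunk scores are illustrative choices, not the only way to combine chunks:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("oliverguhr/german-sentiment-bert")
model = AutoModelForSequenceClassification.from_pretrained("oliverguhr/german-sentiment-bert")

text = "ein sehr langer Text ..."  # placeholder for a text longer than 512 tokens

enc = tokenizer(
    text,
    max_length=512,
    truncation=True,
    return_overflowing_tokens=True,  # one row of input_ids per 512-token chunk
    stride=0,                        # set > 0 for an overlapping sliding window
    padding=True,
    return_tensors="pt",
)
# maps each chunk back to the original sample; all zeros when a single text is passed
sample_mapping = enc.pop("overflow_to_sample_mapping")

with torch.no_grad():
    logits = model(**enc).logits             # shape: (num_chunks, num_labels)
scores = logits.softmax(dim=-1).mean(dim=0)  # average the chunk predictions
```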
transformers
9,320
closed
[Seq2SeqTrainer] Fix Typo
# What does this PR do? Fixes a bug when one does not want to use `generate()` to evaluate in Seq2SeqTrainer. This PR probably deserves a test, but leaving this for a future PR when @sgugger is back. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-27-2020 20:48:59
12-27-2020 20:48:59
transformers
9,319
closed
Some weights of AlbertForPreTraining were not initialized from the model checkpoint at albert-base-v2 and are newly initialized: ['sop_classifier.classifier.weight', 'sop_classifier.classifier.bias']
Albert uses a sentence-order prediction (SOP) loss to optimize its parameters, and I want to use it to score the coherence between two sentences. But when I use AlbertForPreTraining to load the albert-xxlarge-v2 checkpoint, it warns: _Some weights of AlbertForPreTraining were not initialized from the model checkpoint at albert-base-v2 and are newly initialized: ['sop_classifier.classifier.weight', 'sop_classifier.classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference._ I tried many times and the output is different on each run, which means the final linear classification layer has not been loaded and is just randomly initialized. How can I use the pretrained SOP classification head without fine-tuning? I'd appreciate a quick response.
12-27-2020 16:44:17
12-27-2020 16:44:17
I'm not sure we have a checkpoint with a fully trained SOP classification head. My best advice is to try out different models and see which one does not randomly initialize the weights for those layers.
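For anyone landing here, a minimal sketch of how the SOP head would be queried, with the caveat from the thread that the released checkpoints do not ship trained `sop_classifier` weights, so without fine-tuning these scores come from a randomly initialized head; the interpretation of the two logit indices is also an assumption:

```python
import torch
from transformers import AlbertTokenizer, AlbertForPreTraining

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert-base-v2")
model.eval()

# encode the sentence pair as one sequence with segment ids marking the two sentences
inputs = tokenizer("I went to the store.", "I bought some milk.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# sop_logits has shape (batch_size, 2): one class per sentence ordering
sop_probs = outputs.sop_logits.softmax(dim=-1)
print(sop_probs)
```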
transformers
9,318
closed
Failure when running the multimodal example
Hi, I tried to run the [multimodal example](https://github.com/huggingface/transformers/tree/master/examples/research_projects/mm-imdb). By running: ``` python run_mmimdb.py \ --data_dir ../dataset/ \ --model_name_or_path bert-base-uncased \ --output_dir ../output \ --do_train \ --do_eval \ --max_seq_len 512 \ --gradient_accumulation_steps 20 \ --num_image_embeds 3 \ --num_train_epochs 100 \ --patience 5 \ --overwrite_output_dir ``` I met the following error message: ``` 12/27/2020 16:01:33 - INFO - __main__ - ***** Running training ***** 12/27/2020 16:01:33 - INFO - __main__ - Num examples = 15513 12/27/2020 16:01:33 - INFO - __main__ - Num Epochs = 100 12/27/2020 16:01:33 - INFO - __main__ - Instantaneous batch size per GPU = 8 12/27/2020 16:01:33 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 160 12/27/2020 16:01:33 - INFO - __main__ - Gradient Accumulation steps = 20 12/27/2020 16:01:33 - INFO - __main__ - Total optimization steps = 9700 Epoch: 0%| | 0/100 [00:00<?, ?it/s/data/stars/user/jhou/collection-stars/anaconda3/envs/pytorch_stable_171_pip_cu102/lib/python3.6/site-packages/PIL/Image.py:2837: DecompressionBombWarning: Image size (96592500 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack. DecompressionBombWarning, Iteration: 0%| | 0/1940 [00:02<?, ?it/s] Epoch: 0%| | 0/100 [00:02<?, ?it/s] Traceback (most recent call last): File "run_mmimdb.py", line 572, in <module> main() File "run_mmimdb.py", line 525, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer, criterion) File "run_mmimdb.py", line 151, in train outputs = model(**inputs) File "/data/stars/user/jhou/collection-stars/anaconda3/envs/pytorch_stable_171_pip_cu102/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/data/stars/user/jhou/collection-stars/anaconda3/envs/pytorch_stable_171_pip_cu102/lib/python3.6/site-packages/transformers/models/mmbt/modeling_mmbt.py", line 366, in forward return_dict = return_dict if return_dict is not None else self.config.use_return_dict File "/data/stars/user/jhou/collection-stars/anaconda3/envs/pytorch_stable_171_pip_cu102/lib/python3.6/site-packages/torch/nn/modules/module.py", line 779, in __getattr__ type(self).__name__, name)) torch.nn.modules.module.ModuleAttributeError: 'MMBTForClassification' object has no attribute 'config' ``` torch:1.7.1 transformers:4.0.1 I tried with torch:1.7.0, transformers:4.1.0, also failed with the same error. Any adivce? Thanks.
12-27-2020 14:59:36
12-27-2020 14:59:36
That example is unfortunately unmaintained. Have you tried playing around with LXMERT, which is also a multi-modal model? There is a demo available [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/lxmert).<|||||>Oh, I didn't know there is one with LXMERT. I will try that. Thanks.<|||||>You can make it work with a little inference call modification. Add **"return_dict": False** to **inputs** dict. Like this: ``` inputs = { "input_ids": batch[0], "input_modal": batch[2], "attention_mask": batch[1], "modal_start_tokens": batch[3], "modal_end_tokens": batch[4], "return_dict": False } outputs = model(**inputs) ```
transformers
9,317
closed
Bug: metrics inside the on_evaluate callback are passed wrongly
Hi, it is very helpful to be able to save all metrics at every eval_step with evaluation_strategy = "steps". For this I wrote the following callback to access the metrics: ``` class EvaluationCallback(TrainerCallback): def on_evaluate(self, args, state, control, **kwargs): print("### kwargs ", kwargs['metrics']) ``` which prints {'eval_loss': 972.89990234375, 'eval_acc': 0.0}. I pass this callback to trainer.py, but from what I see the metrics it receives do not match the output of the evaluate() function, which in my case is `{'boolq_eval_loss': 525.3097534179688, 'boolq_eval_acc': 60.6, 'rte_eval_loss': 972.89990234375, 'rte_eval_acc': 0.0}`. Could you tell me how I can access the full output of evaluate() inside this callback? Thanks
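A hedged sketch of a callback that stores whatever metrics dict the Trainer hands to `on_evaluate` (recent Trainer versions pass it as the `metrics` keyword argument); whether that dict contains all task prefixes depends on how `evaluate()` is invoked, which is exactly what is discussed in the comments below:

```python
from transformers import TrainerCallback

class MetricsHistoryCallback(TrainerCallback):
    """Collect every metrics dict passed to on_evaluate, tagged with the global step."""

    def __init__(self):
        self.history = []

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        if metrics is not None:
            # copy so later mutations by the Trainer don't change what was stored
            self.history.append({"step": state.global_step, **metrics})
```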
12-27-2020 14:12:06
12-27-2020 14:12:06
The issue is resolved by adding this line to the evaluate function: self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, results) <|||||>Reopened: this is not actually solved by that line. It does look like a bug - could you have a look, please?<|||||>Hello, could you please provide all of your environment information as asked in the template, as well as the command you used to launch the script? We need this in order to help you. Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,316
closed
[t5 model parallel] misc fixes
This PR: * in 2 places fixes an assumption that devices on the device map are always ` (0, 1, 2, 3)` and: 1. are ordered by their cuda device id and not `(2, 3, 0, 1)` 2. have a stride of 1 and not `(0, 2, 3, 5) ` * adds a missing `to()`, removes a redundant `to()` * removes obvious comments * removes code that gets run twice * this PR continues at #9323 - I branched off from this PR and implemented an automatic remap of inputs and a lot refactoring. I will comment on the reasons for changes in the code. There is one gotcha wrt py36 w/o cython not having its dict ordered. Please see https://github.com/huggingface/transformers/pull/9316#discussion_r549068073 I think sorting out the logic first device/last device/is_this_the_last_layer_of_this_device and such logic should be abstracted away for readability, and not needing to replicate the same logic in each model. Perhaps `self.device_map` should be a smart class that can provide all the answers via its methods. @alexorona, I'm studying your t5-mp implementation to do the same for bart. Thank you for doing the hard work of putting the foundation in place and porting 2 models!!! Please have a look and let me know if my tweaks make sense. Your original code is excellent - I'm just trying to think how to make it easier to replicate it in other models and improve readability, hence a gazillion of questions/suggestions. Also, if you don't mind I have a few design questions: 1. Could you please comment on why you are splitting the encoder between all devices on the device map and the same for the decoder? Won't it be more efficient performance-wise to put the encoder on the first group of devices and decoder on the second? 2. I also find it confusing that the device map doesn't map out the whole model, but just the encoder and assumes that the decoder has the same config. I'm not familiar with t5 but other models definitely can have encoder and decoder that don't at all match number of layers-wise. And while this is perhaps not the case for t5, I think the device map should be intuitively similar for all models as we slowly progress with porting other models to MP. That is I think it should include all layers of the model and not half of them. @patrickvonplaten, @LysandreJik
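To make the last point concrete, here is one possible shape for such a helper. This is purely illustrative, not part of this PR, and the class and method names are made up:

```python
class DeviceMap:
    """Wraps a {device_id: [layer_indices]} mapping so each model doesn't have to
    re-implement first-device / last-device / last-layer-on-this-device logic."""

    def __init__(self, mapping):
        # e.g. {0: [0, 1, 2], 2: [3, 4, 5]} - devices need not be contiguous or ordered
        self.mapping = {int(d): sorted(layers) for d, layers in mapping.items()}
        self._layer_to_device = {l: d for d, layers in self.mapping.items() for l in layers}

    @property
    def first_device(self):
        # device hosting the lowest-indexed layer
        return f"cuda:{self._layer_to_device[min(self._layer_to_device)]}"

    @property
    def last_device(self):
        # device hosting the highest-indexed layer
        return f"cuda:{self._layer_to_device[max(self._layer_to_device)]}"

    def device_for_layer(self, layer_idx):
        return f"cuda:{self._layer_to_device[layer_idx]}"

    def is_last_layer_on_device(self, layer_idx):
        return layer_idx == max(self.mapping[self._layer_to_device[layer_idx]])
```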
12-27-2020 05:30:30
12-27-2020 05:30:30
Somehow I'm feeling that this approach of having special logic to remap inputs ahead of time is over-complicated. I haven't tried it yet, but won't it be much much simpler to remap inputs once the model layer is visible and just before they are used by that layer - i.e. at the point where one gets: ``` RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! ``` Then we just do `input_foo = input_foo.to(next(self.parameters()).device)` and we are done? No logic required other than just put the inputs on the same device as this layer. We might be able to even remove all those `if self.model_parallel` in most places in `forward`, and have the same code for w/ or w/o MP. Perhaps with some wrapper that will be noop when not under MP. It could also handle `None` to avoid a gazillion of `if not None` checks. I'd also make it an in-place operation, just like `nn.Module.to` does. **edit** I branched off from this PR and implemented this - works amazingly simple: https://github.com/huggingface/transformers/pull/9323<|||||>I also think #9323 is the way to go here<|||||>too long. closing.
transformers
9,315
closed
[model site] search UI: language tags, directionality and filtering
I tried to use the models site to find which models I can use for translation of specific languages, and here are the issues I have encountered while doing that: 1. many HF created models aren't tagged for the languages they are trained for - e.g. t5: https://github.com/huggingface/transformers/issues/9314 - would it be possible to go over the HF-created main models and ensure they are clearly tagged with languages they were trained for? These get downloaded a lot, so putting a bit of metadata will go a long way of making user's life a bit easier. 2. the language tags presume bi-direction, but many models have been trained in one direction only - e.g. most wmt and most t5 models. Would it be helpful to support not just the language tags but also the directional language tags if they are one-way only? The t5 models are one direction only, so probably need one direction tags. Not sure how that would work with Language tags in search UI. Perhaps it's enough to indicate that in README, but as the number of models grows being able to quickly filter what's needed will save a lot of user's time, so perhaps planning ahead would be useful. i.e. if I need to perform a FR to EN translation, the user may benefit from getting hits for only models that can do that. 3. wrt search UI - I don't understand how a handful of special language tags is selected - there are about a dozen of language tags in the search API dropdown, Malay is there when there are hardly any models trained on that language, but Russian which is 5th on the list of number of models is not there. And those languages that are "favorite" aren't sorted... very strange. 4. search UI 2: And to get to the language one wants which is not on the favorite list - one has to solve the puzzle: * select "See All languages", which goes to https://huggingface.co/languages * hit on the list of models for that language, * then filter by the model type by typing it in and then one has arrived. Surely, there must be an easier way to select a language filter that doesn't take 3 steps which aren't obvious at all 5. Moreover if I want to select 2 languages that aren't on the favorite list, I'm out of luck, since it's not possible with the current API. It only works for the favorite list. And even if some of us know how to hack the URL and manually insert: https://huggingface.co/models?filter=ru,en - this is an OR operation, how do I do AND operation or I guess this is related to item 2 of this Issue - how do I filter by the to/from language. All of these requests/questions are nice to have and none a showstopper. Thank you! @julien-c
12-27-2020 03:24:42
12-27-2020 03:24:42
cc'ing @gary149 and @beurkinger <|||||>Hmm, I just discovered I had 2 related issues opened some months back: - https://github.com/huggingface/transformers/issues/8531 - https://github.com/huggingface/transformers/issues/7206
transformers
9,314
closed
[model site] missing language tags for t5 models
Would it be possible to update core t5-* models' cards to include what languages they were trained on? Currently it says "en", which is very lacking. e.g., see: * https://huggingface.co/t5-base * https://huggingface.co/t5-small * etc. The core t5 models should somehow have hits with https://huggingface.co/models?search=t5&filter=de, but they don't. Probably because they aren't tagged with the language tags. So they aren't found. From: https://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json it looks like: French/German/Romanian. But also it looks to only support one direction, so probably adding the following would be clear enough to the end user: * en_to_fr * en_to_ge * en_to_ro Thank you. @patrickvonplaten
12-27-2020 02:38:53
12-27-2020 02:38:53
I updated all 5 models to include fr/ro/de language tags.
transformers
9,313
closed
[TFBart-like models] Problem with tf saving
## Context Usually, encoder-decoder models require both `input_ids` and `decoder_input_ids` in order to do one forward pass. If one *e.g.* only passes the `input_ids` to TFT5 -> the model will complain: ```python from transformers import TFT5ForConditionalGeneration import tensorflow as tf model = TFT5ForConditionalGeneration.from_pretrained("t5-small") model(input_ids=tf.convert_to_tensor([10 * [2]])) # => will result in error saying `decoder_input_ids` have to be provided which is expected and correct ``` Now TFBart is a bit special in that it automatically generates the `decoder_input_ids` if they are not passed -> so that the above example would not throw an error for TFBartForConditionalGeneration. The reason for this is this line: https://github.com/huggingface/transformers/blob/61443cd7d917ef323a799ee27bb4abc4344f0d11/src/transformers/models/bart/modeling_tf_bart.py#L1053 -> it automatically creates the `decoder_input_ids` from the `input_ids` if they are not provided. This is however more a hack than a good solution IMO. Soon we want to decouple the Bart-like models from each other and it would be good to delete this line from at least new Bart-like models. Now the problem. ## Problem: The problem is now that if we delete these lines from Bart, then the `tf.saved_model.save(model, tmpdirname)` function does not work anymore. To reproduce: Go into master and comment out this if statement in TFBart: https://github.com/huggingface/transformers/blob/61443cd7d917ef323a799ee27bb4abc4344f0d11/src/transformers/models/bart/modeling_tf_bart.py#L1053. Then run the following code: ```python from transformers import TFBartForConditionalGeneration import tempfile import tensorflow as tf model = TFBartForConditionalGeneration.from_pretrained("sshleifer/bart-tiny-random") input_ids = tf.convert_to_tensor([10 * [1]]) decoder_input_ids = tf.convert_to_tensor([10 * [8]]) inputs_dict = {"input_ids": input_ids, "decoder_input_ids": decoder_input_ids} logits = model(inputs_dict).logits model._saved_model_inputs_spec = None model._set_save_spec(inputs_dict) with tempfile.TemporaryDirectory() as tmpdirname: tf.saved_model.save(model, tmpdirname) model = tf.keras.models.load_model(tmpdirname) logits_2 = model(inputs_dict)["logits"] ``` => the code will throw an error, but it should not! It seems like there is a weird naming mismatch between `input_ids` of `TFBartDecoder` and the `decoder_input_ids` in `TFBartModel`...@jplu I'd be thrilled if you could take a look at this and see how it can be solved.
12-26-2020 23:35:59
12-26-2020 23:35:59
This bug forced me to disable the corresponding test in the new `TFLed` model for now, see: https://github.com/huggingface/transformers/pull/9278/files#r549042909<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,312
closed
RAG model implementation seems different from the paper
Hi folks, Thanks for open-sourcing RAG! After reading the model description in the paper and the actual code, I noticed a few discrepancies: 1. The marginalization for `RagSequenceForGeneration` seems a bit strange. From line 998 to line 1001 ([link](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/modeling_rag.py#L998)), only the second tokens in `seq_logprobs` are getting scored by `doc_logprobs`: ``` # RAG-sequence marginalization first_token_scores = seq_logprobs[:, :, :1, :] second_token_scores = seq_logprobs[:, :, 1:2, :] remainder = seq_logprobs[:, :, 2:, :] rag_logprobs = torch.cat([first_token_scores, second_token_scores + doc_logprobs, remainder], dim=2) ``` I wonder if this is intended? I couldn't find this mentioned in the paper. 2. The marginalization for `RagTokenForGeneration` seems more like the Rag-sequence model in the paper, because the doc_scores are the same for all tokens in the same sequence. Is this correct? ``` # RAG-token marginalization seq_logprobs = torch.nn.functional.log_softmax(seq_logits, dim=-1).view( seq_logits.shape[0] // n_docs, n_docs, -1, seq_logits.size(-1) ) doc_logprobs = torch.log_softmax(doc_scores, dim=1) log_prob_sum = seq_logprobs + doc_logprobs.unsqueeze(-1).unsqueeze(-1) return torch.logsumexp(log_prob_sum, dim=1) ``` Thanks in advance!
12-26-2020 22:36:40
12-26-2020 22:36:40
Hey @XinyuHua, regarding 1) I'm not really sure either. Maybe the author can give a better answer here (hope it's ok to ping you here @ola13) For 2) Yes it should be correct. You can see from the paper that at every generation step `i` the marginal probability over all tokens `z` is calculated (compare to the equation of RAG-Token Model in 2.1). In the equation, we sum over all `z` which corresponds to summing overall `doc_logprobs` above -> so this looks correct to me. Just the fact that the `marginalize` function (your code above) is executed at every forward pass shows that this has to correspond to `RagToken`.<|||||>Thanks for the explanation! > the marginal probability over all tokens `z` is calculated `z` is the document instead of token, right? And the paper says "we can draw a different latent document for each target token and marginalize accordingly", so `p(z|x)` should be a different score for different `y_i`, but the code looks like `p(z|x)` is unchanged for all `y_i`s, whose log form is `doc_logprobs`. So effectively this is how `RagSequence` model is framed. Does that mean the `RagToken` model is actually trained the same way as `RagSequence`, but just the generation is different? Another related question, I checked the pre-trained `RagConfig` for `RagToken`, and the `do_marginalize` is actually set to False, so the marginalize method is never called during forward?<|||||>`z` is the tensor of containing the logprob of all docs -> this is why it's called `doc_logprobs`. If you check out the dimensions of this tensor you should see that one dimension exactly corresponds to `n_docs`. `do_marginalize` is called at every forward pass because it's set to `True` in the function argument here: https://github.com/huggingface/transformers/blob/8e74eca7f2b3235f8d5340d66361ea656c67bac7/src/transformers/models/rag/modeling_rag.py#L1099<|||||>Hi @XinyuHua, thanks for the questions! Regarding your first point: > The marginalization for RagSequenceForGeneration seems a bit strange. From line 998 to line 1001 (link), only the second tokens in seq_logprobs are getting scored by doc_logprobs Since we're operating in the log-space, multiplications from the formulas in section 2.1. of the paper become additions. So instead of multiplying the `p(z|x) * p(y| x,z)`, we sum: `log p(z|x) + log p(y | z,x)` (or `doc_logprobs` + `seq_logprobs` using our variables names from the code), where x is the input sequence, y is the output sequence and z is the retrieved document. Note that in the logspace, `log p(y | z,x)` decomposes into the sum of logprobs of each token in y. In the part of the code you linked we perform this summation - we only want to add `doc_logprobs` once per sequence - we don't need to add it to each token of the sequence. Now the reason we add `doc_logprobs` to the second token is that we want to avoid adding it to the BOS token, in case the target sequence doesn't contain one or in case the `exclude_bos_score` argument is used - otherwise we would effectively do no marginalization at all in these cases. I hope this helps, but let us know if anything's still unclear!<|||||>Hi @ola13 and @patrickvonplaten , thanks for the detailed explanations!
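For reference, the two marginalizations from section 2.1 of the RAG paper that this thread keeps referring to, where $z$ ranges over the top-$k$ retrieved documents:

```latex
% RAG-Sequence: a single document is marginalized out per whole output sequence
p_{\text{RAG-Sequence}}(y \mid x) \approx \sum_{z \in \text{top-}k(p(\cdot \mid x))} p_\eta(z \mid x) \prod_{i=1}^{N} p_\theta(y_i \mid x, z, y_{1:i-1})

% RAG-Token: a different document can be marginalized out at every target token
p_{\text{RAG-Token}}(y \mid x) \approx \prod_{i=1}^{N} \sum_{z \in \text{top-}k(p(\cdot \mid x))} p_\eta(z \mid x) \, p_\theta(y_i \mid x, z, y_{1:i-1})
```

In log-space the RAG-Sequence product over tokens becomes a sum of token log-probabilities, which is why `doc_logprobs` only needs to be added once per sequence in the code quoted above.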
transformers
9,311
closed
T5-base goes out of memory on 4 GPUs with a batch size as small as 4
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: LINUX - Python version: 3.7 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): - - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> Trainer: @sgugger T5: @patrickvonplaten examples/seq2seq: @patil-suraj ## Information Model I am using T5-base with batch size of 8 and on 4 GPUs, I am always getting out of memory even with small batch sizes, This looks like a bug as this model is not really big. I am under time pressure. Is there anyone who could help me with this bug? thanks The tasks I am working on is: * GLUE benchmark <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> <!-- A clear and concise description of what you would expect to happen. --> ## Error Stack ``` 0%| | 0/148395 [00:00<?, ?it/s]Traceback (most recent call last): File "finetune_trainer.py", line 303, in <module> main() File "finetune_trainer.py", line 239, in main training_args.optimize_from_scratch) else None File "/julia/codes/trainers/trainer.py", line 804, in train self.optimizer.step() File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/optim/lr_scheduler.py", line 67, in wrapper return wrapped(*args, **kwargs) File "/opt/conda/envs/t5/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/optimization.py", line 285, in step state["exp_avg_sq"] = torch.zeros_like(p.data) RuntimeError: CUDA out of memory. 
Tried to allocate 36.00 MiB (GPU 2; 15.78 GiB total capacity; 14.10 GiB already allocated; 20.25 MiB free; 14.42 GiB reserved in total by PyTorch) Traceback (most recent call last): File "finetune_trainer.py", line 303, in <module> main() File "finetune_trainer.py", line 239, in main training_args.optimize_from_scratch) else None File "/julia/codes/trainers/trainer.py", line 804, in train self.optimizer.step() File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/optim/lr_scheduler.py", line 67, in wrapper return wrapped(*args, **kwargs) File "/opt/conda/envs/t5/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/optimization.py", line 296, in step denom = exp_avg_sq.sqrt().add_(group["eps"]) RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 0; 15.78 GiB total capacity; 14.06 GiB already allocated; 4.25 MiB free; 14.44 GiB reserved in total by PyTorch) Traceback (most recent call last): File "finetune_trainer.py", line 303, in <module> main() File "finetune_trainer.py", line 239, in main Traceback (most recent call last): File "finetune_trainer.py", line 303, in <module> training_args.optimize_from_scratch) else None File "/julia/codes/trainers/trainer.py", line 804, in train main() File "finetune_trainer.py", line 239, in main self.optimizer.step() File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/optim/lr_scheduler.py", line 67, in wrapper training_args.optimize_from_scratch) else Nonereturn wrapped(*args, **kwargs) File "/julia/codes/trainers/trainer.py", line 804, in train File "/opt/conda/envs/t5/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/optimization.py", line 296, in step denom = exp_avg_sq.sqrt().add_(group["eps"]) RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 1; 15.78 GiB total capacity; 14.13 GiB already allocated; 10.25 MiB free; 14.46 GiB reserved in total by PyTorch) self.optimizer.step() File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/optim/lr_scheduler.py", line 67, in wrapper return wrapped(*args, **kwargs) File "/opt/conda/envs/t5/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/optimization.py", line 285, in step state["exp_avg_sq"] = torch.zeros_like(p.data) RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 3; 15.78 GiB total capacity; 14.10 GiB already allocated; 26.25 MiB free; 14.44 GiB reserved in total by PyTorch) 0%| | 0/148395 [00:00<?, ?it/s] Traceback (most recent call last): File "/opt/conda/envs/t5/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/envs/t5/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 260, in <module> main() File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 256, in main cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/envs/t5/bin/python', '-u', 'finetune_trainer.py', '--local_rank=3', 'configs/glue.json']' returned non-zero exit status 1. ```
12-26-2020 17:04:27
12-26-2020 17:04:27
Here are things you may try (they are unrelated to each other, so you can try them in any order that resonates): 1. Turn off `--fp16`, or keep it but switch to [pytorch-nightly](https://pytorch.org/get-started/locally/) - a large memory leak related to autocast (fp16) was fixed there a few weeks ago. If your problem is not related to `autocast`/fp16, this won't help; `--fp16` was what triggered the leak. Switching to apex amp is another option to try if you're hitting this memory leak in pytorch. 2. If you are using the huggingface trainer (I assume `finetune_trainer.py` is from examples/seq2seq, so you're good) and you can use `transformers` master, I'd suggest the just-added `--sharded_ddp` option. In my few experiments I was able to fit 2-3 times bigger batches. It's documented in this PR https://github.com/huggingface/transformers/pull/9208 (we are just waiting for a new fairscale release to merge it), but you can use it without needing to understand the details if you are short on time. To try it, install both transformers and [fairscale](https://github.com/facebookresearch/fairscale/) from master and the new option will be available. And please edit your issue to show the command line you use, so we can see what CLI args and/or hyperparameters you're using.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.