repo: string (1 class)
number: int64 (1 – 25.3k)
state: string (2 classes)
title: string (1 – 487 chars)
body: string (0 – 234k chars, nullable)
created_at: string (19 chars)
closed_at: string (19 chars)
comments: string (0 – 293k chars)
transformers
4,599
closed
ImportError: cannot import name 'AutoModelForQuestionAnswering' from 'transformers'
Hi friends, I would like to use the transformers library, but on import I received this error. This is the code: ``` from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline import torch # LOAD MODEL tokenizer = AutoTokenizer.from_pretrained("savasy/bert-base-turkish-squad") model = AutoModelForQuestionAnswering.from_pretrained("savasy/bert-base-turkish-squad") ``` **Error** **ImportError: cannot import name 'AutoModelForQuestionAnswering' from 'transformers' (C:\Users\oguzk\anaconda3\lib\site-packages\transformers\__init__.py)**
05-26-2020 12:20:32
05-26-2020 12:20:32
What is your `transformers` version? Do you have PyTorch installed?<|||||>Thank you. I solved it.<|||||>Great to hear!<|||||>> Thank you. I solved it. How did you solve it?<|||||>I installed the PyTorch library.
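For anyone who hits the same error: the resolution above suggests the missing piece was the PyTorch backend, so a quick environment check along these lines (a minimal sketch, not part of the original thread) narrows it down:

```python
import importlib.util

import transformers

print("transformers version:", transformers.__version__)
print("torch installed:", importlib.util.find_spec("torch") is not None)

# The PyTorch-backed Auto classes are only importable once torch is installed.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("savasy/bert-base-turkish-squad")
model = AutoModelForQuestionAnswering.from_pretrained("savasy/bert-base-turkish-squad")
```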
transformers
4,598
closed
[Reformer] automate axial_pos_shape
This PR automates the calculation of `axial_pos_shape`. Checked that every combination of 2**n works using this sheet: https://docs.google.com/spreadsheets/d/19gnP1ve2fT2F59LNiky44SPpmtmOegfIlk5_3OFXjyU/edit?usp=sharing @patrickvonplaten
05-26-2020 12:11:19
05-26-2020 12:11:19
Hi @flozi00, Thanks for the PR! To be honest I don't think we should merge this, for a couple of reasons: 1. In contrast to `num_buckets`, which the paper says should always be around ~ `2 * sequence_length / chunk_length`, `axial_pos_shape` can be freely set by the user. 2. We are trying to add as few automatic settings that are not visible to the user as possible (also @thomwolf here). The reason is that they can later lead to errors that are hard for the user to understand. In this case, I don't think the user should use AxialPositionEmbeddings before having read the docs and understood how it works. Automatically setting `num_buckets` is already suboptimal in this sense.<|||||>BTW, I just answered your email - sorry I forgot about this
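To make the point about manual configuration concrete, a tiny sketch (with a made-up padded sequence length) of the constraint a user has to satisfy when setting `axial_pos_shape` by hand:

```python
from transformers import ReformerConfig

seq_len = 4096  # hypothetical padded sequence length fed to the model
config = ReformerConfig(axial_pos_shape=(64, 64))  # 64 * 64 must equal seq_len

assert config.axial_pos_shape[0] * config.axial_pos_shape[1] == seq_len
```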
transformers
4,597
closed
[Draft] Bharaax outputattentions
05-26-2020 12:02:16
05-26-2020 12:02:16
transformers
4,596
closed
AttributeError: 'Namespace' object has no attribute 'to_json_string'
`trainer.train()` fails because `self.args.to_json_string()` raises this error; I don't know how to set the parameters correctly.
05-26-2020 11:18:23
05-26-2020 11:18:23
I'm not sure I understand what you're trying to do. Do you mind explaining? Showing the code you're using would help as well.<|||||>``` training_args = dict( num_cores= 8, model_name_or_path= 't5-base', max_len= 512 ,target_max_len= 2, output_dir= './models', overwrite_output_dir= True, per_gpu_train_batch_size= 8, per_gpu_eval_batch_size= 8, gradient_accumulation_steps= 4, learning_rate= 1e-4, tpu_num_cores= 8, logging_dir='/log', do_train= True, weight_decay=0.00, device='xla',local_rank=-1, max_steps=10000, adam_epsilon=1e-8, warmup_steps=0, train_batch_size=8, eval_batch_size=8, num_train_epochs=1, early_stop_callback=False, fp_16=False, # if you want to enable 16-bit training then install apex and set this to true opt_level='O1', # you can find out more on optimisation levels here https://nvidia.github.io/apex/amp.html#opt-levels-and-properties max_grad_norm=1.0, # if you enable 16-bit training then set this to a sensible value, 0.5 is a good default seed=42, fp16=False, n_gpu=0,SummaryWriter=None) from transformers import Trainer trainer = Trainer( model=model, args=argparse.Namespace(**training_args), train_dataset=train_dataset, data_collator=T2TDataCollator(), prediction_loss_only=True, tb_writer=None ) trainer.train() ````<|||||>here is my code and i got AttributeError: 'Namespace' object has no attribute 'to_json_string'<|||||>Trainer's args should be a `TrainingArguments` instance, not a dict or a Namespace. Try: ```python from transformers import TrainingArguments ```
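A minimal sketch of the suggested fix, reusing the `model` and `train_dataset` objects from the snippet above and keeping only a few of its arguments (the exact argument set of `TrainingArguments` depends on your transformers version):

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./models",
    overwrite_output_dir=True,
    learning_rate=1e-4,
    num_train_epochs=1,
    seed=42,
)

trainer = Trainer(
    model=model,                  # model defined earlier in the thread
    args=training_args,           # a TrainingArguments instance, not a Namespace
    train_dataset=train_dataset,  # dataset defined earlier in the thread
)
trainer.train()
```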
transformers
4,595
closed
KeyError when loading a trained EncoderDecoder model
# πŸ› Bug ## Information Error when loading a trained EncoderDecoder model. When loading the config the in `configuration_auto.py` the `model_type` is expected on the form `encoder-decoder` but in `configuration_encoder_decoder.py` `model_type` is on the form `encoder_decoder` which raises a KeyError. The hyphen version seems to be convention in the other model configuration files. I guess this is something for @patrickvonplaten
05-26-2020 11:10:18
05-26-2020 11:10:18
Hi @gustavscholin, Thanks for you issue! Could you please provide a code example so that I can reproduce the error? <|||||>Hi, @patrickvonplaten @gustavscholin For me, setting "base_model_prefix" in modeling_encoder_decoder.py fixed this problem, as finding params is based on self.base_model_prefix. Is it fundamental solution? or just short-sighted? <|||||>@patrickvonplaten, here's a colab notebook to reproduce the error: https://colab.research.google.com/drive/102U7pJJcyw__Yq0PERxAKvKPSx3bvNSi?usp=sharing<|||||>> Hi, @patrickvonplaten @gustavscholin > For me, setting "base_model_prefix" in modeling_encoder_decoder.py fixed this problem, as finding params is based on self.base_model_prefix. > > Is it fundamental solution? or just short-sighted? No that was the right solution :-) I did exactly the same in this fix: #4680<|||||>> @patrickvonplaten, here's a colab notebook to reproduce the error: > > https://colab.research.google.com/drive/102U7pJJcyw__Yq0PERxAKvKPSx3bvNSi?usp=sharing A saved encoder-decoder model will always only be saved in a single folder. A single folder can always be loaded with `.from_pretrained()`. So to make your notebook work, you simply have to replace this line: ```python saved_model = EncoderDecoderModel.from_encoder_decoder_pretrained('test_run', 'test_run') ``` by ```python saved_model = EncoderDecoderModel.from_pretrained('test_run') ```
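For completeness, a small sketch of the save/reload cycle the fix enables (the checkpoint names are illustrative):

```python
from transformers import EncoderDecoderModel

# Build an encoder-decoder from two pretrained checkpoints and save it into one folder.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
model.save_pretrained("test_run")

# A single saved folder is reloaded with from_pretrained, not from_encoder_decoder_pretrained.
saved_model = EncoderDecoderModel.from_pretrained("test_run")
```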
transformers
4,594
closed
KeyError: "Unable to open object (object 'bias:0' doesn't exist)"
hi, i have create a new class named ClsNerModel inherited from TFBertPreTrainedModel i have trained it successfully and save the model using model.save_pretrained to some dir however, when i loaded from that dir using ClsNerModel.from_pretrained, it fails and report some error below. sorry to report this, and any advice is appreciated. File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 410, in from_pretrained model.load_weights(resolved_archive_file, by_name=True) File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 250, in load_weights return super(Model, self).load_weights(filepath, by_name, skip_mismatch) File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1264, in load_weights f, self.layers, skip_mismatch=skip_mismatch) File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 753, in load_weights_from_hdf5_group_by_name weight_values = [np.asarray(g[weight_name]) for weight_name in weight_names] File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 753, in <listcomp> weight_values = [np.asarray(g[weight_name]) for weight_name in weight_names] File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/h5py/_hl/group.py", line 264, in __getitem__ oid = h5o.open(self.id, self._e(name), lapl=self._lapl) File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper File "h5py/h5o.pyx", line 190, in h5py.h5o.open KeyError: "Unable to open object (object 'bias:0' doesn't exist)" my code is something like this: ############################################################################## class ClsNerModel(TFBertPreTrainedModel): def __init__(self, config, *inputs, cls_num_labels:int=2, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.cls_num_labels = cls_num_labels self.bert = TFBertMainLayer(config, name="bert") self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) self.classifier = tf.keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" ) self.cls_dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) self.cls_classifier = tf.keras.layers.Dense( self.cls_num_labels, kernel_initializer=get_initializer(config.initializer_range), name="cls_classifier" ) def call(self, inputs, **kwargs): outputs = self.bert(inputs, **kwargs) sequence_output = outputs[0] # (b, t, d) pool_output = outputs[1] # (b, d) only for cls token sequence_output = self.dropout(sequence_output, training=kwargs.get("training", False)) token_logits = self.classifier(sequence_output) pool_output = self.cls_dropout(pool_output, training=kwargs.get("training", False)) cls_logits = self.cls_classifier(pool_output) outputs = (token_logits, cls_logits) + outputs[2:] # add hidden states and attention if they are here return outputs # scores, (hidden_states), (attentions) ############################################################################## config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, 
num_labels=num_labels, id2label=label_map, label2id={label: i for i, label in enumerate(label_map)}, cache_dir=model_args.cache_dir, ) model = ClsNerModel.from_pretrained( model_path, from_pt=bool(".bin" in model_args.model_name_or_path), output_loading_info=True, config=config, cls_num_labels=cls_num_labels, cache_dir=model_args.cache_dir, )
05-26-2020 10:25:18
05-26-2020 10:25:18
Hi, I tried again: I wrote the ClsNerModel class into model_tf_bert.py and then reinstalled transformers. It works!! So, if I create my class outside the transformers library, what else should I do to make it work and avoid the mistakes above? Thanks a lot!
transformers
4,593
closed
[Longformer For Question Answering] Conversion script, doc, small fixes
This PR adds: - the Longformer For Question Answering doc - the link to the (official) uploaded model: https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa - some minor refactoring - a conversion script @ibeltagy @patil-suraj
05-26-2020 09:29:21
05-26-2020 09:29:21
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=h1) Report > Merging [#4593](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b86e42e0ac1b59f21f0eccf351d3346bbe3ed4eb&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4593/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4593 +/- ## ======================================= Coverage 78.09% 78.09% ======================================= Files 123 123 Lines 20624 20625 +1 ======================================= + Hits 16106 16108 +2 + Misses 4518 4517 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.41% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/tokenization\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbG9uZ2Zvcm1lci5weQ==) | `100.00% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=footer). Last update [b86e42e...d8d4187](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,592
closed
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
# πŸ› Bug I am running Huggingface's nlp pipeline. The code is below: ``` nlp = pipeline("question-answering", model = 'distilbert-base-cased-distilled-squad', tokenizer='distilbert-base-cased-distilled-squad') ``` Model I am using is pipeline. I try run an example using docker and I get the following error: ``` convert squad examples to features: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 360.86it/s] add example index and unique id: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 3120.76it/s] myimage_1 | [2020-05-26 08:38:22,060] ERROR in app: Exception on /deep/search [POST] myimage_1 | Traceback (most recent call last): myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app myimage_1 | response = self.full_dispatch_request() myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request myimage_1 | rv = self.handle_user_exception(e) myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception myimage_1 | reraise(exc_type, exc_value, tb) myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise myimage_1 | raise value myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request myimage_1 | rv = self.dispatch_request() myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request myimage_1 | return self.view_functions[rule.endpoint](**req.view_args) myimage_1 | File "main.py", line 50, in launch_app myimage_1 | answers = nlp(question=ques0, context=abstract, topk = 10) myimage_1 | File "/usr/local/lib/python3.7/site-packages/transformers/pipelines.py", line 1010, in __call__ myimage_1 | start, end = self.model(**fw_args) myimage_1 | File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ myimage_1 | result = self.forward(*input, **kwargs) myimage_1 | File "/usr/local/lib/python3.7/site-packages/transformers/modeling_distilbert.py", line 720, in forward myimage_1 | input_ids=input_ids, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds myimage_1 | File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ myimage_1 | result = self.forward(*input, **kwargs) myimage_1 | File "/usr/local/lib/python3.7/site-packages/transformers/modeling_distilbert.py", line 482, in forward myimage_1 | inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim) myimage_1 | File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ myimage_1 | result = self.forward(*input, **kwargs) myimage_1 | File "/usr/local/lib/python3.7/site-packages/transformers/modeling_distilbert.py", line 86, in forward myimage_1 | seq_length = input_ids.size(1) myimage_1 | IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) ``` My python version in 3.7.4. Please help in fixing this.
05-26-2020 09:11:41
05-26-2020 09:11:41
Hi, what are your transformers and pytorch versions? Is that the entire code you're using? Your code doesn't crash here but crashes at the line ```py answers = nlp(question=ques0, context=abstract, topk = 10) ``` Do you mind providing the question and the context you're using?<|||||>Sorry, it's a mistake on my part. Thanks for your reply.
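For reference, a self-contained call that works with this pipeline; the question and context strings here are made up for illustration, and both must be non-empty strings:

```python
from transformers import pipeline

nlp = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
    tokenizer="distilbert-base-cased-distilled-squad",
)

result = nlp(
    question="What does the pipeline return?",
    context="The question-answering pipeline returns an answer span together with a score.",
)
print(result)
```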
transformers
4,591
closed
Create README.md
05-26-2020 08:25:18
05-26-2020 08:25:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=h1) Report > Merging [#4591](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b86e42e0ac1b59f21f0eccf351d3346bbe3ed4eb&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4591/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4591 +/- ## ======================================= Coverage 78.09% 78.09% ======================================= Files 123 123 Lines 20624 20624 ======================================= + Hits 16106 16107 +1 + Misses 4518 4517 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4591/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=footer). Last update [b86e42e...1e39227](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,590
closed
[Model hub web parsing MD code error]
# πŸ› Bug ## Information Hi guys! If you navigate to https://github.com/huggingface/transformers/blob/master/model_cards/mrm8488/bert-italian-finedtuned-squadv1-it-alfa/README.md You will see the emojis without problem. But if you go to its HTML page on the model hub: https://huggingface.co/mrm8488/bert-italian-finedtuned-squadv1-it-alfa Emojis are not shown
05-26-2020 07:45:44
05-26-2020 07:45:44
Yes, that is a GitHub-only (GFM) feature, while we use marked.js (https://marked.js.org/) for markdown parsing. You'll have to use the actual emojis for now
transformers
4,589
closed
[LongformerForQuestionAnswering] fix qa example in docstring
@patrickvonplaten
05-26-2020 07:45:10
05-26-2020 07:45:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=h1) Report > Merging [#4589](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b86e42e0ac1b59f21f0eccf351d3346bbe3ed4eb&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4589/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4589 +/- ## ======================================= Coverage 78.09% 78.09% ======================================= Files 123 123 Lines 20624 20624 ======================================= Hits 16106 16106 Misses 4518 4518 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4589/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.40% <ΓΈ> (ΓΈ)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=footer). Last update [b86e42e...368ce8e](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for the PR! I actually just uploaded a pretrained question answering model from allen ai and changed the docs accordingly. So I think we don't need this PR anymore ;-). See #4593
transformers
4,588
closed
Help Wanted: Predict Next Two Tokens
Is it possible to change this in order to predct the next two tokens? ``` import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel import torch.nn.functional as F import re def grow_branches(sentence_so_far, probs, input_probability,past, h): #recursive function to find all sentence completions global branch_list global leaf_list global complete_list global model sorted_probability_list = sorted(enumerate(probs), key=lambda x: x[1], reverse=True) has_children = False for (this_token,this_probability) in sorted_probability_list: next_probability = this_probability * input_probability out_sentence = sentence_so_far.copy() sentence_and_probability = (out_sentence, input_probability) pattern = ' [A-Z]{1,1}' pattern2 = '[A-Z]{1,1}' test_string = tokenizer.decode(out_sentence[-1]) result = re.match(pattern, test_string) or re.match(pattern2, test_string) if not (result or (out_sentence[-1] in {1583,1770,6997,19090,9074,7504})) and (this_token == 13): #if the next token is going to be a period, then no need to carry out that step. #except allow Mr., Dr., Mrs., Ms., Lt., Sgt., Jr. or single initials. sentence_and_probability = (out_sentence, next_probability) complete_list.append(sentence_and_probability) return if next_probability < h: if has_children == True: branch_list.append(sentence_and_probability) else: leaf_list.append(sentence_and_probability) return else: has_children = True next_sentence = sentence_so_far.copy() next_sentence.append(this_token) (next_probability_list,next_past) = expand_node(next_sentence,past) grow_branches(next_sentence,next_probability_list, next_probability, next_past, h) def expand_node(sentence, past): #finds probabilities for the next token using gpt-2 global model if past == None: input_ids = torch.tensor(sentence).unsqueeze(0) else: input_ids = torch.tensor([sentence[-1]]).unsqueeze(0) inputs = {'input_ids': input_ids} with torch.no_grad(): logits, past = model(**inputs, past=past) logits = logits[:, -1, :] probs = F.softmax(logits, dim=-1).tolist()[0] return (probs, past) # globals here tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2') leaf_list = [] branch_list = [] complete_list = [] probability_threshhold=float(input("probability cutoff (e.g. .001 or less):")) raw_prompt = input("partial sentence to complete:") prompt=tokenizer.encode(raw_prompt) (probs, past) = expand_node(prompt, None) grow_branches(prompt,probs,1,past,probability_threshhold) sorted_complete_list = sorted(complete_list, reverse=True,key=lambda x: x[1]) sorted_leaf_list = sorted(leaf_list, reverse=True,key=lambda x: x[1]) sorted_branch_list = sorted(branch_list, reverse=True,key=lambda x: x[1]) # to get the most probable completed sentence: #tokenizer.decode(sorted_complete_list[0]) #print just the completions for (sentence, prob) in sorted_complete_list: #print(round(prob,6),end=':') if prob>probability_threshhold - 1: print(repr(tokenizer.decode(sentence[len(prompt):])).strip("'"),end='|') else: print(repr(tokenizer.decode(sentence[len(prompt):])).strip("'"),end='\\') for (sentence, prob) in sorted_leaf_list: if prob>probability_threshhold: print(repr(tokenizer.decode(sentence[len(prompt):])).strip("'"),end='|') else: print(repr(tokenizer.decode(sentence[len(prompt):])).strip("'"),end='\\') ```
05-26-2020 03:05:28
05-26-2020 03:05:28
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
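The thread was closed without an answer; one way to score two-token continuations, sketched against a recent transformers API (greedy second step over the top few first tokens):

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = tokenizer.encode("The quick brown fox", return_tensors="pt")

with torch.no_grad():
    # Step 1: distribution over the first next token.
    probs1 = F.softmax(model(prompt).logits[:, -1, :], dim=-1)
    top_p1, top_ids1 = probs1.topk(5)

    # Step 2: for each first-token candidate, pick the best second token.
    candidates = []
    for p1, tok1 in zip(top_p1[0], top_ids1[0]):
        extended = torch.cat([prompt, tok1.view(1, 1)], dim=-1)
        probs2 = F.softmax(model(extended).logits[:, -1, :], dim=-1)
        p2, tok2 = probs2[0].max(dim=-1)
        candidates.append((float(p1 * p2), tokenizer.decode([int(tok1), int(tok2)])))

candidates.sort(reverse=True)
print(candidates[:3])  # most probable two-token continuations
```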
transformers
4,587
closed
ensure_ascii=False
05-26-2020 02:17:28
05-26-2020 02:17:28
transformers
4,586
closed
T5Model in fp16 still yields nan with more complex examples
# πŸ› Bug Hello, thank you for the recent [PR](https://github.com/huggingface/transformers/pull/4436) with fp16 fixes. It seems to work well with short inputs, but once the model is fed with some more complex data it still yields nans. ## Information Model I am using: T5 Language I am using the model on: English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Run the code: ``` from transformers import T5Model import torch model = T5Model.from_pretrained("t5-base").cuda().half().eval() inputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda() decoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda() out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids) # encoder outputs out[2][:,:2] ``` output: ``` tensor([[[nan, nan, nan, ..., nan, nan, nan], [nan, nan, nan, ..., nan, nan, nan]]], device='cuda:0', dtype=torch.float16, grad_fn=<SliceBackward>) ``` ## Expected behavior Output with non-nan values. ## Environment info - `transformers` version: 2.10.0 - Platform: Linux-4.15.0-88-generic-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
05-25-2020 22:31:43
05-25-2020 22:31:43
I got the same issue - seems to happen with the larger models (t5 small is fine)<|||||>I can reproduce the error - will investigate :-) <|||||>Okey this took me quite some time to figure out... So what happens is the following. When setting **all** modules in half as is done in the code snippet above, the following happens. At some point in line: https://github.com/huggingface/transformers/blob/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6/src/transformers/modeling_t5.py#L188 the tensor `layer_output` contains `inf` values and then later in: https://github.com/huggingface/transformers/blob/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6/src/transformers/modeling_t5.py#L156 `nan` values enter the game... I don't really think this is a bug in T5, but it's just due to T5's rather unstable architecture. `model.half()` essentially corresponds to an apex level O3: https://nvidia.github.io/apex/amp.html#o3-fp16-training which in itself tends to become unstable... So using your code above and using the `apex` package instead of calling `half()` on the model, you can notice the following. The code snippet which is essentially the same as yours: ```python from transformers import T5Model from apex import amp import torch model = T5Model.from_pretrained("t5-base").cuda().eval() model = amp.initialize(model, opt_level="O3") inputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda() decoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda() out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids) # encoder outputs out[2][:,:2] # nan output ``` yields the same output consisting of `nan` values. The same happens for `opt_level` O2. Using the recommended O1 level of optimization: ```python from transformers import T5Model from apex import amp import torch model = T5Model.from_pretrained("t5-base").cuda().eval() model = amp.initialize(model, opt_level="O1") inputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda() decoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda() out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids) # encoder outputs out[2][:,:2] # valid output ``` however does not produce any `nan` values. As far as I know O1 is also the recommended setting: https://nvidia.github.io/apex/amp.html#o1-mixed-precision-recommended-for-typical-use . As far as I know O1 can already greatly speed up your calculations and save quite some memory, so that I would recommend going for this. Also pinging @mfuntowicz, @julien-c and @LysandreJik for verification<|||||>@patrickvonplaten Even with O1 I tried fine-tuning T5-base, and in less than 100 iterations, it will converge to nan values quickly. Seems like the stability of this model is poor. Perhaps first few iterations of fine-tuning require FP32.<|||||>~I am having issues even in fp32 with everything besides t5-small.~ I am having issues in `O1` with t5-large and t5-base. 
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Having the same issue with loss going to `nan` when fine-tuning tf-base with fp16. tf-small works fine though.<|||||>Ran into this issue and found a workaround to get FP16 training working. T5DenseGatedGeluDense doesn't play nice with FP16, specifically the final dense layer to resize from d_ff to d_model. I used pytorch's autocast/gradscaler mixed precision implementation and created an exception for that specific dense layer. ``` class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) with autocast(enabled=False): hidden_states = self.wo(hidden_states) return hidden_states ```<|||||>@leecming Have you also tried the fix with `T5DenseReluDense`?<|||||>Great qusetion @j-min - I actually didn't find the time yet to test the "new" t5 model with fp16. It might very well be that the following models work fine with fp16: https://huggingface.co/models?search=mt5 and https://huggingface.co/models?search=t5-v1<|||||>@patrickvonplaten @leecming I'm trying the fix as below. ```python3 class T5DenseReluDense(nn.Module): def __init__(self, config): super().__init__() self.wi = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) def forward(self, hidden_states): hidden_states = self.wi(hidden_states) hidden_states = F.relu(hidden_states) hidden_states = self.dropout(hidden_states) with autocast(enabled=False): hidden_states = self.wo(hidden_states) return hidden_states class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) with autocast(enabled=False): hidden_states = self.wo(hidden_states) return hidden_states ``` Btw it results in the error `expected scalar type Half but found Float`, since `hidden_states` parameters are float while self.wo parameters are half. Could you please guide how I bypass the error? 
```python3 import torch from torch.cuda.amp import autocast from transformers import T5Model model = T5Model.from_pretrained("t5-base").cuda().eval() inputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda() decoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda() out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids) # encoder outputs out[2][:,:2] with autocast(): out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids) loss = out.last_hidden_state.exp().mean() ``` <|||||>Oh adding `hidden_states = hidden_states.to(torch.float32)` worked, never mind. Is there a more concrete script to check if this fixes T5's fp16 training? @patrickvonplaten ```python3 class T5DenseReluDense(nn.Module): def __init__(self, config): super().__init__() self.wi = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) def forward(self, hidden_states): hidden_states = self.wi(hidden_states) hidden_states = F.relu(hidden_states) hidden_states = self.dropout(hidden_states) with autocast(enabled=False): hidden_states = hidden_states.to(torch.float32) hidden_states = self.wo(hidden_states) return hidden_states class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) with autocast(enabled=False): hidden_states = hidden_states.to(torch.float32) hidden_states = self.wo(hidden_states) return hidden_states ``` ```python3 import torch from torch.cuda.amp import autocast from transformers import T5Model model = T5Model.from_pretrained("t5-base").cuda().eval() inputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda() decoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda() out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids) # encoder outputs out[2][:,:2] with autocast(): out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids) loss = out.last_hidden_state.exp().mean() print(loss) >>> tensor(1.1017, device='cuda:0', grad_fn=<MeanBackward0>) ``` <|||||>This is actually a topic I wanted to look into more closely and didn't manage to do so time-wise...maybe next week. But in short, one should try to train a whole T5 model with your suggested fix. What I would recommend doing is to take your guys' fix from above and open a PR with it. 
Then with this PR we should fine-tune a whole t5 model on some task, *e.g.* using the Seq2SeqTrainer. E.g. one could adapt this script:https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing and instead of using a `Bert2Bert` model one could just use a `google/t5v1_1-small` or base model and see whether there are any problem in training. also cc @patil-suraj in case he has better pointers/ideas<|||||>I'll try to do a run next week though :-) <|||||>It’s not a good fix since it relies on a specific AMP implementation (autocast) and wouldn’t work on others (e.g., Nvidia APEX). It also uses more memory than a clean AMP implementation. A cleaner quick fix would be to copy BERT’s gradient checkpointing code and train in FP32 mode with checkpointing. Also, Nvidia with the latest Ampere cards has started supporting bf16 which is good news.<|||||>I am having the same issue with mt5-small getting nan with deepspeed, I really appreciate any advice on this. I am having really a hard time with it, thanks a lot @patrickvonplaten @patil-suraj @sgugger Do you mind sharing the current state of mt5 training with fp16? thanks a lot<|||||>see: https://github.com/huggingface/transformers/issues/10830<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>anyone coming after some years, try this https://huggingface.co/google/umt5-small instead<|||||>no luck with https://huggingface.co/google/umt5-small as well even though I was training using `FP32`
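As a lower-risk alternative to the autocast patches above, the "stay in FP32 and save memory with gradient checkpointing" route mentioned earlier can be sketched like this (assuming a transformers version in which T5 exposes the method):

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.gradient_checkpointing_enable()  # trade extra compute for lower memory, no fp16 needed
model.train()
```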
transformers
4,585
closed
Introduce a new tensor type for return_tensors on tokenizer for NumPy
Two changes in this PR: - As we're introducing more than two tensor backend alternatives, I created an enum `TensorType` listing all the possible tensor types we can create: `TensorType.TENSORFLOW`, `TensorType.PYTORCH`, `TensorType.NUMPY`. This might help newcomers who don't know about `"tf"`, `"pt"`. _->Note: TensorType values are compatible with the previous `"tf"`, `"pt"` and now `"np"` strings to allow backward compatibility (+unittest)_ - NumPy is now a possible target when creating tensors. This is useful for JAX :)
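A short usage sketch of the new option (the checkpoint is just an example, and the callable tokenizer syntax assumes a recent version of the library):

```python
from transformers import AutoTokenizer, TensorType

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

np_batch = tokenizer("Hello world", return_tensors="np")                   # string alias
np_batch_enum = tokenizer("Hello world", return_tensors=TensorType.NUMPY)  # enum form

print(type(np_batch["input_ids"]))  # <class 'numpy.ndarray'>
```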
05-25-2020 22:22:12
05-25-2020 22:22:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=h1) Report > Merging [#4585](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e3e552125e86824239e445dd3c659df0aea4db9&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `94.11%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4585/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4585 +/- ## ========================================== - Coverage 78.09% 78.09% -0.01% ========================================== Files 123 123 Lines 20624 20622 -2 ========================================== - Hits 16106 16104 -2 Misses 4518 4518 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <93.93%> (+0.32%)` | :arrow_up: | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.01% <0.00%> (-0.66%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=footer). Last update [3e3e552...7f19c32](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@LysandreJik I pushed a initial version of `test_np_encode_plus_sent_to_model` which converts input to numpy tensor. For the moment we don't have any model to forward through (JAX/Flax PR is not merged). I added a note to complete the unittests when we have the full pipeline available.
transformers
4,584
closed
[ci] fix 3 remaining slow GPU failures
05-25-2020 22:11:58
05-25-2020 22:11:58
Failure is `tests/test_hf_api.py::HfApiEndpointsTest::test_presign_and_upload`, which seems unrelated, so going to merge.
transformers
4,583
closed
Provide simple way to train a new translation model from scratch
# πŸš€ Feature request ## Motivation Huggingface just released a huge pile of pretrained translation models. I just want to train a completely custom model on a custom language pair, without pretraining etc.
05-25-2020 21:33:51
05-25-2020 21:33:51
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>unstale<|||||>@sshleifer I was wondering whether there is any activity on this issue. I have trained some models with MarianMT, but I am really interested in training a model from scratch with the transformers library. <|||||>This isn't supported by default, but is definitely possible. Rough steps would be: 1) Make a local directory with your initialized model and tokenizer. 2) Run a command like [this](https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/distil_marian_no_teacher.sh) where `$m` is the path to your local dir. cc @patil-suraj<|||||>@sshleifer could you please repost the command? The web page it pointed to does not exist anymore.<|||||>https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/distil_marian_no_teacher.sh<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale<|||||>unstale
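A rough sketch of step 1 for a Marian-style model; every size here is a made-up placeholder, and the tokenizer for the custom language pair is assumed to be trained separately:

```python
from transformers import MarianConfig, MarianMTModel

config = MarianConfig(
    vocab_size=32000,   # must match your custom tokenizer
    d_model=512,
    encoder_layers=6,
    decoder_layers=6,
)
model = MarianMTModel(config)               # randomly initialized, no pretraining
model.save_pretrained("my-scratch-marian")  # local dir to pass to the fine-tuning command
```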
transformers
4,582
closed
Improve model card for Tereveni-AI/gpt2-124M-uk-fiction
Add language metadata, training and evaluation corpora details. Add example output. Fix inconsistent use of quotes.
05-25-2020 21:30:11
05-25-2020 21:30:11
transformers
4,581
closed
[GPT2, CTRL] Allow input of input_ids and past of variable length
## Description This PR reverts the automatic cutting of input ids introduced in PR https://github.com/huggingface/transformers/pull/3734 and fixes issue https://github.com/huggingface/transformers/issues/4368. Currently, when `past` is used in combination with `input_ids`, the `input_ids` are cut to just the last token. This breaks certain functionality, as explained in issue #4368. Also, the documentation is made more precise for GPT2 and CTRL. ## Backward Compatibility This PR slightly breaks backward compatibility, since `input_ids` now have to be passed consistently with `past` and are **not** cut automatically, for example during automatic language generation. So the functionality is as it was before version 2.8.0.
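A small sketch of the contract after this change, written against a recent version of the library (where the cache is exposed as `past_key_values`): the caller passes only the new token together with the cached state, since the model no longer slices `input_ids` automatically.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer.encode("Hello, my dog", return_tensors="pt")

with torch.no_grad():
    out = model(ids, use_cache=True)
    past = out.past_key_values                                     # cache for all tokens seen so far
    next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    out2 = model(next_token, past_key_values=past)                 # pass only the new token
```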
05-25-2020 19:44:24
05-25-2020 19:44:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=h1) Report > Merging [#4581](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4c6b21805647f3a96737a50390a4c3e9463d8ef7&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4581/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4581 +/- ## ======================================= Coverage 78.09% 78.09% ======================================= Files 123 123 Lines 20617 20596 -21 ======================================= - Hits 16100 16084 -16 + Misses 4517 4512 -5 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <ΓΈ> (+0.83%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <ΓΈ> (+0.01%)` | :arrow_up: | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.66% <ΓΈ> (+0.22%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=footer). Last update [4c6b218...8350637](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> LGTM, awesome! > > Should we fix some examples or pipeline accordingly? The generation method works fine, since `prepare_input_ids` for GPT2 and CTRL only took the last input_ids anyways. So all methods relying on `generate()` are fine including the pipeline and `run_generation` examples => so we should be good!
transformers
4,580
closed
LongformerForSequenceClassification
This PR adds `LongformerForSequenceClassification` @patrickvonplaten @ibeltagy All the changes here are as we discussed in `LongformerForQuestionAnswering`. The `forward` method automatically sets global attention on the CLS token.
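A brief usage sketch, assuming a recent version of the library; the checkpoint is the public base model, so the classification head starts out randomly initialized:

```python
import torch
from transformers import LongformerForSequenceClassification, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=2
)

inputs = tokenizer("A very long document goes here.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)   # global attention on the CLS token is set internally
print(outputs.logits.shape)     # (batch_size, num_labels)
```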
05-25-2020 18:22:39
05-25-2020 18:22:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=h1) Report > Merging [#4580](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8cc6807e8997b8b7404c07037bd02c578da98baf&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `92.85%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4580/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4580 +/- ## ========================================== + Coverage 78.03% 78.05% +0.02% ========================================== Files 124 124 Lines 20647 20688 +41 ========================================== + Hits 16111 16148 +37 - Misses 4536 4540 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.57% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `96.85% <92.85%> (-0.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=footer). Last update [8cc6807...a9afa7b](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This is great. Thanks, @patil-suraj<|||||>Is it better to use pooled output for sequence classification like in BertForSequenceClassification? @ibeltagy @patil-suraj ``` pooled_output = outputs[1] pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) ```<|||||>@leslyarun `LongformerClassificationHead` does the pooling <|||||>> @leslyarun > `LongformerClassificationHead` does the pooling That's great. Fine then πŸ‘ <|||||>Awesome thanks @patil-suraj! Merging<|||||>@patil-suraj Thanks for this! I'm working on a multi-task version of `LongformerForSequenceClassification`. For my context, why did you decide to implement pooling separately from the [pooling done in `LongformerModel`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longformer/modeling_longformer.py#L1377-L1383)? It seems like the key differences between the pooling done in `LongformerClassificationHead` vs. `LongformerPooler` are: 1. 
a dropout layer before the dense layer ([source](https://github.com/huggingface/transformers/blob/9ade58f0555430cec851e307c83c3a56c4a77d0b/src/transformers/models/longformer/modeling_longformer.py#L2004)) 2. additional dropout and dense layers ([source](https://github.com/huggingface/transformers/blob/9ade58f0555430cec851e307c83c3a56c4a77d0b/src/transformers/models/longformer/modeling_longformer.py#L2007-L2008)) I see that this mimics the [`RobertaForSequenceClassification` implementation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py#L1449-L1468). Is the goal to avoid the pooler parameters learned during pre-training a `LongformerModel`? I see that this topic has been discussed in general (https://github.com/huggingface/transformers/issues/1328), but I am curious to learn more specifically for Longformer!
transformers
4,579
closed
How to save tokenized data when training from scratch
# ❓ Questions & Help I am training ALBERT from scratch following the blog post by Hugging Face. It mentions that: > If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step. How can this be done? Any suggestions? As of now, using the method given in the notebook: ``` from transformers import TextDataset dataset = TextDataset( tokenizer=tokenizer, file_path="./oscar.eo.txt", block_size=128, ) ``` there is no method to save the tokenized data. Can anyone suggest how to save it, as it's already taking long enough before training starts?
05-25-2020 17:16:45
05-25-2020 17:16:45
There is a method to save the tokenizer. Check this notebook: https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb <|||||>That's what I am using. It saves it in the `dataset` variable, not in any file. By tokenized data I mean pretraining data.<|||||>You can look at serialization practices; you should be able to do it with torch at least. https://huggingface.co/transformers/serialization.html#serialization-best-practices<|||||>Well, that's for all the required model files. I am not getting how to save the pretraining data.<|||||>Once you have your data you can pickle it or use `torch.save` to save it to your disk and reload it later.<|||||>That worked, @LysandreJik, thanks! I am still not getting how you can prepare pretraining data on the fly while training. I have a large training set and don't want to wait until it gets prepared before training.<|||||>Have you taken a look at PyTorch's Dataset/Dataloader utilities? I recommend taking a look at [loading huge data functionality](https://discuss.pytorch.org/t/loading-huge-data-functionality/346) or [how to use a dataset larger than memory](https://discuss.pytorch.org/t/how-to-use-dataset-larger-than-memory/37785/8). I personally prefer using IterableDatasets when loading large files, as I find the API easier to use to limit large memory usage. This [tutorial](https://medium.com/swlh/how-to-use-pytorch-dataloaders-to-work-with-enormously-large-text-files-bbd672e955a0) is interesting on that subject.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
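Putting the advice from this thread together, a minimal sketch of caching the tokenized dataset to disk so the preprocessing only runs once (the cache file name is just an example, and `tokenizer` is the one built earlier in the thread):

```python
import os

import torch
from transformers import TextDataset

cache_path = "oscar_eo_block128.pt"  # example cache file name

if os.path.exists(cache_path):
    dataset = torch.load(cache_path)
else:
    dataset = TextDataset(
        tokenizer=tokenizer,         # tokenizer defined earlier in the thread
        file_path="./oscar.eo.txt",
        block_size=128,
    )
    torch.save(dataset, cache_path)
```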
transformers
4,578
closed
Create model card
05-25-2020 16:36:14
05-25-2020 16:36:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=h1) Report > Merging [#4578](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4578/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4578 +/- ## ========================================== - Coverage 77.87% 77.86% -0.01% ========================================== Files 123 123 Lines 20566 20566 ========================================== - Hits 16016 16014 -2 - Misses 4550 4552 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ΓΈ)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=footer). Last update [a34a989...03b9ffc](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,577
closed
Using whole word masking on training LM from scratch
# ❓ Questions & Help ## Details Hello everyone, I wanted to use _whole-word-masking_ in training LM from scratch. I could not have found how to apply this option using Trainer. I thought this option should be managed in "class DataCollatorForLanguageModeling", but I could not find options for _whole-word-masking._ Am I looking at wrong place OR it is not implemented yet? If not, is it possible to do with run_language_modeling.py? **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62061578/how-to-use-whole-word-masking-on-training-lm-from-scratch Any help is appreciated! Thanks
05-25-2020 14:31:34
05-25-2020 14:31:34
I think it's not implemented yet. @julien-c any suggestions/thoughts for pretraining with wwm?<|||||>NVIDIA/Megatron-LM does wwm on the fly in `__getitem__`. We can do something similar in DataCollatorForLanguageModeling or in the dataset: https://github.com/NVIDIA/Megatron-LM/blob/22c0e300670672e4e0a8604bd6ab89bc28c970a6/megatron/data/bert_dataset.py#L148<|||||>Thanks for the suggestion, I'll look into it.<|||||>@usuyama The Megatron example is for the BERT dataset, which uses WordPiece tokenization. Any suggestions on how to do wwm for the GPT-2 tokenizer?<|||||>related #6491<|||||>Check this if you're still looking for an answer: https://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/data/data_collator.py#L301
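Until a built-in collator exists, here is a rough sketch of what on-the-fly whole-word masking could look like for a WordPiece tokenizer. The 15% rate, the `##` continuation convention, and the use of -100 as the ignored label are assumptions for illustration; this is not the library's implementation:

```python
import random
import torch

def whole_word_mask(input_ids, tokenizer, mlm_probability=0.15):
    """Mask whole words (a token plus its '##' continuations) in one example, in place."""
    tokens = tokenizer.convert_ids_to_tokens(input_ids.tolist())

    # Group WordPiece indices into word spans, skipping special tokens.
    word_spans = []
    for i, tok in enumerate(tokens):
        if tok in tokenizer.all_special_tokens:
            continue
        if tok.startswith("##") and word_spans:
            word_spans[-1].append(i)
        else:
            word_spans.append([i])

    labels = torch.full_like(input_ids, -100)  # positions with -100 are ignored by the LM loss
    for span in word_spans:
        if random.random() < mlm_probability:
            for i in span:
                labels[i] = input_ids[i]
                input_ids[i] = tokenizer.mask_token_id
    return input_ids, labels
```

A custom data collator (or a dataset `__getitem__`, as Megatron does) could call this per example before batching.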
transformers
4,576
closed
OSError: Model name 'transfo-xl-wt103' was not found in tokenizers model name list (transfo-xl-wt103). We assumed 'transfo-xl-wt103' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.bin', 'vocab.txt'] but couldn't find such vocabulary files at this path or url.
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.10.0 - Platform: ubantu - Python version:3.6 - PyTorch version (GPU?):yes - Tensorflow version (GPU?):yes - Using GPU in script?:yes - Using distributed or parallel set-up in script?:
05-25-2020 14:22:09
05-25-2020 14:22:09
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,575
closed
Onnx notebook problem
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I saw the issue related to mine in #260 so I changed env to python 3.6 and torch 1.1 but I didn't help. When I run onnx notebook I get an error [TypeError: export() got an unexpected keyword argument 'dynamic_axes'] Does anyone have guess what's wrong? <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
05-25-2020 14:19:37
05-25-2020 14:19:37
You can try a newer version of PyTorch (1.3 to 1.5); that should resolve the problem.<|||||>Hi, thanks for the reply. I tried with PyTorch 1.4 but I got another error, shown below ![image](https://user-images.githubusercontent.com/44370759/83090959-fad51180-a0d4-11ea-8db7-53b96e20e0c3.png) Do you have any idea about this one? Thanks! <|||||>@amy-hyunji, this option (use_external_data_format) needs PyTorch 1.5. The option is not needed for models < 2GB. If you do not want to upgrade to PyTorch 1.5, you can install transformers from source and modify convert_graph_to_onnx.py (by removing that parameter from the onnx.export call).<|||||>@tianleiwu Thanks a lot :)
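For reference, a minimal sketch of the kind of export call involved. The model, file names, and axis names are placeholders; the point is that `dynamic_axes` needs a reasonably recent PyTorch (the thread above suggests 1.3+), while `use_external_data_format` only exists from PyTorch 1.5 and can simply be omitted for models under 2GB:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer.encode_plus("Hello world", return_tensors="pt")

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "bert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "last_hidden_state": {0: "batch", 1: "sequence"},
    },
    opset_version=11,
    # use_external_data_format=True,  # PyTorch >= 1.5 only; omit on older versions
)
```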
transformers
4,574
closed
Fix longformer attention mask type casting when using apex
Fix for issue [#4525](https://github.com/huggingface/transformers/issues/4525).
05-25-2020 12:40:22
05-25-2020 12:40:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=h1) Report > Merging [#4574](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b86e42e0ac1b59f21f0eccf351d3346bbe3ed4eb&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4574/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4574 +/- ## ========================================== - Coverage 78.09% 78.08% -0.01% ========================================== Files 123 123 Lines 20624 20624 ========================================== - Hits 16106 16105 -1 - Misses 4518 4519 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.40% <100.00%> (ΓΈ)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=footer). Last update [b86e42e...8528090](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,573
closed
Transformers' trainer sequence classification problem
# ❓ Transformers' trainer sequence classification problem ## Details I wanted to use `XLMRobertaForSequenceClassification` to classify a sequence into `1` or `0`. ```python MODEL_NAME = 'xlm-roberta-base' def multilingual_model(max_seq_length=SEQUENCE_LENGTH, trainable=False): """Build and return a multilingual BERT model and tokenizer.""" model = XLMRobertaForSequenceClassification.from_pretrained( MODEL_NAME, num_labels = 2, output_attentions = False, output_hidden_states = False, ) return model ``` The trainer is ```python from transformers import Trainer model = multilingual_model() trainer = Trainer( model=model, args=training_args, train_dataset=part_train_dataset, eval_dataset=part_valid_dataset, compute_metrics=compute_metrics) ``` `training_args` ```python from transformers import TrainingArguments BATCH_SIZE = 32 DEVICE = torch.device("cpu") training_args = TrainingArguments("/kaggle/working") training_args.do_train = True training_args.evaluate_during_training = True training_args.adam_epsilon = 1e-8 training_args.learning_rate = 1e-5 training_args.per_gpu_train_batch_size = BATCH_SIZE training_args.num_train_epochs=TRAIN_EPOCH ``` `compute_metrics` ```python from transformers import EvalPrediction from typing import Dict import numpy as np def compute_metrics(p: EvalPrediction) -> Dict: preds = np.argmax(p.predictions, axis=1) return metrics.roc_auc_score(preds, p.label_ids) ``` An exerpt of `part_train_dataset` ``` [InputFeatures(input_ids=[0, 99070, 1159, 11050, 8108, 398, 6244, 7, 10932, 98, 759, 4488, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], token_type_ids=None, label=1), InputFeatures(input_ids=[0, 28192, 2367, 83, 442, 22120, 2367, 83, 442, 142, 97629, 21115, 111, 3060, 102172, 20397, 761, 7, 2750, 621, 4127, 99, 163684, 214, 15970, 6, 140545, 297, 7398, 1419, 2750, 2], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], token_type_ids=None, label=1) ``` Similarly, one of `part_valid_dataset`: ``` [InputFeatures(input_ids=[0, 99070, 1159, 11050, 8108, 398, 6244, 7, 10932, 98, 759, 4488, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], token_type_ids=None, label=1), InputFeatures(input_ids=[0, 28192, 2367, 83, 442, 22120, 2367, 83, 442, 142, 97629, 21115, 111, 3060, 102172, 20397, 761, 7, 2750, 621, 4127, 99, 163684, 214, 15970, 6, 140545, 297, 7398, 1419, 2750, 2], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], token_type_ids=None, label=1), ``` When running `trainer.train()`, I received the following error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-11-3435b262f1ae> in <module> ----> 1 trainer.train() /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path) 380 continue 381 --> 382 tr_loss += self._training_step(model, inputs, optimizer) 383 384 if (step + 1) % self.args.gradient_accumulation_steps == 0 or ( /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer) 465 inputs[k] = v.to(self.args.device) 466 --> 467 outputs = model(**inputs) 468 loss = outputs[0] # model outputs are always tuple in transformers 
(see doc) 469 /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) /opt/conda/lib/python3.7/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels) 355 else: 356 loss_fct = CrossEntropyLoss() --> 357 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) 358 outputs = (loss,) + outputs 359 /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 930 def forward(self, input, target): 931 return F.cross_entropy(input, target, weight=self.weight, --> 932 ignore_index=self.ignore_index, reduction=self.reduction) 933 934 /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2315 if size_average is not None or reduce is not None: 2316 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2317 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2318 2319 /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 2113 .format(input.size(0), target.size(0))) 2114 if dim == 2: -> 2115 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2116 elif dim == 4: 2117 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: expected scalar type Long but found Float ``` which does not exist if `num_labels` is 1. From `transformers`'s github, it seems that 2 labels is standard for binary classification. Beside how to fix the error, I wanted to ask why there are zeroes in `attention_mask` in `part_train`/`valid_dataset` [**Link to original question on Stack Overflow**:](https://stackoverflow.com/questions/61987904/transformers-trainer-sequence-classification-problem)
05-25-2020 12:28:05
05-25-2020 12:28:05
I ended up training the vanilla way (a plain training loop) instead.<|||||>I think compute_metrics should return a dictionary mapping metric names (strings) to metric values. That is how it is described in the docstring of the Trainer's train function.
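For anyone hitting the same errors: the `RuntimeError: expected scalar type Long but found Float` comes from float labels, and `compute_metrics` is expected to return a dict. A small sketch of both fixes, reusing the names from the question (argument order for `roc_auc_score` is true labels first, assuming scikit-learn):

```python
import numpy as np
from sklearn import metrics
from typing import Dict
from transformers import EvalPrediction

def compute_metrics(p: EvalPrediction) -> Dict:
    preds = np.argmax(p.predictions, axis=1)
    # Trainer expects a dict of metric name -> value, not a bare float.
    return {"roc_auc": metrics.roc_auc_score(p.label_ids, preds)}

# And make sure the labels stored in the InputFeatures are ints, e.g.:
# InputFeatures(input_ids=..., attention_mask=..., label=int(label))
```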
transformers
4,572
closed
Typo in GPT2 documentation
# 🐛 Bug In the GPT2 documentation [page](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2lmheadmodel), in the parameters of both the LMHead and DoubleHeads models, the `inputs_embeds` argument is documented as `input_embeds`, which leads to an error if the documented name is used.
05-25-2020 11:48:08
05-25-2020 11:48:08
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
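For future reference, a tiny sketch of the keyword as the model code actually expects it (the checkpoint name is just an example):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
embeds = model.transformer.wte(input_ids)   # build the input embeddings manually

outputs = model(inputs_embeds=embeds)       # note: inputs_embeds, not input_embeds
```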
transformers
4,571
closed
cannot import name 'TFElectraModel' from 'transformers'
# 🐛 Bug Hi, thanks for your nice NLP tool. However, when I use transformers on macOS to load the ELECTRA model I get an import error > ImportError: cannot import name 'TFElectraModel' from 'transformers' How can I fix this issue?
05-25-2020 09:59:12
05-25-2020 09:59:12
Hello! Which version of the lib do you use?<|||||>Hello! I think it's a problem with your GPU rather than with transformers. Check whether "Failed to load the native TensorFlow runtime." appears once TensorFlow is imported.<|||||>That's probably because you don't have `tensorflow>=2.0` installed while you're trying to load a TensorFlow model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
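In practice the quick check looks something like this (version numbers are indicative, and this assumes a transformers release that already includes ELECTRA):

```python
# The TF* model classes are only exposed when TensorFlow 2.x is importable:
#   pip install "tensorflow>=2.0" transformers

import tensorflow as tf
print(tf.__version__)  # should be >= 2.0 and import without errors

from transformers import TFElectraModel
model = TFElectraModel.from_pretrained("google/electra-small-discriminator")
```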
transformers
4,570
closed
Model card: Updated the link to the paper
The conference has changed the link to the paper, so I updated it.
05-25-2020 08:26:03
05-25-2020 08:26:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=h1) Report > Merging [#4570](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4570/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4570 +/- ## ========================================== - Coverage 77.87% 77.87% -0.01% ========================================== Files 123 123 Lines 20566 20566 ========================================== - Hits 16016 16015 -1 - Misses 4550 4551 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4570/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4570/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4570/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=footer). Last update [a34a989...78cf772](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,569
closed
bert embedding make OOM in albert
# ❓ Questions & Help ## Details hi, i made item collaborative filtering model by albert model. i set vocab size to 600k, it made OOM. (after initialized model, memory used 3 GB. but During initialization, it takes 11 GB) i tracked source code and i found this: albert model initialize albert word embedding layer after initializing bert word embedding layer. embedding layer is initialized twice. so it made OOM. - bert word embedding : vocab size x hidden layer size(4048) - albert word embedding : vocab size x embedding size(128) => bert word embedding free is there any problem if i fix AlbertEmbeddings to nn.Module? thanks ` class AlbertEmbeddings(BertEmbeddings): """ Construct the embeddings from word, position and token_type embeddings. """ def __init__(self, config): super().__init__(config) self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size, padding_idx=config.pad_token_id) self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size) self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size) self.LayerNorm = torch.nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps) ` ` class BertEmbeddings(nn.Module): """Construct the embeddings from word, position and token_type embeddings. """ def __init__(self, config): super().__init__() self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load # any TensorFlow checkpoint file self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout_prob) `
05-25-2020 07:36:15
05-25-2020 07:36:15
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
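For future readers, a rough sketch of the workaround discussed above: building the ALBERT-style embeddings directly on `nn.Module` so the BERT-sized (vocab x hidden) tables are never allocated. This only illustrates the idea from the issue; it is not the library's fix, and the config attribute names assume an Albert-style config:

```python
import torch
import torch.nn as nn

class LeanAlbertEmbeddings(nn.Module):
    """Albert-style embeddings that only ever allocate (vocab x embedding_size) tables."""

    def __init__(self, config):
        super().__init__()
        self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size,
                                            padding_idx=config.pad_token_id)
        self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size)
        self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size)
        self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, input_ids, token_type_ids=None, position_ids=None):
        seq_length = input_ids.size(1)
        if position_ids is None:
            position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
            position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
        if token_type_ids is None:
            token_type_ids = torch.zeros_like(input_ids)
        embeddings = (self.word_embeddings(input_ids)
                      + self.position_embeddings(position_ids)
                      + self.token_type_embeddings(token_type_ids))
        return self.dropout(self.LayerNorm(embeddings))
```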
transformers
4,568
closed
❓ [BART] Different embedding sizes between pre-trained / fine-tuned checkpoint
# ❓ Questions & Help Running this code : ```python from transformers import BartModel x = BartModel.from_pretrained('bart-large') x2 = BartModel.from_pretrained('bart-large-cnn') print(x.shared) print(x2.shared) ``` Gives : >Embedding(50265, 1024, padding_idx=1) Embedding(50264, 1024, padding_idx=1) --- Why the vocabulary size is different ? Isn't it supposed to be the same ? Is it just from the original authors' checkpoint ? @sshleifer
05-25-2020 07:24:40
05-25-2020 07:24:40
Good catch. There is no mask token in the second checkpoint. I believe that is the same as in the authors' implementation. Completely off topic: if you still have the xsum data you used, I would love a copy. I'm sam [at] huggingface.co.<|||||>Thanks for your fast answer! Do you know why there is no mask token in the second checkpoint? And does it have any impact on the score?<|||||>I have a hunch that there is no `<mask>` token because of fairseq's `--find-unused-parameters` command-line arg, but I'm not certain. I would guess no impact on score because `<mask>` does not show up in the fine-tuning data.
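If you ever do need a row for `<mask>` in the fine-tuned checkpoint (for example, to continue denoising pre-training), one option is to grow the embedding matrix to match the tokenizer. A hedged sketch; the newly added row is randomly initialized, so this is not equivalent to the original pretrained mask embedding:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("bart-large-cnn")

print(len(tokenizer), model.model.shared.num_embeddings)  # tokenizer may be one entry larger

# Resize so every tokenizer id (including <mask>) has an embedding row.
model.resize_token_embeddings(len(tokenizer))
```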
transformers
4,567
closed
❓ [BART] Why using bias for LM head if not trained ?
# ❓ Questions & Help As I understood, BART is not using a regular Linear layer as LM head, but instead reuse the weights of the shared embeddings. As show here, biases are added : https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_bart.py#L876 But these biases are registered as buffer, not as parameter. **Since they are not trained, they will always stay 0 ?** If they stay 0, what's the point of having bias at all ? @sshleifer
05-25-2020 05:39:48
05-25-2020 05:39:48
Great Q! The point of using them is that `MarianMTModel`, which inherits from `BartForConditionalGeneration`, uses them. You're correct that for the BART checkpoints they stay 0. If you think there is a comment or a different approach that would be clearer, I'm very open to a PR/other ideas.
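The buffer/parameter distinction in a nutshell. This is only an illustration of the mechanics, not the actual BART code:

```python
import torch
import torch.nn as nn

class TiedLMHead(nn.Module):
    def __init__(self, shared_embedding: nn.Embedding):
        super().__init__()
        self.shared = shared_embedding
        # Buffer: saved in the state dict and moved with .to(device), but it receives no
        # gradient, so it stays all-zero unless a checkpoint (e.g. Marian) overwrites it.
        self.register_buffer("final_logits_bias", torch.zeros(shared_embedding.num_embeddings))
        # A trainable variant would instead be:
        # self.final_logits_bias = nn.Parameter(torch.zeros(shared_embedding.num_embeddings))

    def forward(self, hidden_states):
        return nn.functional.linear(hidden_states, self.shared.weight, self.final_logits_bias)
```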
transformers
4,566
closed
variable name changes for Issue #4141
Hi Let me know if additional changes are required for Issue #4141. Thank you for this awesome repository.
05-25-2020 05:25:51
05-25-2020 05:25:51
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=h1) Report > Merging [#4566](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4566/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4566 +/- ## ======================================= Coverage 77.87% 77.87% ======================================= Files 123 123 Lines 20566 20569 +3 ======================================= + Hits 16016 16019 +3 Misses 4550 4550 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.20% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.82% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.25% <100.00%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.84% <100.00%> (+0.06%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `78.74% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `99.06% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.43% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.86% <100.00%> (ΓΈ)` | | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=footer). Last update [a34a989...e31fdfa](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi, I'm not sure we want to change this. I do agree that `adder` is more explicit than `attention_mask` and better models what it does. However, imo the API isn't limited to high-level modules but even to lower level modules such as `AlbertAttention` and `AlbertTransformer`. These modules may be used by users in their specific applications: `from transformers.modeling_albert import AlbertTransformer`. I think the gain here is not worth the loss of compatibility with all previous versions. What do you think @patrickvonplaten, @thomwolf, @julien-c ?<|||||>Yes, I agree...sorry @NSanjay I didn't think this fully through when answering here: https://github.com/huggingface/transformers/issues/4141#issuecomment-629875496 :-/ <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,565
closed
changing config.axial_pos_shape for 'ReformerModelWithLMHead' when fine-tuning
# ❓ Questions & Help I'm trying to fine-tune the Reformer for the language generation task and I padded the sequence lengths to be a multiple of least common multiple chunk_length 64, and now I'm asked to pad the sequence to 524288(512 * 1024), which will give me an out of memory error. I would like to know a workaround for this, since the error message also gives an alternative to 'pad_to_max_length', which is 'changing config.axial_pos_shape' and specially since this is known to be a memory efficient transformer. Thank you. **A link to original question on Stack Overflow**: [https://stackoverflow.com/questions/61986452/fine-tuning-reformer-gives-out-of-memory-error-when-sequence-length-is-padded-t](https://stackoverflow.com/questions/61986452/fine-tuning-reformer-gives-out-of-memory-error-when-sequence-length-is-padded-t)
05-25-2020 02:34:08
05-25-2020 02:34:08
I would not recommend to set `axial_pos_shape` to (512 * 1024). In the notebook I just used that to demonstrate how far the limits can be pushed for Reformer. Half a million token is extremely long and usually unnecessary. Make sure you have read and understood how AxialPostionEmbeddings work: https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings . For "normal" language modeling it might make much more sense to start from the Reformer-wiken8 model and finetune it: https://huggingface.co/google/reformer-enwik8<|||||>Greetings, Would fine tuning https://huggingface.co/google/reformer-enwik8 work normally with run_language_modeling.py script? Thanks<|||||>Hmm, for the most part but you will have to define your own tokenzer function as can be seen here: https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8 <|||||>So instead of sticking to the script, I would recommend slightly changing this notebook: https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb. Instead of creating the dataset by using a tokenizer, you should use the function linked above. Does that make sense? Also linking: https://github.com/huggingface/transformers/pull/4480. If someone has an easy script for Reformer Char LM it'd be great to post it here or add a notebook. <|||||>Ok, thanks. So the function _flatten_and_tokenize_ (in the notebook) shall be replaced by the _encode_ function (in the enwik8 model card), am I following right?<|||||>> I would not recommend to set `axial_pos_shape` to (512 * 1024). In the notebook I just used that to demonstrate how far the limits can be pushed for Reformer. Half a million token is extremely long and usually unnecessary. > I've been using 'google/reformer-crime-and-punishment' model from [https://huggingface.co/transformers/model_doc/reformer.html#reformermodelwithlmhead](url) I get this error after I padded the sequence lengths to be a multiple of least common multiple chunk_length 64. ``` ... for epoch in range(EPOCHS): print(f"EPOCH {epoch} started" + '=' * 30) for idx,article in tqdm_notebook(enumerate(article_loader)): article_tens = tokenizer.encode(article[0], return_tensors='pt').to(device) print(article_tens.shape) #multiple of least common multiple chunk_length 64. pads_to_be_filled=getNoOfPads(article_tens.size()[1]) padded_tens= torch.cat((article_tens[0],Variable(torch.zeros((pads_to_be_filled),dtype=torch.long).cuda())) ) print(padded_tens.unsqueeze(0).shape) outputs = model(padded_tens.unsqueeze(0), labels=padded_tens.unsqueeze(0))[0] ... ``` ``` EPOCH 0 started============================== 0/? [00:00<?, ?it/s] torch.Size([1, 131]) torch.Size([1, 192]) --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-11-81c445097515> in <module>() 29 print(padded_tens.unsqueeze(0).shape) 30 ---> 31 outputs = model(padded_tens.unsqueeze(0), labels=padded_tens.unsqueeze(0))[0] 32 print(outputs) 33 7 frames /usr/local/lib/python3.6/dist-packages/transformers/modeling_reformer.py in forward(self, position_ids) 127 reduce(mul, self.axial_pos_shape) == sequence_length 128 ), "If training, make sure that config.axial_pos_shape factors: {} multiply to sequence length. Got prod({}) != sequence_length: {}. 
You might want to consider padding your sequence length to {} or changing config.axial_pos_shape.".format( --> 129 self.axial_pos_shape, self.axial_pos_shape, sequence_length, reduce(mul, self.axial_pos_shape) 130 ) 131 if self.dropout > 0: AssertionError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 192. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape. ``` >If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 384. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape. So I guess that is because its default set to (512, 1024), and if so, how can I change it to a smaller value? ReformerConfig { "architectures": [ "ReformerModelWithLMHead" ], "attention_head_size": 64, "attention_probs_dropout_prob": 0.1, "attn_layers": [ "local", "lsh", "local", "lsh", "local", "lsh" ], "axial_norm_std": 1.0, "axial_pos_embds": true, "axial_pos_embds_dim": [ 64, 192 ], "axial_pos_shape": [ 512, 1024 ], "chunk_size_feed_forward": 0, "chunk_size_lm_head": 0, "eos_token_id": 2, "feed_forward_size": 512, "hash_seed": null, "hidden_act": "relu", "hidden_dropout_prob": 0.05, "hidden_size": 256, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": true, "layer_norm_eps": 1e-12, "local_attention_probs_dropout_prob": 0.05, "local_attn_chunk_length": 64, "local_num_chunks_after": 0, "local_num_chunks_before": 1, "lsh_attention_probs_dropout_prob": 0.0, "lsh_attn_chunk_length": 64, "lsh_num_chunks_after": 0, "lsh_num_chunks_before": 1, "max_position_embeddings": 524288, "model_type": "reformer", "num_attention_heads": 2, "num_buckets": [ 64, 128 ], "num_chunks_after": 0, "num_chunks_before": 1, "num_hashes": 1, "num_hidden_layers": 6, "output_past": true, "pad_token_id": 0, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 100 } }, "vocab_size": 320 } Given above is the default configuration of the model before training/finetuning > For "normal" language modeling it might make much more sense to start from the Reformer-wiken8 model and finetune it: https://huggingface.co/google/reformer-enwik8 Will try that too. Thank you. <|||||>yeah the google/crime-and-punishment is not a good model for fine-tuning. It assumes you use a sequence length of > 500K tokens, which is not really reasonable.<|||||>> Ok, thanks. So the function _flatten_and_tokenize_ (in the notebook) shall be replaced by the _encode_ > function (in the enwik8 model card), am I following right? exactly. You should be able to just enwik8 function I linked above. The enwik8 model has a maximum length of ~65K tokens, which is very long but very feasible for reformer.<|||||>> yeah the google/crime-and-punishment is not a good model for fine-tuning. It assumes you use a sequence length of > 500K tokens, which is not really reasonable. Oh okay. Thank you very much for the clarification. Will try finetuning reformer-enwik8.<|||||>It would be awesome if you could upload your training script here - people seem very interested in it :-) <|||||> @patrickvonplaten, Sure, will do when everything is sorted.<|||||>> > > > Ok, thanks. So the function _flatten_and_tokenize_ (in the notebook) shall be replaced by the _encode_ > > function (in the enwik8 model card), am I following right? > > exactly. 
You should be able to just use the enwik8 function I linked above. The enwik8 model has a maximum length of ~65K tokens, which is very long but very feasible for Reformer. From the notebook, I am struggling to adapt the DataCollator; how should it be defined in this context? Thanks<|||||>Has anyone effectively fine-tuned the enwik8 pretrained model? Using Colab with a P100 GPU I was not able to load the model yet due to memory limitations<|||||>Unfortunately facing the same issue now.<|||||>Can you add a link to your notebook here @lucashueda?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@lucashueda Did you manage to fine-tune the enwik8 pretrained model, or other datasets? Would you mind sharing your Colab? <|||||>@epetros did you manage to perform the fine-tuning?<|||||>@patrickvonplaten any update on this? Or is there a notebook where we can pre-train this model on a wiki or other huge corpus ourselves and then fine-tune it on downstream tasks? <|||||>Hi, I'm trying to fine-tune the ReformerModelWithLMHead (google/reformer-enwik8) for NER. I used the same padding sequence length as in the encode method (max_length = max([len(string) for string in list_of_strings])) along with attention_masks. And I got this error: **ValueError:** If training, make sure that config.axial_pos_shape factors: (128, 512) multiply to sequence length. Got prod((128, 512)) != sequence_length: 2248. You might want to consider padding your sequence length to 65536 or changing config.axial_pos_shape. 1) When I changed the sequence length to 65536, my Colab session crashed because every input was padded to 65536 tokens. 2) As for the second option (changing config.axial_pos_shape), I cannot change it. I would like to know: is there any chance to change config.axial_pos_shape while fine-tuning the model? Or am I missing something in encoding the input strings for reformer-enwik8? Are there any additional steps to forward the input to the model after encoding? Thanks!
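To answer the recurring question above: `axial_pos_shape` is just a config attribute, but for a pretrained checkpoint it is tied to the pretrained axial position embedding weights, so you cannot shrink it and still reuse those weights; the practical options are padding to the checkpoint's expected length or re-learning the position embeddings. When training from scratch you are free to pick a smaller shape whose product matches your padded sequence length. A hedged sketch with illustrative numbers:

```python
from transformers import ReformerConfig, ReformerModelWithLMHead

# Training from scratch with a 4096-token context: 4096 = 64 * 64,
# which is also a multiple of the default chunk length (64).
config = ReformerConfig(
    axial_pos_shape=(64, 64),        # product must equal the (padded) sequence length
    axial_pos_embds_dim=(64, 192),   # must sum to hidden_size (256 here)
    hidden_size=256,
    max_position_embeddings=4096,
    vocab_size=320,
    is_decoder=True,
)
model = ReformerModelWithLMHead(config)
```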
transformers
4,564
closed
[Reformer] fix reformer num buckets
Fix the automatic setting of `num_buckets` by making sure `num_buckets` is always a power of 2 and is set as the default. The idea behind this whole function is that `num_buckets` should not be set by the user, but calculated on the fly to a good value before training (`num_buckets` ~ 2 * sequence length / chunk length, as recommended in the paper). This value will then be saved in the config and can be reapplied for inference.
05-24-2020 18:57:31
05-24-2020 18:57:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=h1) Report > Merging [#4564](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `14.28%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4564/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4564 +/- ## ========================================== - Coverage 77.87% 77.86% -0.02% ========================================== Files 123 123 Lines 20566 20569 +3 ========================================== Hits 16016 16016 - Misses 4550 4553 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.94% <14.28%> (-0.24%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ΓΈ)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=footer). Last update [a34a989...7b40493](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,563
closed
Decoding with DistilmBERT to generate text in different languages
Good day, and congrats on your great library. If I want to decode and get newly generated text with the GPT-2 heads, that works great, as you suggest:

```py
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
input_ids = torch.tensor(tokenizer.encode("Once upon a time there was")).unsqueeze(0)
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
greedy_output = model.generate(input_ids, max_length=50)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))
```

But my issue is that now I want to do the same with the smaller, simpler DistilmBERT model, which is also multilingual (104 languages), so that I can generate text in, for example, Spanish and English with this lighter model. So I do this:

```py
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-multilingual-cased')
model = DistilBertForMaskedLM.from_pretrained('distilbert-base-multilingual-cased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
```

But now, how do I get the continuation of the phrase from that point? I tried to apply tokenizer.decode with no luck there. Thank you.
05-24-2020 18:35:39
05-24-2020 18:35:39
So I can get generation working well with distilgpt2; the thing is that I would like to do it multilingually, using the light multilingual model DistilmBERT (distilbert-base-multilingual-cased). Any tips? Thank you :)

```py
import torch
from transformers import *
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
input_ids = torch.tensor(tokenizer.encode("Once upon a time")).unsqueeze(0)
model = GPT2LMHeadModel.from_pretrained("distilgpt2", pad_token_id=tokenizer.eos_token_id)

greedy_output = model.generate(input_ids, max_length=50)  # greedy search

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=50,
    top_p=0.95,
    temperature=1,
    num_return_sequences=3
)

print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
```
<|||||>Hi, I took the liberty of editing your comments with triple backticks (```py) to be more readable. Unfortunately DistilmBERT can't be used for generation. This is due to the way the original BERT models were pre-trained, using masked language modeling (MLM). The model therefore attends to both the left and right contexts (tokens on the left and right of the token you're trying to generate), while for generation the model only has access to the left context. GPT-2 was trained with causal language modeling (CLM), which is why it can generate such coherent sequences. We implement the `generation` method only for CLM models, as MLM models do not generate anything coherent. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
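To make the MLM behaviour concrete: DistilmBERT can only fill in masked positions, it cannot continue a prefix. A small hedged sketch of getting its top suggestions for a single `[MASK]` (this is mask filling, not text generation):

```python
import torch
from transformers import DistilBertTokenizer, DistilBertForMaskedLM

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = DistilBertForMaskedLM.from_pretrained("distilbert-base-multilingual-cased")

text = "Hello, my dog is [MASK]."
input_ids = tokenizer.encode(text, return_tensors="pt")
mask_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()

with torch.no_grad():
    logits = model(input_ids)[0]           # shape: (1, seq_len, vocab_size)

values, indices = torch.topk(logits[0, mask_index], 5)
print(tokenizer.convert_ids_to_tokens(indices.tolist()))  # candidate fillers, not a continuation
```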
transformers
4,562
closed
implementation of transformers for abstractive summarization task
Hello, I am new to the whole NLP world and to PyTorch. I am trying to learn the concepts and that is taking some time for a rookie. I have a project to finish and I want to apply transformers and BERT to my abstractive summarization project. I tried to find an implementation tutorial on this topic but I could not find any. Do you have any suggestions for a clear implementation of a pre-trained model that I can fine-tune on my dataset to get some solid results? I am not looking for this just to finish the project but also to learn how to implement it, so I need a clear tutorial. Data: I am using 0.25 of the XSum dataset, so I have 45k news articles and their one-sentence summaries. Thank you in advance.
05-24-2020 17:28:58
05-24-2020 17:28:58
@sshleifer might be able to help you here. @sshleifer - hope it's fine that I link you here :-) <|||||>This might also help: https://github.com/huggingface/transformers/pull/4539/files?short_path=3a2ba7b#diff-3a2ba7b492f00029d14cec3994b73ac7<|||||>> This might also help: https://github.com/huggingface/transformers/pull/4539/files?short_path=3a2ba7b#diff-3a2ba7b492f00029d14cec3994b73ac7 Thank you very much. I will look into it, try to implement it, and let you know about the result!<|||||>> This might also help: https://github.com/huggingface/transformers/pull/4539/files?short_path=3a2ba7b#diff-3a2ba7b492f00029d14cec3994b73ac7 It seems to be working! Thank you!
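Since this question comes up a lot, here is a short hedged sketch of the inference side with a pretrained seq2seq summarizer (the linked PR covers fine-tuning; the checkpoint and generation settings below are only illustrative, and the API shown assumes a recent transformers version):

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("bart-large-cnn")

article = "..."  # one of your XSum news articles
inputs = tokenizer.encode(article, return_tensors="pt", max_length=1024, truncation=True)

summary_ids = model.generate(inputs, num_beams=4, max_length=60, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Fine-tuning this on your 45k XSum subset (as in the linked example script) should then adapt it to one-sentence summaries.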
transformers
4,561
closed
Fix the example command for SQuAD
Issue #4549: added the missing argument `--do_lower_case` for reproducing the intended results.
05-24-2020 15:01:17
05-24-2020 15:01:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=h1) Report > Merging [#4561](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4561/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4561 +/- ## ========================================== - Coverage 77.87% 77.86% -0.02% ========================================== Files 123 123 Lines 20566 20566 ========================================== - Hits 16016 16013 -3 - Misses 4550 4553 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4561/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4561/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4561/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=footer). Last update [a34a989...b44c742](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Can you add the do_lower_case flag for all other instances in that README that use bert uncased? Thanks!<|||||>Sure. Presumably for the tf version, the script will handle the casings automatically.<|||||>From the code it would seem that way, but I am not sure actually. cc @thomwolf @LysandreJik: `do_lower_case` was missing in the commands to run squad with bert-base-uncased. Is this flag also necessary in the Tensorflow version? It's not present in the code, so I would assume not. The other changes LGTM!<|||||>Closed by #4245 (we still need to investigate why the lowercasing is not properly populated by the model's config)
transformers
4,560
closed
Albert Tokenizer hangs
# πŸ› Bug ## Information I am following the language modeling tutorial to train a LM on a simple wikipedia corpus from scratch. I am trying to use Albert instead of Roberta. As I couldn't find information on how to train an Albert Tokenizer from scratch, I'm loading the albert-base-v2 tokenizer. The Dataset creation doesn't work, it hangs for ages and when I stop it, I can see that it is always stuck in tokenization_albert.py, line 193: ```python outputs = "".join([c for c in outputs if not unicodedata.combining(c)]) ``` A week ago, it crashed consistently in this line due to large RAM allocations, but I can't reproduce that behaviour right now. ## To reproduce Steps to reproduce the behavior: ```python from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2") from transformers import TextDataset train_set = TextDataset( tokenizer=tokenizer, file_path="./drive/My Drive/datasets/simplewiki_train.txt", block_size=128, ) ``` ## Expected behavior I expected the tokenizer to run through in less than an hour for 100MB input. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.10.0 - Platform: Colab - Python version: 3.6 Is anyone else experiencing this? I read in another issue that Albert should work with run_language_modeling out of the box.
05-24-2020 14:38:51
05-24-2020 14:38:51
Does this also happen when you use the slow tokenizers?

```python
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2", use_fast=False)
```
<|||||>Thanks for the suggestion @BramVanroy. I just tried this and the tokenizer has now been running for 70 minutes, so I think that's a yes, it also happens when I use slow mode.<|||||>cc @mfuntowicz AlbertTokenizer seems to hang in fast mode but not in slow mode<|||||>Hi, there is no fast mode for `AlbertTokenizer`; it's a SentencePiece-based tokenizer which is not currently supported by `tokenizers`. Do you think you can find more information about where the tokenizer actually hangs? Can you reproduce the behavior with a shorter input?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,559
closed
XLnet loss and accuracy not decreasing
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): XLnet base cased Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) Can check the notebook below: https://colab.research.google.com/drive/132r5kb1G5oG0yi-qnymBsMBPCGP5Gu85 Can only give access to a few people. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Binary classification. ## To reproduce Steps to reproduce the behavior: Preprocessing: ``` from transformers import XLNetTokenizer from keras.preprocessing.sequence import pad_sequences tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased') encoded_sent = tokenizer.encode( text, # Sentence to encode. add_special_tokens = True, ) MAX_LEN = 128 encoded_sent = pad_sequences([encoded_sent], maxlen=MAX_LEN, dtype="long", value=0, truncating="post", padding="post") attention_masks=[] att_mask = [int(token_id > 0) for token_id in encoded_sent[0]] attention_masks.append(att_mask) ``` Model definition: ``` from transformers import BertForSequenceClassification, XLNetForSequenceClassification, AdamW, BertConfig # bert = BertForSequenceClassification.from_pretrained( # "bert-base-uncased", # num_labels = 2, . # output_attentions = False, # output_hidden_states = False, xlnet = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels = 2, output_attentions = False, output_hidden_states = False, ) class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.features2 = xlnet self.softmax = nn.LogSoftmax(dim=1) def forward(self, x2, x3): x2 = x2.to(torch.int64) x2 = self.features2(x2,x3)[0] x = self.softmax(x2) return x model = MyModel() torch.cuda.empty_cache() model.to('cuda') criterion = nn.CrossEntropyLoss() # Observe that all parameters are being optimized optimizer = optim.AdamW(model.parameters(), lr=0.0005) # Decay LR by a factor of 0.1 every 7 epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Loss and accuracy should decrease but are not changing at all( both training and valid). This script worked while training BERT model. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: Latest - Platform: Colab - Python version: 3.6 - PyTorch version (GPU?): Latest - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
05-24-2020 14:31:15
05-24-2020 14:31:15
I recommend changing your preprocessing to:

```python
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
encoded_sent = tokenizer.encode_plus(
    text,  # Sentence to encode.
    return_tensors='pt',
    max_length=128)

# ...
output = model(**encoded_sent)
```

However, the real problem is probably due to hyperparameters. You cannot simply use different models with the same hyperparameters and immediately expect results. You'll have to fiddle with the hyperparameters and find something that works for your case. <|||||>I am converting the text to a tensor in a later step. I also tried changing the lr, ranging from 0.05 to 5e-8, but the loss still did not change, and I also applied an lr scheduler. Maybe I should try optimizers other than AdamW?<|||||>Since this is not a bug I am closing this. It is impossible for us to help with this any further since hyperparameter optimization is different for each task. This is something that you have to test yourself. For a starting point, you can have a look at Table 8 in [the original paper](https://arxiv.org/pdf/1906.08237.pdf) where they suggest some good hyperparameter settings. But again, even then it depends on your specific case what would help and what wouldn't. Try and test!
transformers
4,558
closed
Add DistilBERT to supported run_language_modeling models
As per the code, distilbert is indeed supported by the `run_language_modeling.py` script, even though the README states otherwise.
05-24-2020 14:25:25
05-24-2020 14:25:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=h1) Report > Merging [#4558](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4558/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4558 +/- ## ========================================== - Coverage 77.87% 77.86% -0.01% ========================================== Files 123 123 Lines 20566 20566 ========================================== - Hits 16016 16014 -2 - Misses 4550 4552 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ΓΈ)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=footer). Last update [a34a989...ce96482](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks! That's correct
transformers
4,557
closed
Cleaner warning when loading pretrained models
Give more explicit logging messages when using the various `from_pretrained` methods in the lib. Also emit these messages as `logging.warning`, because this is a common source of silent mistakes. cc @BramVanroy Happy to improve the language further if people have advice.
05-24-2020 11:56:59
05-24-2020 11:56:59
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=h1) Report > Merging [#4557](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `86.95%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4557/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4557 +/- ## ======================================= Coverage 77.87% 77.87% ======================================= Files 123 123 Lines 20566 20580 +14 ======================================= + Hits 16016 16027 +11 - Misses 4550 4553 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <78.57%> (-0.63%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.70% <100.00%> (+0.03%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.56% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=footer). Last update [a34a989...0323876](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good stuff!
transformers
4,556
closed
Added reference to use for citing this model
05-24-2020 10:02:00
05-24-2020 10:02:00
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=h1) Report > Merging [#4556](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4556/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4556 +/- ## ========================================== + Coverage 77.87% 77.93% +0.05% ========================================== Files 123 123 Lines 20566 20566 ========================================== + Hits 16016 16028 +12 + Misses 4550 4538 -12 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.84% <0.00%> (-0.83%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `34.07% <0.00%> (+5.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=footer). Last update [a34a989...433d479](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,549
closed
Example script for SQuAD question answering unable to reproduce the claimed performance
# πŸ› Bug ## Information The example script for SQuAD question answering (`examples/question-answering/run-squad.py`) fails to produce the correct results as claimed in the tutorial. The correct performance is around f1 = 88.52, exact_match = 81.22 on SQuAD v1.1, but the script produces f1 = 81.97 and exact match = 73.80 instead. ## To reproduce Steps to reproduce the behavior: 1. Install with the latest commit (a34a989) 2. Download the SQuAD v1.1 dataset. 3. Run `examples/question-answering/run-squad.py`. with the exact same arguments as seen in the tutorial. ``` export SQUAD_DIR=/path/to/SQUAD python run_squad.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --do_train \ --do_eval \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` Following is the final result. 05/24/2020 16:10:09 - INFO - __main__ - ***** Running evaluation ***** 05/24/2020 16:10:09 - INFO - __main__ - Num examples = 10789 05/24/2020 16:10:09 - INFO - __main__ - Batch size = 8 Evaluating: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1349/1349 [01:31<00:00, 14.81it/s] 05/24/2020 16:11:41 - INFO - __main__ - Evaluation done in total 91.079697 secs (0.008442 sec per example) 05/24/2020 16:11:41 - INFO - transformers.data.metrics.squad_metrics - Writing predictions to: out-noamp/predictions_.json 05/24/2020 16:11:41 - INFO - transformers.data.metrics.squad_metrics - Writing nbest to: out-noamp/nbest_predictions_.json 05/24/2020 16:12:09 - INFO - __main__ - Results: {'exact': 73.80321665089878, 'f1': 81.96651715123286, 'total': 10570, 'HasAns_exact': 73.80321665089878, 'HasAns_f1': 81.96651715123286, 'HasAns_total': 10570, 'best_exact': 73.80321665089878, 'best_exact_thresh': 0.0, 'best_f1': 81.96651715123286, 'best_f1_thresh': 0.0} <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The script should produce f1 = 88.52, exact_match = 81.22. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.10.0 - Platform: Linux-4.15.0-99-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False
05-24-2020 08:13:05
05-24-2020 08:13:05
My results are different as well: "exact_match": 71.92999053926206 "f1": 80.70949484221217 My guess is that this occurs because we are not using a fixed seed. The runs are not deterministic so difference _will_ occur.<|||||>Possibly but the difference of 7~8 points in f1 and EM scores is way above the usual variance due to random seeds.<|||||>Found the bug. `--do_lower_case` was missing in the script arguments. Now the results are pretty close to the ones mentioned in the tutorial. 05/24/2020 23:50:04 - INFO - __main__ - Results: {'exact': 80.26490066225166, 'f1': 88.01726518927101, 'total': 10570, 'HasAns_exact': 80.26490066225166, 'HasAns_f1': 88.01726518927101, 'HasAns_total': 10570, 'best_exact': 80.26490066225166, 'best_exact_thresh': 0.0, 'best_f1': 88.01726518927101, 'best_f1_thresh': 0.0}<|||||>> Possibly but the difference of 7~8 points in f1 and EM scores is way above the usual variance due to random seeds. Unfortunately not. Have a look at these experiments by my friends over at NLP Town. They did sentiment analyses and ran the experiments ten times (each time with a different seed). https://www.linkedin.com/posts/nlp-town_sentimentanalysis-camembert-xlm-activity-6605379961111007232-KJy3 That being said, I do think you are right, good catch! <|||||>Closing this b/c #4245 was merged (we still need to investigate why the lowercasing is not properly populated by the model's config)
transformers
4,548
closed
[WIP] Replace instances of `config.output_hidden_states` with function argument `output_hidden_states` in all possible models.
Attempts to close #3879 by refactoring `config.output_hidden_states` as an argument, `output_hidden_states`, to the functions `forward()` and `call()`. Affects all PT and TF models that output hidden states. Currently it is failing the following tests: `run_tests_tf`, `run_tests_torch` and `run_tests_torch_and_tf`, because they are still using `config.output_hidden_states`. Please advise on how I should go about testing this.
05-24-2020 07:56:03
05-24-2020 07:56:03
Hey @drjosephliu - thanks so much for opening this PR! Sorry for being lazy here - could you check out the comments I added in PR #4538 - I think they apply 1-to-1 the same way here.<|||||>No problem. I'm a bit busy this week, but will aim to get it done by the end of the week.<|||||>> No problem. I'm a bit busy this week, but will aim to get it done by the end of the week. Sure, take your time :-) <|||||>I'm just in the middle of fixing up the tests and I've noticed that the `ReformerModel` has a different method signature than other models because it takes in `do_output_hidden_states`: ``` def forward( self, input_ids=None, attention_mask=None, position_ids=None, head_mask=None, inputs_embeds=None, num_hashes=None, do_output_hidden_states=False, do_output_attentions=False, ): ``` Should I be converting it to `output_hidden_states` here?<|||||>> I'm just in the middle of fixing up the tests and I've noticed that the `ReformerModel` has a different method signature than other models because it takes in `do_output_hidden_states`: > > ``` > def forward( > self, > input_ids=None, > attention_mask=None, > position_ids=None, > head_mask=None, > inputs_embeds=None, > num_hashes=None, > do_output_hidden_states=False, > do_output_attentions=False, > ): > ``` > > Should I be converting it to `output_hidden_states` here? yes please!<|||||>Before starting on the TF implementation, it might be a good idea how it is handled in `modeling_tf_bert.py` of PR: #4538 :-) <|||||>So all pytorch tests are passing, but I still haven't really figured out the TF ones. I copied what you did for `output_attention_heads` applied to `output_hidden_states` to TF Bert almost verbatim, but I'm still getting some failing tests<|||||>Hey @drjosephliu, This looks great already :-) We will probably have a lot of conflicts with master once #4538 is merged (I should have thought about this before ... ). Would it be ok for you to wait a couple of days on this PR until we merge #4538. Then I can rebase this PR to master and it will be easier to work from then on :-) Can you do the following change to the branch `hidden_states` of your fork, so that I can commit directly to your branch? :-) https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork If this doesn't work, maybe just send me a collaboration invite to @patrickvonplaten :-) Looking forward to merge this soon :-) <|||||>Hey, the checkbox "Allow edits by maintainer" is already checked - I guess I should leave it checked right?<|||||>Hey @drjosephliu, I tried to rebase to main, but it's just not reasonable. We have to many merge conflicts with `output_attentions` (in every file in every forward function) and not just for one commit, but for 7 commits :-/ At this point it would be much faster to just open a new PR. I should have seen this coming, so that you guys would not have started at the same time :-/ Super sorry about that! There are two solutions: 1) You can open a new PR and sadly starting from scratch (Recommended): If you look at the merged PR here: https://github.com/huggingface/transformers/pull/4538, you can see that we actually decided to keep `config.output_attentions` for now and only overwrite it with the arguments of the forward function. This means that the high-level forward functions all have the argument `output_hidden_states = None`. 
2) You can try to go through the whole rebase yourself (Not recommended): - In your repo, if you run `git rebase main/master` from this branch, you'll see a bunch of merge conflicts arising (and that's just for the first commit of this branch). You could try to solve them correctly one-by-one, but you have to be careful not to introduce new bugs here. So I strongly advise against this. 3) You don't feel like doing the same work again (Very much understandable :D). I would totally understand if you don't want to do the same work again. In this case I would re-open the issue to the community or do it myself. I would totally understand this - we have other "good first issue" tags. I hope you give it a try with 1) :-) Very sorry about the merge conflicts again - let me know what you think<|||||>Not a problem. I'm quite familiar with the codebase now so it shouldn't take too long.<|||||>Saw your new PR @drjosephliu ! Thanks a lot - it's great that you're tackling this :-) I will take a look tomorrow at the new PR :-)
transformers
4,547
closed
LongformerTokenizerFast
This PR adds `LongformerTokenizerFast` by sub-classing `RobertaTokenizerFast`. @patrickvonplaten
05-24-2020 06:22:48
05-24-2020 06:22:48
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=h1) Report > Merging [#4547](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4547/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4547 +/- ## ======================================= Coverage 77.87% 77.87% ======================================= Files 123 123 Lines 20566 20569 +3 ======================================= + Hits 16016 16019 +3 Misses 4550 4550 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4547/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ΓΈ)` | | | [src/transformers/tokenization\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4547/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbG9uZ2Zvcm1lci5weQ==) | `100.00% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4547/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ΓΈ)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=footer). Last update [a34a989...453d496](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM, but adding @mfuntowicz here, since I'm not very familiar with the FastTokenizers. @mfuntowicz - do we also have to apply changes on the Rust side for this? <|||||>LGTM if the tokenizer doesn't have any different pre-processing / post-processing than the current Roberta Tokenizer πŸ‘ <|||||>Great I think it's good to merge then :-)
transformers
4,546
closed
Fix two bugs on MNLI dataset and SST-2 respectively.
The text index of the SST-2 test data is 1 rather than 0. The labels of the MNLI task have a tricky swap in the dataset, which should also be reflected in the dataset's get_labels() for correctness.
05-24-2020 06:16:15
05-24-2020 06:16:15
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=h1) Report > Merging [#4546](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `42.85%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4546/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4546 +/- ## ========================================== - Coverage 77.87% 77.85% -0.02% ========================================== Files 123 123 Lines 20566 20568 +2 ========================================== - Hits 16016 16014 -2 - Misses 4550 4554 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.44% <0.00%> (-0.19%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.36% <60.00%> (+0.20%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=footer). Last update [a34a989...6e60ed9](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM
transformers
4,545
closed
pass lowercase to fast tokenizer
https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/tokenization_gpt2.py#L338-L343 only passes four parameters without `lowercase` to https://github.com/huggingface/tokenizers/blob/704cf3fdd2f607ead58a561b892b510b49c301db/bindings/python/tokenizers/implementations/byte_level_bpe.py#L15
05-24-2020 05:42:15
05-24-2020 05:42:15
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,544
closed
seems that run_ner.py cannot handle the situation when the example length exceeds max_length?
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
05-24-2020 03:37:17
05-24-2020 03:37:17
Hi! Do you mind filling-in the template? It will help us help you better!<|||||>> Hi! Do you mind filling-in the template? It will help us help you better! OK!I think this will be a great practice for me<|||||>Great, thanks
transformers
4,543
closed
Automatically setting number of LSH buckets in Reformer may give invalid value
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Reformer transformers version: 2.9.0 When using a Reformer model such that `config.num_buckets` is set to `None` (as recommended), the model automatically determines the number of necessary buckets. However, depending on some hyperparameters, it may compute an odd number of buckets, which is invalid. It happens at this line, because of the +1 in the second element: https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_reformer.py#L541 This triggers the assertion in `_hash_vectors`: https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_reformer.py#L454 I think a simple fix is just to check if the number is odd, and add one in that case.
05-24-2020 03:33:53
05-24-2020 03:33:53
Hi @erickrf, Thanks a lot for catching the error. The linked PR should solve it by making sure `num_buckets` is always a power of 2.
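To illustrate the fix described above (not necessarily the exact code merged in the linked PR), here is a minimal sketch of rounding the automatically computed bucket count up to the next power of two, which also guarantees it is even:

```python
def round_up_to_power_of_two(num_buckets: int) -> int:
    # Smallest power of two (>= 2) that is >= num_buckets, so the resulting
    # bucket count is always even and the assertion can no longer fire.
    power = 2
    while power < num_buckets:
        power *= 2
    return power

assert round_up_to_power_of_two(63) == 64
assert round_up_to_power_of_two(64) == 64
assert round_up_to_power_of_two(65) == 128
```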
transformers
4,542
closed
RuntimeError: The size of tensor a (1025) must match the size of tensor b (1024) at non-singleton dimension 3
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): gpt2 Language I am using the model on (English, Chinese ...): english The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I want to generate samples with length = 2000 ## To reproduce Steps to reproduce the behavior: 1. run the command python run_generation.py --model_type=gpt2 --model_name_or_path=<output_dir_of_finetuned_model> --length=2000 --num_return_sequences=10 --stop_token='<|endoftext|> ' ## Expected behavior The error that I am getting is w = torch.where(mask.bool(), w, self.masked_bias.to(w.dtype)) RuntimeError: The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3 ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Python 3.8 - Python version: - PyTorch version (GPU?): 10.1 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
05-24-2020 03:22:35
05-24-2020 03:22:35
A length of 2000 is too long for GPT-2, so this won't be possible. You can do two things here: 1) Chunk your generation: first produce a length of up to 1000, then use a bit of that (100 or so tokens) as context to generate the next 900 tokens, and repeat until you hit 2000. 2) Use this Reformer model: https://huggingface.co/google/reformer-enwik8, which can handle sequences up to 65000. Currently generation with Reformer is painfully slow though :-/ This should be improved in the coming weeks :-)
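A minimal sketch of the chunked-generation idea from option 1) above; the chunk sizes, sampling settings, and the plain `gpt2` checkpoint are placeholders, so swap in your fine-tuned model and tune the numbers for your use case.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # placeholder checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

generated = tokenizer.encode("Once upon a time", return_tensors="pt")
target_len = 2000   # total number of tokens wanted
carry_over = 100    # tokens re-used as context between chunks

while generated.shape[-1] < target_len:
    context = generated[:, -carry_over:]            # tail of what we have so far
    remaining = target_len - generated.shape[-1]
    out = model.generate(
        context,
        max_length=min(1024, context.shape[-1] + remaining),
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    new_tokens = out[:, context.shape[-1]:]         # keep only the newly generated part
    generated = torch.cat([generated, new_tokens], dim=-1)

print(tokenizer.decode(generated[0]))
```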
transformers
4,541
closed
'use_fast=True' results in 'TypeError' when trying to save tokenizer via AutoTokenizer
# πŸ› Bug ## Information Model I am using: all/any Language I am using the model on: English The problem arises when using: * [x] the official example scripts: `AutoTokenizer.from_pretrained([model], use_fast=True)` After updating to Transformers v2.10.0, when setting `use_fast=True` as in `tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)`, when trying to save the model by using `tokenizer.save_pretrained(path)` I get the following error and the process quits: ``` File "../python3.6/site-packages/transformers/tokenization_utils.py", line 1117, in save_pretrained vocab_files = self.save_vocabulary(save_directory) File "../python3.6/site-packages/transformers/tokenization_utils.py", line 2657, in save_vocabulary files = self._tokenizer.save(save_directory) File "../python3.6/site-packages/tokenizers/implementations/base_tokenizer.py", line 328, in save return self._tokenizer.model.save(directory, name=name) TypeError ``` When I omit the `use_fast=True` flag, the tokenizer saves fine. The tasks I am working on is: * [x] my own task or dataset: Text classification ## To reproduce Steps to reproduce the behavior: 1. Upgrade to `transformers==2.10.0` (requires `tokenizers==0.7.0`) 2. Load a tokenizer using `AutoTokenizer.from_pretrained()` with flag `use_fast=True` 3. Train for one epoch on any dataset, then try to save the tokenizer. ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The tokenizer/file should save into the chosen path, as it does with the regular tokenizer. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.10.0 - Platform: Linux-5.0.0-37-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.8 - PyTorch version (GPU?): 1.4.0+cu100 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
05-24-2020 03:21:06
05-24-2020 03:21:06
Hi @lingdoc, Thanks for reporting this. Unfortunately, I'm not able to reproduce currently ... Loading, then training and finally saving works as expected on my side, with various tokenizers.

```python
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)
# Some training ...
tokenizer.save_pretrained("test_bert_tokenizer")
('test_bert_tokenizer\\vocab.txt', 'test_bert_tokenizer\\special_tokens_map.json', 'test_bert_tokenizer\\added_tokens.json')
```

Can you give the path you're trying to save to? Just to check whether we're ending up with a `None` somewhere in `save_pretrained` that would explain the `TypeError` raised. Thanks!<|||||>Hm, this is really strange. I can't reproduce it either. Maybe something was wonky with my virtualenv - now it works fine! Next time I'll try a restart & run before I post.<|||||>Oops, I closed it too soon. I'm still getting the issue. Models I have tried: `bert-base-uncased` `distilbert-base-uncased` `google/electra-small-discriminator`<|||||>The path I am trying to save to is `"output/model_out"` - but it's generated using `Path()`, in case that makes a difference (not sure why it would make a difference for saving the `fast` tokenizer and not the regular one though).<|||||>Ok, that seems to be the issue after all - when I explicitly cast the `Path()`-generated path to `str`, it saves fine. I guess the regular tokenizer/save function does this somehow but the `fast` version doesn't.<|||||>Thanks for digging into this further. I'll check what the behaviour discrepancy is between both versions of the tokenizers when using `Path()` and I'll post here πŸ‘ <|||||>For what it is worth, I have this exact same issue with the `"distilroberta-base"` tokenizer when `use_fast=True`. Casting my `Path` object to a `str` solved the issue.
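Until the discrepancy between the slow and fast tokenizers is resolved, here is a minimal sketch of the workaround described above (casting the `Path` to `str` before saving); the `output/model_out` directory name simply mirrors the path mentioned in the thread:

```python
from pathlib import Path
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)

out_dir = Path("output") / "model_out"
out_dir.mkdir(parents=True, exist_ok=True)

# Passing the Path object directly raises TypeError for the fast tokenizer,
# so cast it to str before calling save_pretrained.
tokenizer.save_pretrained(str(out_dir))
```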
transformers
4,540
closed
InvalidArgumentError while using GRU layer in custom training loop
**System information** - TensorFlow version `2.1.0` - Python version: `3` - GPU model and memory: `NVIDIA Tesla P100` - CUDA Version: `10.1` - Environment: This happens both on Kaggle and Colab **Describe the current behavior** I'm trying to train a Hugging face transformer model (roBERTa base) with a custom training loop, and got the error below: ``` InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: InstantiateOptions.input_devices must have the same length as the number of arguments: input_devices length = 23 number of arguments = 24 [[{{node while/body/_1/StatefulPartitionedCall}}]] (1) Invalid argument: InstantiateOptions.input_devices must have the same length as the number of arguments: input_devices length = 23 number of arguments = 24 [[{{node while/body/_1/StatefulPartitionedCall}}]] [[while/body/_1/Adam/Cast_6/ReadVariableOp/_30]] 0 successful operations. 0 derived errors ignored. [Op:__inference_train_step_35635] Function call stack: train_step -> train_step ``` The thing is I can run the same model using `model.fit()` API, and this error only happens when I use a LSTM or GRU layer on top of the transformer **Describe the expected behavior** Training should go normal
05-23-2020 23:18:44
05-23-2020 23:18:44
This is not enough information to help us. Can you post a minimal reproducible example, or at least how you construct the model. Also show where the error is triggered. (And just my two cents, but adding RNNs on top of transformer-based models seems redundant... But I guess you can try it out!)<|||||>Hi @BramVanroy , thanks for the tips, much appreciated, I was just trying different things to see how it performs, this is the model architecture: ``` module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False) def model_fn(MAX_LEN): input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask') base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model") last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask}) x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(last_hidden_state) x = layers.Dropout(.1)(x) x_start = layers.TimeDistributed(layers.Dense(1))(x) x_start = layers.Flatten()(x_start) y_start = layers.Activation('softmax', name='y_start')(x_start) x_end = layers.TimeDistributed(layers.Dense(1))(x) x_end = layers.Flatten()(x_end) y_end = layers.Activation('softmax', name='y_end')(x_end) model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end]) return model ``` And I was using it for a QA problem, so I also did: ``` model.compile(optimizer, loss={'y_start': losses.CategoricalCrossentropy(), 'y_end': losses.CategoricalCrossentropy()}) ``` The tricky part is that is jsut happens inside a custom training loop. Here are some of the code I have used. ``` # Step functions @tf.function def train_step(data_iter): def train_step_fn(x, y): with tf.GradientTape() as tape: probabilities = model(x, training=True) loss_start = loss_fn_start(y['y_start'], probabilities[0]) loss_end = loss_fn_end(y['y_end'], probabilities[1]) loss = tf.math.add(loss_start, loss_end) grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) # update metrics train_acc_start.update_state(y['y_start'], probabilities) train_acc_end.update_state(y['y_end'], probabilities) train_loss.update_state(loss) train_loss_start.update_state(loss_start) train_loss_end.update_state(loss_end) for _ in tf.range(step_size): strategy.experimental_run_v2(train_step_fn, next(data_iter)) loss_fn_start = losses.categorical_crossentropy loss_fn_end = losses.categorical_crossentropy train_acc_start = metrics.CategoricalAccuracy() train_acc_end = metrics.CategoricalAccuracy() train_loss = metrics.Sum() train_loss_start = metrics.Sum() train_loss_end = metrics.Sum() ``` Let me know if you need any more information.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,539
closed
Add BART fine-tuning summarization community notebook
An example of how to train, evaluate, deploy a BART summarization model with fastai using the blurr library
05-23-2020 21:15:38
05-23-2020 21:15:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=h1) Report > Merging [#4539](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4539/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4539 +/- ## ========================================== - Coverage 77.87% 77.87% -0.01% ========================================== Files 123 123 Lines 20566 20566 ========================================== - Hits 16016 16015 -1 - Misses 4550 4551 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4539/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4539/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ΓΈ)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=footer). Last update [a34a989...e8c7951](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome thanks for the great notebook. Did a tiny change in the github link :-)
transformers
4,538
closed
[All models] Extend config.output_attentions with output_attentions function arguments
Attempts to close #3880 by refactoring `config.output_attentions` => `output_attentions` in `forward()`/`call()` functions **UPDATE** from Patrick @thomwolf @LysandreJik This PR adds the argument `output_attentions` to every forward function for more flexibility. This PR makes it possible to easily switch the output attentions on/off without having to instantiate a new model every time. The logic is the following. If `output_attentions` is configured in the `forward()` fn => use that. If not => use `config.output_attentions`. IMPORTANT: This PR does **not** break backward compatibility, since output_attentions can still be configured using the config. TESTS: An additional test is added to the `test_output_attentions()` common test. FUTURE PR: - [ ] Clean the documentation. We still need to add this argument to the docs of all models and make sure the docs are clean. Lemme know @Bharat123rox if you want to tackle this in a new PR or if I should do it :-) It's not the most interesting PR, so I fully understand if you don't feel like doing it anymore ;-)
05-23-2020 17:49:56
05-23-2020 17:49:56
Also removed the tests for `output_attentions` since all of them were fetching values from the config<|||||>@Bharat123rox - that's awesome, thanks a lot! It's a lot of manual work, but it will be super useful once it's merged :-) I added a bunch of comments - let me know if something is unclear. Regarding the tests, let's try to first make all torch tests pass and then check the TF tests. Regarding the torch tests: Most tests that fail are those where you removed `config.output_attentions` in the test, but didn't set `output_attentions=True` for the forward call. These tests previously were outputting the attentions but don't do this anymore. You should fix these tests if you set `output_attentions=True` in the forward pass. Looking forward to have this merged soon :-) Let me know if something is unclear!<|||||> Now, most of the tests are giving new `AssertionError`<|||||>OK! Let's maybe try first to fix all the test `test_attention_outputs` tests. You can run this test for a specific model using the following command: ``` pytest tests/test_modeling_openai.py::OpenAIGPTModelTest::test_attention_outputs ``` I fixed this test as an example for `openai` on this commit: https://github.com/huggingface/transformers/pull/4597/commits/e8efd72fce1be304043863cbab4cd7a61a39e434 It's gonna be a longer process to fix all tests. Let's try to start with the `test_attention_outputs` tests for all PyTorch models :-) Btw, can you do the following change to the branch `outputattentions` of your fork, so that I can commit directly to your branch? :-) https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork<|||||>https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork was not clear as I've already checked the "Allow edits by maintainers" option, however I have sent a collaboration invite to you @patrickvonplaten which I think should give enough permissions<|||||>@patrickvonplaten @thomwolf please help me in fixing the remaining PyTorch and TensorFlow test failures, they are of various kinds, mostly `AssertionErrors`<|||||>This is really great work! I will take a look at the failing tests :-) It would be great if you can rename some of the variables as mentioned above.<|||||>Ok great, I can take a look into the remaining tests now :-) <|||||>> Ok great, I can take a look into the remaining tests now :-) Yes, please do, thank you! there are only 3 Assertion failures in Torch and hopefully all failures in TF are also similar 🀞 <|||||>Hey @Bharat123rox, I think from the PyTorch side we are ready now :-) Regarding the TF side, my last commit shows how to implement it for TF Models - could you give it a try for the other models? You always have to pay attention to points 1) - 3) as mentioned above to make sure that the TF Models can be trained, complised and serialized with keras. Be careful to make a `git fetch` and `git pull` on your branch now before continuing to work since I added two commits. Let me know if you have any questions! 
:-) Really great work so far, I think we are almost finished :-) <|||||>@patrickvonplaten Most of the TF tests are fixed, and the remaining seem to be different `AssertionErrors`, please help with the remaining TF test failures<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=h1) Report > Merging [#4538](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.39%`. > The diff coverage is `91.38%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4538/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4538 +/- ## ========================================== + Coverage 74.52% 75.91% +1.39% ========================================== Files 128 128 Lines 21497 21515 +18 ========================================== + Hits 16021 16334 +313 + Misses 5476 5181 -295 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.65% <0.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.74% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <73.33%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.33% <75.00%> (-0.06%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `80.30% <76.19%> (-0.24%)` | :arrow_down: | | [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.00% <80.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `93.71% <86.66%> (+0.01%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.57% <88.88%> (-0.31%)` | :arrow_down: | | ... 
and [33 more](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=footer). Last update [c58e6c1...b541a08](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I got an email with a comment from @patrickvonplaten (which I can't find) expressing the opinion that >IMO use_cache, output_attentions and output_hidden_states all are not really parameters of the model or its decoding strategy, but switches for all (or some for use_cache)models that can be turned on and off, but essentially don't influence the model's output logits. In contrast the other config attributes ("max_length", "do_sample", ...) do influence the output of the model and therefore should stay in the config. Therefore I would be ok with removing use_cache, output_attentions and output_hidden_states completely from the config. And I completely agree with that conclusion! We should make sure to highlight it in release notes.<|||||>> I got an email with a comment from @patrickvonplaten (which I can't find) expressing the opinion that > > > IMO use_cache, output_attentions and output_hidden_states all are not really parameters of the model or its decoding strategy, but switches for all (or some for use_cache)models that can be turned on and off, but essentially don't influence the model's output logits. In contrast the other config attributes ("max_length", "do_sample", ...) do influence the output of the model and therefore should stay in the config. Therefore I would be ok with removing use_cache, output_attentions and output_hidden_states completely from the config. > > And I completely agree with that conclusion! We should make sure to highlight it in release notes. @sshleifer I just deleted this comment :D I rethought this a bit. I think the better solution is what we have now: Have `output_attentions` in both the config and as a forward argument. This way we can still use the keras serialize function, don't break backward compatibility and have the same logic as we do in the generate() method. <|||||>@Bharat123rox - thanks a million for your work here! It was a lot of manual work in a lot of files! This PR is VERY useful for the library!
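For reference, the fallback logic described in the PR body above ("use the forward argument if given, otherwise fall back to the config") boils down to a one-line pattern like the following sketch; the attribute names follow the description in this thread, though the merged code may phrase it slightly differently.

```python
from types import SimpleNamespace

config = SimpleNamespace(output_attentions=False)  # stand-in for a model config

def resolve_output_attentions(output_attentions=None):
    # Prefer the per-call argument; otherwise fall back to the config value.
    return output_attentions if output_attentions is not None else config.output_attentions

assert resolve_output_attentions() is False      # config default
assert resolve_output_attentions(True) is True   # per-call override
```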
transformers
4,537
closed
DOC: Make `import torch` explicit for "Quick tour TF 2.0" example
I tried to run the Quick Tour example with only the `tensorflow` and the `transformers` imports (as shown literally in the code snippet), and _obviously_ (in hindsight) this fails with:

```
pytorch_model = BertForSequenceClassification.from_pretrained('./models/', from_tf=True)
NameError: name 'BertForSequenceClassification' is not defined
```

The trivial fix was to add `import torch` to the snippet. When running all examples in sequence, this is not an issue, but I was running the `tensorflow 2` example in a separate project. Adding this line may avoid this confusion for the next newcomer :-)
05-23-2020 17:44:17
05-23-2020 17:44:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=h1) Report > Merging [#4537](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4537/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4537 +/- ## ========================================== - Coverage 77.87% 77.86% -0.01% ========================================== Files 123 123 Lines 20566 20566 ========================================== - Hits 16016 16014 -2 - Misses 4550 4552 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4537/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4537/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4537/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ΓΈ)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=footer). Last update [a34a989...fc189e7](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,536
closed
resize_token_embeddings not implemented for TFGPT2LMHeadModel
I am using a pretrained TFGPT2LMHeadModel and adding new special tokens to the GPT-2 tokenizer. However, the method resize_token_embeddings is not implemented for the TF GPT-2 models. Will it be added? Or are there any workarounds? Thank you!
05-23-2020 10:00:21
05-23-2020 10:00:21
Hi @virginianegri, Could you specify for which model the method `resize_token_embeddings` does not work? Can you add a code snippet that reproduces the error?<|||||>The model is the TFGPT2LMHeadModel. This is my code:

```
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
special_tokens_dict = {'cls_token': '<CLS>'}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)

gpt2 = TFGPT2LMHeadModel.from_pretrained('gpt2')
gpt2.resize_token_embeddings(len(tokenizer))
```

When running the resize_token_embeddings method it raises a NotImplementedError<|||||>Yes we should implement this soon!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
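Until `resize_token_embeddings` lands on the TF side, one possible workaround, sketched here under the assumption that PyTorch is installed alongside TensorFlow, is to resize the embeddings on the PyTorch GPT-2 model, save the checkpoint, and reload it into the TF class with `from_pt=True`:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"cls_token": "<CLS>"})

# Resize on the PyTorch side, where the method is implemented ...
pt_model = GPT2LMHeadModel.from_pretrained("gpt2")
pt_model.resize_token_embeddings(len(tokenizer))
pt_model.save_pretrained("./gpt2-resized")  # hypothetical output directory

# ... then load the resized checkpoint into the TF class.
tf_model = TFGPT2LMHeadModel.from_pretrained("./gpt2-resized", from_pt=True)
```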
transformers
4,535
closed
How to speed up inference step in BertQuestionAnswering?
I'm working on a QA system which uses a pre-trained BertQA model. At this point, even if I use a GPU, the step generating start_scores and end_scores for a set of 20 candidate passages still takes a few seconds, which is the bottleneck of my application. I just wonder whether there are any strategies/tricks to speed up this step? So far, it seems using multiple GPUs at the inference step does not help at all. Any advice greatly appreciated!
05-23-2020 08:01:40
05-23-2020 08:01:40
To improve inference speed you can use ONNX (also see here: https://github.com/huggingface/transformers/issues/260). In addition, you can opt for a distilled model rather than the full model. <|||||>Hi! I've used `distilbert-base-uncased-distilled-squad` or `distilbert-base-cased-distilled-squad`. It improved quite a bit! <|||||>@ZordoC how much speed improvement did you observe?
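Building on the two suggestions above, here is a minimal sketch of running one of the distilled SQuAD checkpoints through the question-answering pipeline on GPU; the question/context strings are placeholders, and you would loop (or pass lists) over your 20 candidate passages:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
    device=0,  # first GPU; use -1 for CPU
)

result = qa(
    question="Who wrote the paper?",                                   # placeholder
    context="The paper was written by Jacob Devlin and colleagues.",   # placeholder
)
print(result["answer"], result["score"])
```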
transformers
4,534
closed
DOC: Fix typos in modeling_auto
Fix typo of word `dictionnary` => `dictionary`
05-23-2020 06:01:17
05-23-2020 06:01:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=h1) Report > Merging [#4534](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e19b978151419fe0756ba852b145fccfc96dbeb4&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4534/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4534 +/- ## ========================================== - Coverage 77.86% 77.86% -0.01% ========================================== Files 123 123 Lines 20566 20566 ========================================== - Hits 16014 16013 -1 - Misses 4552 4553 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4534/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.57% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4534/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=footer). Last update [e19b978...847fb92](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,533
closed
Add nn.Module as superclass
Add nn.Module as superclass of `MMBTModel` to fix #4532
05-23-2020 04:36:27
05-23-2020 04:36:27
Are there upstream issues? I can't see how my one-liner is breaking these tests<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=h1) Report > Merging [#4533](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e19b978151419fe0756ba852b145fccfc96dbeb4&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4533/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4533 +/- ## ======================================= Coverage 77.86% 77.87% ======================================= Files 123 123 Lines 20566 20566 ======================================= + Hits 16014 16015 +1 + Misses 4552 4551 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/4533/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4533/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ΓΈ)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4533/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=footer). Last update [e19b978...42d544b](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Re-ran the tests and they pass, this was a transient (connectivity?) error. This PR looks reasonable to me. I'll just cc @suvrat96 for information
transformers
4,532
closed
MMBT doesn't inherit from nn.Module
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): MMBT Language I am using the model on (English, Chinese ...): not related The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Minimal reproduction: ```python from transformers import MMBTConfig, MMBTModel, AutoConfig, AutoModel electra_config = AutoConfig.from_pretrained("google/electra-small-discriminator") mmbt_config = MMBTConfig(electra_config) model = AutoModel.from_config(electra_config) mmbt = MMBTModel(mmbt_config, model, None) mmbt() ``` output: ``` Traceback (most recent call last): File "mmbt_debug.py", line 11, in <module> mmbt() TypeError: 'MMBTModel' object is not callable ``` You can see in the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_mmbt.py#L152) that it's currently only inheriting from `ModuleUtilsMixin`, but not `torch.nn.Module` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> We should be seeing a downstream error since I didn't pass in a real modal encoder or any input. It should at least call `forward()` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.9.1 (also tried 2.10.0) - Platform: Darwin-19.4.0-x86_64-i386-64bit - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no (doesn't matter) - Using distributed or parallel set-up in script?: no (doesn't matter)
05-23-2020 04:30:04
05-23-2020 04:30:04
transformers
4,531
closed
Fix add_special_tokens on fast tokenizers
Fix #4457 By using `flatten`, the following ``` dict_values(['[EOS]', '[BOS]']) ``` was being transformed into this: ``` ['[', 'E', 'O', 'S', ']', '[', 'B', 'O', 'S', ']'] ```
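For illustration, a toy reproduction of the failure mode (using `itertools.chain` as a stand-in for the `flatten` helper, which is an assumption on my part): strings are themselves iterables, so flattening a `dict_values` of tokens explodes them into single characters.

```python
from itertools import chain

special_tokens = {"eos_token": "[EOS]", "bos_token": "[BOS]"}

flattened = list(chain.from_iterable(special_tokens.values()))
# -> ['[', 'E', 'O', 'S', ']', '[', 'B', 'O', 'S', ']']  (tokens destroyed)

kept = list(special_tokens.values())
# -> ['[EOS]', '[BOS]']  (what add_special_tokens actually needs)
```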
05-23-2020 01:01:33
05-23-2020 01:01:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=h1) Report > Merging [#4531](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e19b978151419fe0756ba852b145fccfc96dbeb4&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4531/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4531 +/- ## ========================================== + Coverage 77.86% 77.88% +0.01% ========================================== Files 123 123 Lines 20566 20570 +4 ========================================== + Hits 16014 16020 +6 + Misses 4552 4550 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.55% <100.00%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=footer). Last update [e19b978...afcf0c9](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,530
closed
Tensorflow improvements
Hello, Here is a fairly big PR that proposes the following updates: - Loss computation is now attached to its respective class, as in PyTorch. - Remove the now-useless `mode` and `loss_name` parameters from the TF Trainer. - Add missing task models to the different Transformers. - Bugfix on T5 Keras serialization + tests. - Add tests for TF Flaubert and XLM-RoBERTa. - Bugfix in the TF Trainer for TensorFlow 2.2. Reviews are welcome :) /cc @julien-c @LysandreJik @thomwolf
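A hedged usage sketch of the behaviour described above (the exact signature may differ slightly by version): once labels are passed, the TF task models return the loss as the first element of the output tuple, mirroring the PyTorch API.

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode_plus("This PR is great!", return_tensors="tf")
outputs = model(inputs, labels=tf.constant([1]))  # labels trigger loss computation

loss = outputs[0]    # (loss), logits, (hidden_states), (attentions)
logits = outputs[1]
```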
05-22-2020 22:45:50
05-22-2020 22:45:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=h1) Report > Merging [#4530](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d976ef262e0b2c52363d201b2e14e5ecc42abbb3&el=desc) will **increase** coverage by `0.38%`. > The diff coverage is `41.45%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4530/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4530 +/- ## ========================================== + Coverage 75.63% 76.01% +0.38% ========================================== Files 128 128 Lines 20979 21417 +438 ========================================== + Hits 15867 16280 +413 - Misses 5112 5137 +25 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <ΓΈ> (ΓΈ)` | | | [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `51.16% <ΓΈ> (-4.16%)` | :arrow_down: | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.86% <17.94%> (+0.94%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `76.10% <27.47%> (-14.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `80.53% <27.50%> (-9.80%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `82.88% <32.00%> (-12.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.74% <34.21%> (-25.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `91.17% <38.70%> (-7.89%)` | :arrow_down: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.39% <45.45%> (-3.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <50.00%> (-1.60%)` | :arrow_down: | | ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=footer). Last update [d976ef2...5b456e2](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Some commits are missing... I think it is due to the high number of error rate from Github.<|||||>Thanks @LysandreJik for your constructive comments! For the second point, before to answer in order to be sure, you mean that it would be more convenient that the output of the `call(...)` methods in the TF tasks model returns the same tuple `(loss), logits, (hidden_states), (attentions)` than the `forward(...)` methods in PT tasks model? <|||||>Yes, that's what I mean. I think having this to be the same as the PyTorch API would make sense. It wouldn't be a breaking change either, as it would require the `labels` to be passed to the model. I think doing this could still leverage Mixins, by calling a `self._compute_loss` or `self.compute_loss` if we want to expose this method as well. I have no strong opinion on that last item.<|||||>Ok, indeed makes sense and I don't think it is a problem to do that way, I will work on this today to see if there is any issue that would not allow us to do that.<|||||>I agree with @LysandreJik's 2nd point – maybe we can even take advantage of this to implement named tuples for TF models output, like @thomwolf and @patrickvonplaten intend to do for PyTorch (as it's going to be a breaking change in TF models anyways, maybe we can do this at the same time?)<|||||>Since my last commit, now the TF models return the loss such as the PT ones if the labels are given. About the named tuples, looks to be a good idea indeed, but I think we should implement this in another PR in order to release this in same time than for PT. No?<|||||>> About the named tuples [...] we should implement this in another PR in order to release this in same time than for PT. No? Yes, makes sense!<|||||>Ok, looks good to me, I have tested the new models with different examples that use the trainer and they all work, tests looks to be ok as well except the quality one that I don't know how to fix :smile: <|||||>A more general question regarding training in TensorFlow (I'm not super familiar with TF 2.0 training, so I'm asking primarily to learn a bit :-) ): I remember that when TF 2.0 was not out, most people used Keras to train a model with `model.fit(x_train, y_train)` => is this still the case? or are people more and more switching to the TF 2.0 training style as shown here: https://www.tensorflow.org/tutorials/quickstart/advanced and which basically consists of using `optimizer.apply_gradients(zip(gradients, model.trainable_variables))`. This is also what we do in the TF trainer right? Was it possible and recommended to train transformer models with keras' `model.train()` before TF Trainer and is it still possible now?<|||||>This is a good question! Short answer: yes it is still possible but witthout any gradient accumulation, that's mostly why the trainer uses the advanced training of TensorFlow. I'm currently preparing a next PR that will integrate the new `Model.train_step` feature added in [TF 2.2](https://github.com/tensorflow/tensorflow/releases/tag/v2.2.0). 
Basically this update allows you to create your own train step, and then integrate the missing gradient accumulation but this new PR will be only for TF >= 2.2.<|||||>@patrickvonplaten It was possible and we definitely aim to keep compatibility with keras' `fit` method. We don't have many tutorials that cover it, though, having some would probably make it easier for new users coming from Keras to use our lib. @julien-c, we've had the offline approval from @thomwolf, feel free to merge when you want. Glad to welcome this in the library!<|||||>Just tweaked the training_args.logging_dir to keep the same default as pytorch (I like that it creates a new subfolder each time you relaunch a training) Great job @jplu, thank you πŸ’ͺ
transformers
4,529
closed
Minor correction in Roberta Model docs, Roberta doesn't use NSP
In the roberta model docs https://huggingface.co/transformers/model_doc/roberta.html >pooler_output (tf.Tensor of shape (batch_size, hidden_size)): Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during Bert pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. Roberta uses something else though >FULL-SENTENCES: Each input is packed with full sentences sampled contiguously from one or more documents, such that the total length is at most 512 tokens. Inputs may cross document boundaries. When we reach the end of one document, we begin sampling sentences from the next document and add an extra separator token between documents. We remove the NSP loss.
05-22-2020 22:41:54
05-22-2020 22:41:54
You're absolutely right. Feel free to submit a PR to rectify this!<|||||>> You're absolutely right. RoBERTa swaps NSP for sentence order prediction (SOP). I think SOP was introduced into Albert >To further improve the performance of ALBERT, we also introduce a self-supervised loss for sentence-order prediction (SOP). SOP primary focuses on inter-sentence coherence and is designed to address the ineffectiveness (Yang et al., 2019; Liu et al., 2019) of the next sentence prediction (NSP) loss proposed in the original BERT. https://arxiv.org/pdf/1909.11942.pdf So it looks like the Albert document needs to be changed as well https://huggingface.co/transformers/model_doc/albert.html#transformers.AlbertModel.forward Roberta uses something called FULL-SENTENCES >FULL-SENTENCES: Each input is packed with full sentences sampled contiguously from one or more documents, such that the total length is at most 512 tokens. Inputs may cross document boundaries. When we reach the end of one document, we begin sampling sentences from the next document and add an extra separator token between documents. We remove the NSP loss. https://arxiv.org/pdf/1907.11692.pdf It sound like NSP isn't replaced, the task/training objective is removed altogether. So would that mean that that the linear layer which processes the CLS token outputs is untrained? It sounds like it, but I am not 100% sure. This is the linear layer I am talking about >Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pre-training. - >Feel free to submit a PR to rectify this! Sure would love to. Would need to 100% figure out if the aforementioned roberta tanh ff layer is trained, or if it's just random initialization. Are the docs on Github? I tried looking around, and found these https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/roberta.rst https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/albert.rst But it doesn't seem to be the full documentation of the models. I also tried looking up "next sentence prediction" in the repo but only found comments for the model code, which I can also update in the PR. <|||||>I sent one of the authors an email asking about the layer, just wanted to be 100% sure before I make a PR. <|||||>> I think SOP was introduced into Albert Oof, sorry for the slip up. I've been working with different models these days so I sometimes mix 'em up. The docs are generated from docstrings in the code. So you seem to be looking for this: https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_tf_roberta.py#L194-L201 and for Albert: https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_tf_albert.py#L686-L692 You can check which weights are (not) loaded by setting the logger to level INFO. In such a case, you'll see a message of the layers whose layers were not loaded. 
```python from transformers import BertForNextSentencePrediction import logging if __name__ == '__main__': logging.basicConfig(level=logging.INFO) model = BertForNextSentencePrediction.from_pretrained('bert-base-cased') ``` As the last line in the log, you'll see: > Weights from pretrained model not used in BertForNextSentencePrediction: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']<|||||>Thanks, will use this info. For roberta it looks like all the weights are loaded; I didn't see a message about any weights not being loaded. I was expecting this since the architecture is the same, just the training is different. Just waiting to hear back if the tanh layer is untrained in roberta. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Ah, I messaged one of the authors but didn't hear anything back. But I'm pretty sure by now that there is no training of the pooler layer, so I'll start on an update. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Whoops, sort of fell off this. Will start looking into this soon. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,528
closed
Warn the user about max_len being on the path to be deprecated.
Makes it clear that the parameter `model_max_length` is preferred over `max_len` by writing a warning to the logger. https://github.com/huggingface/transformers/issues/4527
05-22-2020 21:13:30
05-22-2020 21:13:30
transformers
4,527
closed
Tokenizers bug: version 2.10 doesn't honor `max_len` when instantiating a pretrained model
# πŸ› Bug ## Information Hello! I've just upgraded from Transformers 2.8 to Transformers 2.10, and noticed that parameter `max_len` is not properly honored when instantiating a pretrained model. For example, in Transformer 2.8.0, I was able to limit the length of a tokenized sequence as follows: ```python import transformers >>> tok = transformers.RobertaTokenizer.from_pretrained('roberta-base', max_len=16) >>> tok.encode('This is a sentence', pad_to_max_length=True) [0, 152, 16, 10, 3645, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] >>>print(tok.max_len) 16 ``` However, on version 2.10, `max_len` is ignored when loading a pretrained tokenizer: ```python import transformers >>> tok = transformers.RobertaTokenizer.from_pretrained('roberta-base', max_len=16) >>> tok.encode('This is a sentence', pad_to_max_length=True) [0, 152, 16, 10, 3645, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...] # 512 tokens >>>print(tok.max_len) 512 ``` This bug can be temporary solved by using `model_max_length` instead of `max_len`, but it broke the all my scripts that relied on that attribute. It seems that this issue was introduced in a recent change in [`tokenization_utils.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py) (line 825): ```python # For backward compatibility we fallback to set model_max_length from max_len if provided model_max_length = model_max_length if model_max_length is not None else kwargs.pop("max_len", None) ``` This compatibility is not guaranteed if the pretrained model contains `model_max_length` among its parameters, but `max_len` is specified in `from_pretrained`. Model I am using (Bert, XLNet ...): As far as I can tell, this affects all pretrained models. Observed on BERT, RoBERTa, and DistilBERT. Language I am using the model on (English, Chinese ...): As far as I can tell, this affects all pretrained models. Observed on English. The problem arises when using: * [ ] the official example scripts * [x] my own modified scripts: See above. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: * [x] my own task or dataset: It's a classification task. ## To reproduce See above. ## Expected behavior See above. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.10.0 - Platform: Linux-4.15.0-1060-aws-x86_64-with-debian-buster-sid - Python version: 3.6.5 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes; 4 x Tesla V100 - Using distributed or parallel set-up in script?: parallel, but not relevant
05-22-2020 20:27:39
05-22-2020 20:27:39
Additional info: I ran git-blame, and determined that the change was introduced in PR [#3706](https://github.com/huggingface/transformers/pull/3706). <|||||>Hi @soldni, thanks for reporting the issue. The behavior you mention can now be achieved through: ```python >>> tok.encode('This is a sentence', pad_to_max_length=True, max_length=16) [0, 152, 16, 10, 3645, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` Also, please note `RobertaTokenizer` now has it "fast" counterpart, `RobertaTokenizerFast` which is implemented in Rust and can greatly improve the performances of the tokenizer. API stays the same between both implementations. If I'm not mistaken, the name was changed because it was misleading in the context of generation (i.e. `generate(...)`). Morgan<|||||>Hi @mfuntowicz! Thank you for the quick response! The issue remains that the tokenizer fails to initialize properly without raising an error. I guess I don't understand why `max_len` is still supported in some situations, but not others. I would have been fine with an error being raised, but hunting for this issue took quite a bit of time. -Luca <|||||>You're right about the conflict if both are provided. I've opened a PR to at least write a warning the `max_len` parameter is being deprecated and `model_max_length` is now preferred.<|||||>Awesome! In the meantime, I've updated my code as you recommended. Thanks again for the super quick response on this. -Luca <|||||>@soldni I've fixed the issue when both are provided in the same PR, it will be included in the next patch release. Thanks for reporting! I'm closing, feel free to reopen if needed πŸ‘ Morgan
transformers
4,526
closed
link to paper was broken
changed from https://https://arxiv.org/abs/2001.04451.pdf to https://arxiv.org/abs/2001.04451.pdf
05-22-2020 19:09:10
05-22-2020 19:09:10
Thanks!
transformers
4,525
closed
Error in Longformer attention mask using apex mixed precision
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Longformer Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Install latest transformers (2.10.0) and apex (0.1) 2. Code: ``` import torch from transformers import LongformerTokenizer, LongformerModel, LongformerConfig from apex import amp tokenizer = LongformerTokenizer.from_pretrained('longformer-base-4096') config = LongformerConfig.from_pretrained('longformer-base-4096') model = LongformerModel.from_pretrained('longformer-base-4096').cuda() optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) model, optimizer = amp.initialize(model, optimizer, opt_level='O1') # toy input example inputs = torch.randint(config.vocab_size, (1, 1024)).cuda() # randomly select tokens with sequence length 1024 mask = torch.ones(1, 1024).cuda() # set mask for every token to local attention mask[0] = 2. # global attention for first token outputs = model(inputs, attention_mask=mask) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Error message: ``` File "/home/miniconda3/envs/test_transformer/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 374, in forward attn[extra_attention_mask_nonzeros[::-1]] = nonzero_selected_attn.view( RuntimeError: expected dtype Half but got dtype Float ``` `attn` is half precision but is assigned a tensor that is casted into single precision `.type_as(hidden_states)` in line 376 . ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.10.0 - Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
05-22-2020 18:59:25
05-22-2020 18:59:25
Good catch! I guess that soon you'd want to move over to PyTorch's built-in AMP, which takes care of this automatically (I _think_), but for the time being your suggestion is a good fix. You can submit a PR if you want!<|||||>I think this is solved by PR #4574, no?
transformers
4,524
closed
Codecov migration to marketplace app
Hi, Tom from Codecov here. We noticed that you are using Codecov with high frequency, and we’re so excited to see that! However, because you are not using our GitHub marketplace app, you may have experienced issues with uploading reports or viewing coverage information. This is due to rate-limiting from GitHub. **In order to prevent any future outages, we ask that you move over to our GitHub marketplace app: https://github.com/marketplace/codecov.** Let me know if you have any questions, or if I can help at all with this process.
05-22-2020 18:16:23
05-22-2020 18:16:23
cc @LysandreJik @thomwolf <|||||>Hi @thomasrockhu, indeed we've faced such issues before. We'll take a look, thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,523
closed
Can't reproduce export to onnx with custom bert model
# πŸ› Bug I try to run onnx export on a custom bert model, but during inference I get the following error. I share a google colab with the minimum changes to reproduce. All changes are marked with a `# CHANGE` comment. https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH?usp=sharing ``` InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'Gather_32' Status Message: indices element out of data bounds, idx=1 must be within the inclusive range [-1,0] ``` ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): None, I'm using a custom bert model, and for this bug report I'm using a random bert model. The problem arises when using: The official example notebook: https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb ## To reproduce Steps to reproduce the behavior: Run the convert to onxx script with a custom bert model. I've made a copy of the official notebook with the minimum changes required to illustrate the problem here: https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH?usp=sharing ```python --------------------------------------------------------------------------- InvalidArgument Traceback (most recent call last) <ipython-input-12-1d032f1e9ad0> in <module>() 9 10 # Run the model (None = get all the outputs) ---> 11 sequence, pooled = cpu_model.run(None, inputs_onnx) 12 13 # Print information about outputs /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/session.py in run(self, output_names, input_feed, run_options) 109 output_names = [output.name for output in self._outputs_meta] 110 try: --> 111 return self._sess.run(output_names, input_feed, run_options) 112 except C.EPFail as err: 113 if self._enable_fallback: InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'Gather_32' Status Message: indices element out of data bounds, idx=1 must be within the inclusive range [-1,0] ``` ## Expected behavior Get pooled and sequence output of bert model. ## Environment info - `transformers` version: 2.10.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0+cu101 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
05-22-2020 15:17:40
05-22-2020 15:17:40
Pinging @mfuntowicz, chief onnx officer<|||||>Hi @RensDimmendaal, Thanks for reporting this πŸ‘. Can you share the shape of the input you're feeding to the ONNX model? <|||||>Thanks for investigating! ```python for k,v in inputs_onnx.items(): print(f"{k}: shape: {v.shape}") ``` ``` >>> input_ids: shape: (1, 10) token_type_ids: shape: (1, 10) attention_mask: shape: (1, 10) ``` Source: https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH#scrollTo=64GodG5fKb0m&line=2&uniqifier=1<|||||>Interesting I'm not able to reproduce on my side (_see at the end_). Can you try restarting the Colab kernel? _(just to make sure the path are correctly updated)_. Let us know if it change something, if not I'll dig further on a custom colab. ```python >>> import onnxruntime as ort >>> from transformers import BertTokenizerFast >>> session = ort.InferenceSession("onnx/bert-base-cased.onnx") >>> tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased") >>> onnx_in = tokenizer.encode_plus("S E Q W E N C E", return_tensors="pt") >>> inputs_onnx = {k: v.cpu().detach().numpy() for k, v in onnx_in.items()} >>> sequence, pooled = session.run(None, inputs_onnx) >>> sequence.shape (1, 10, 768) >>> pooled.shape (1, 768) ```<|||||>I've done a restart and run all and the problem persists. i ran your code too, and it gives the following error: ``` --------------------------------------------------------------------------- Fail Traceback (most recent call last) <ipython-input-13-6776c93b3fb0> in <module>() 18 19 inputs_onnx = {k: v.cpu().detach().numpy() for k, v in onnx_in.items()} ---> 20 sequence, pooled = session.run(None, inputs_onnx) 21 22 print(sequence.shape, pooled.shape) /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/session.py in run(self, output_names, input_feed, run_options) 109 output_names = [output.name for output in self._outputs_meta] 110 try: --> 111 return self._sess.run(output_names, input_feed, run_options) 112 except C.EPFail as err: 113 if self._enable_fallback: Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Attention node. Name:'Attention_1' Status Message: CUBLAS error executing cublasGemmHelper( cublas, CUBLAS_OP_N, CUBLAS_OP_N, n, m, 1, &one, reinterpret_cast<const CudaT*>(bias->template Data<T>()), n, GetConstOnes<CudaT>(m), 1, &zero, reinterpret_cast<CudaT*>(gemm_buffer.get()), n, device_prop) ``` source: https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH#scrollTo=dXeNg37RTxl_&line=9&uniqifier=1<|||||>Ok , thanks for checking @RensDimmendaal . I'll do some experiments on a fresh notebook and post update here πŸ‘ <|||||>@mfuntowicz, I run the [notebook](https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH#scrollTo=64GodG5fKb0m&line=2&uniqifier=1) in my local machine, and look at the onnx model after export (and before optimization). I found that the exported onnx model has switched the position of "attention_mask" with "token_type_ids": ![image](https://user-images.githubusercontent.com/30328909/82973767-20ef9a00-9f8d-11ea-93d1-0876a65ef824.png) The above is a snapshot of embedding layer in exported graph. The "attention_mask" in the graph shall be named as "token_type_ids" since it is used to look up segment embeddings.<|||||>Hi, how can I use it for my trained text classifier?<|||||>The onnx export script has assumption of order of inputs. 
If the class you used does not have same order (or there are other parameters in between), you can wrap a class to use the expected order for export like: ``` class MyBertModel(BertForMaskedLM): def __init__(self, config): super().__init__(config) def forward(self, input_ids, token_type_ids, attention_mask): return super().forward(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids) model = MyBertModel(config) model.save_pretrained("./my_bert") ``` In this way, the exported model will have correct inputs.<|||||>Thanks for checking tianleiwu! (quick question in between: how do you make that plot of the onnx exported model?) It does not solve the issue for me though. The error message remains the same. I've added your code here: https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH#scrollTo=gXQ_JorGpAdI&line=1&uniqifier=1 However, changing this in my inputs during inference did the trick: ```python # CHANGE: SHUFFLE INPUTS inputs_onnx = { 'input_ids': inputs_onnx['input_ids'], 'attention_mask': inputs_onnx['token_type_ids'], 'token_type_ids': inputs_onnx['attention_mask'], } # Run the model (None = get all the outputs) sequence, pooled = cpu_model.run(None, inputs_onnx) ``` ``` >>> Sequence output: (1, 10, 768), Pooled output: (1, 768) ``` To me this seems like a bug that could be solved by having `transformers.convert_graph_to_onnx.ensure_valid_input` also return the reordered input_names. Something like this: ```python def ensure_valid_input(model, tokens, input_names): """ Ensure input are presented in the correct order, without any None Args: model: The model used to forward the input data tokens: BatchEncoding holding the input data input_names: The name of the inputs Returns: Tuple """ model_args_name = model.forward.__code__.co_varnames model_args_pos = [(model_args_name.index(name) - 1, name) for name in input_names] model_args = [None] * (max(map(lambda x: x[0], model_args_pos)) + 1) ordered_input_names = [None] * len(model_args) # new for arg_pos, arg_name in model_args_pos: model_args[arg_pos] = tokens[arg_name] ordered_input_names[arg_pos] = arg_name # new model_args = tuple(takewhile(lambda arg: arg is not None, model_args)) # Need to be ordered return ordered_input_names, model_args # new ``` However, based on the test for this function it seems that it is also used for GPT2, and I don't know if this change will break anythinig for that model (test_onnx.py line 111 and 112). Happy to submit a PR if this seeems like the way to go. <|||||>@tianleiwu @RensDimmendaal I'll have a look on this asap. The export should not permute inputs.<|||||>@RensDimmendaal I think your suggestion is the way to go, do you mind submitting a PR and assigning me as a reviewer ? πŸ‘ <|||||>Thanks @RensDimmendaal for submitting the PR, I'm closing this for now πŸ‘. Don't hesitate to reopen / create a new issue if you ran into any problem!
transformers
4,522
closed
Added huseinzol05/t5-small-bahasa-cased README.md
05-22-2020 13:53:34
05-22-2020 13:53:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=h1) Report > Merging [#4522](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bd6e3018322766b3a71ae6675552607923c02636&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4522/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4522 +/- ## ======================================= Coverage 77.85% 77.85% ======================================= Files 123 123 Lines 20551 20551 ======================================= + Hits 15999 16001 +2 + Misses 4552 4550 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=footer). Last update [bd6e301...d1e772c](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,521
closed
Using DistilBERT to train BERT (run_language_modeling.py) for some language from scratch
# ❓ Questions & Help Is it a right thing to use 'distilbert-base-cased' for training that with more data for some language ( which exists in the list of multilingual languages , and doesn't have a separate model still) Will we reach good results as good as doing that with 6 layer BERT ? Is there any differences between 6 layer BERT and distillBert (to use as the model for run_languge_modeling.py)
05-22-2020 13:11:20
05-22-2020 13:11:20
I guess you can, since the architecture of DistilBERT is more or less the same as BERT (just half the layers), but I would not expect great performance. The power of distillation lies (besides the training objective) in having a teacher model (e.g. a full BERT model) and initializing the distilled student network with weights from the teacher. If you don't do that, it might not be easy to get a good initialisation. In addition, the triple training loss would not make sense then, since you have no teacher predictions to compare your distilled model with. I can see that the language_modeling script allows you to use DistilBERT, but I would assume that is intended for fine-tuning rather than pre-training. cc @VictorSanh <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,520
closed
How to use your own custom optimizer (GLUE example)
# ❓ Questions & Help I am referring to the example on GLUE text-classification/run_glue.py. I would like to change the default optimizer (which I believe it to be ADAMW) to my own one. Should be in the imported Trainer or TrainingArguments, but I have not found an example doing so. Is it possible with any optimizer? (as long as they are written as a Torch Optimizer of course) Thanks a lot!
05-22-2020 10:28:13
05-22-2020 10:28:13
Hi, the Trainer takes an optional `optimizers` arg which is a two-tuple of (optimizer, scheduler): https://github.com/huggingface/transformers/blob/95a26fcf2d8d7072e4e63129cea8605f756bba1d/src/transformers/trainer.py#L152-L181<|||||>I created the optimizers using the same way. But the model is not getting trained, because the training loss is not decreasing with time. import transformers grouped_params = model.parameters() optimizer=transformers.AdamW(grouped_params, lr=0.00025) scheduler=transformers.get_cosine_schedule_with_warmup(optimizer=optimizer, num_warmup_steps=2000, num_training_steps=60000) optimizers = optimizer, scheduler training_args = TrainingArguments( output_dir="./test_checkpoint", overwrite_output_dir=True, num_train_epochs=15, per_device_train_batch_size=8, save_steps=1000, save_total_limit=3, logging_steps=50, dataloader_drop_last=True, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, optimizers=optimizers )<|||||>Same issue here. Any solution?<|||||>I have the same issue as well. <|||||>> I have the same issue as well. Hey Chris, I'd like to know how you actually found out that you cannot pass the custom optimizer to Trainer? In my case, I create custom optim and lr scheduler by: ```python training_steps = training_step_calc( # self-defined func encoded_dataset['train'], PER_DEVICE_TRAIN_BATCH_SIZE, gpu_count, NUM_TRAIN_EPOCHS ) warmup_steps = (training_steps * WARMUP_RATIO) optimizer = bnb.optim.AdamW8bit( model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY, ) scheduler = transformers.get_cosine_schedule_with_warmup( optimizer, num_warmup_steps=warmup_steps, num_training_steps=training_steps, ``` Then I didn't specify the optim-related args in `TrainingArguments()`: ```python training_args = TrainingArguments( output_dir=SAVE_PATH, # basic hp num_train_epochs=NUM_TRAIN_EPOCHS, # auto_find_batch_size=True, per_device_train_batch_size=PER_DEVICE_TRAIN_BATCH_SIZE, per_device_eval_batch_size=PER_DEVICE_EVAL_BATCH_SIZE, gradient_checkpointing=GRADIENT_CHECKPOINTING, # optim-related, comment if use a custom optimiser # optim="adamw_hf" if OPTIM is None else OPTIM, # learning_rate=LEARNING_RATE, # weight_decay=WEIGHT_DECAY, # lr_scheduler_type=LR_SCHEDULER_TYPE, # warmup_ratio=WARMUP_RATIO, # data related data_seed=DATA_SEED, dataloader_num_workers=DATALOADER_NUM_WORKERS, ``` After passing all the parameters to `Trainer()`, it ended up with this: ```python trainer = Trainer( model = model, tokenizer = tokenizer, args = training_args, train_dataset = encoded_dataset["train"], eval_dataset = encoded_dataset["test"], data_collator = data_collator, optimizers=(optimizer, scheduler), compute_metrics = compute_metrics, ) ``` When I check the `trainer.args`, the optim in the args seems to be the default, and so it's shown on wandb run page. But the `trainer.optimizer` is shown as: ```python AdamW8bit ( Parameter Group 0 betas: (0.9, 0.999) eps: 1e-08 initial_lr: 7e-06 lr: 0.0 weight_decay: 0.01 ) ``` In fact, by manipulating the optimizer settings of the Trainer, even though the default adamw_hf optimizer is still displayed in the args in wandb and trainer, the optimizer is overridden by the custom optimizer and scheduler at training time.
transformers
4,519
closed
Specify device in DataCollator
By setting a `device` parameter on the `DataCollator` we're able to allocate tensors directly on the right device at creation time and avoid moving data around afterwards. This effectively avoids snippets like this in the Trainer: ```python for k, v in some_dict.items(): some_dict[k] = v.to(device) ```
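A toy sketch of the idea (the `device` parameter and class name here are illustrative, not the actual `DataCollator` API): the collator builds each batch directly on the target device.

```python
from dataclasses import dataclass
from typing import Dict, List

import torch

@dataclass
class DeviceAwareCollator:
    """Toy collator that creates batch tensors directly on `device`."""
    device: torch.device

    def collate_batch(self, examples: List[List[int]]) -> Dict[str, torch.Tensor]:
        input_ids = torch.tensor(examples, dtype=torch.long, device=self.device)
        return {"input_ids": input_ids, "labels": input_ids.clone()}

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
collator = DeviceAwareCollator(device=device)
batch = collator.collate_batch([[101, 2023, 102], [101, 2003, 102]])
```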
05-22-2020 10:21:11
05-22-2020 10:21:11
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=h1) Report > Merging [#4519](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `81.81%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4519/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4519 +/- ## ========================================== - Coverage 77.83% 77.82% -0.02% ========================================== Files 123 123 Lines 20514 20513 -1 ========================================== - Hits 15968 15964 -4 - Misses 4546 4549 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/4519/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.70% <80.00%> (+0.47%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4519/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.37% <100.00%> (-0.11%)` | :arrow_down: | | [src/transformers/hf\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4519/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=footer). Last update [a086527...7f579cb](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@mfuntowicz Perhaps it's useful to pull this through to the encode methods of the tokenizers so that you can pass a device and if return_tensors is used, the tensors are automatically pushed to the correct device? <|||||>I tend to agree we need to have a device argument on the tokenizer `encode_like` methods, that would remove the need to iterate over the items to relocate on GPU/TPU. Is there a common scheme we can use to do this on both Pytorch & TensorFlow (_I'm not very familiar with TensorFlow_)? I suspect strings might be the easiest way to handle this: - PyTorch: `device="cpu"` || `device="cuda:0"` - TensorFlow: `device="/device:cpu"` || `device="/device:gpu:0"` I need to test things out futher here to see what the API may looks like! πŸ‘ Let's see what the other think of the propal for the Trainer and we can follow up on a dedicated PR asap.<|||||>> I tend to agree we need to have a device argument on the tokenizer `encode_like` methods, that would remove the need to iterate over the items to relocate on GPU/TPU. > > Is there a common scheme we can use to do this on both Pytorch & TensorFlow (_I'm not very familiar with TensorFlow_)? 
> > I suspect strings might be the easiest way to handle this: > > * PyTorch: `device="cpu"` || `device="cuda:0"` > * TensorFlow: `device="/device:cpu"` || `device="/device:gpu:0"` > > I need to test things out futher here to see what the API may looks like! πŸ‘ > Let's see what the other think of the propal for the Trainer and we can follow up on a dedicated PR asap. Sorry, I also don't have much experience with TF so I can't really chip in on that. I guess the encode methods can accept a `device=` property that can be of type `Union[str, int, tf.device, torch.device]`. The device constructor (pt or tf) can depend on the already available `return_tensors` part which can be None, pt, or tf. - `str`: use it to initialize the `torch.device(str)` or `tf.device(str))` - allows the users with a lot of freedom in case they want do something like `tf.device('/job:bar/task:0/device:gpu:2')` (example from [docs](https://www.tensorflow.org/api_docs/python/tf/device)) - `int`: assume this is a GPU device id: `torch.device(f"cuda:{int}")` or `torch.device(f"/device:gpu:{int}")` - a `device`: use the device<|||||>@mfuntowicz is the performance improvement of this change significant? How can we measure it?<|||||>Let me run some epochs to report quantitative numbers<|||||>Perhaps useful discussions here: https://stackoverflow.com/questions/28597014/python-why-is-accessing-instance-attribute-is-slower-than-local In 100 calls to the variable (self or local) the answer reports 3 seconds difference. Not sure if that is worth it. I'm all for speed improvements but I'm also really fond of readability. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>cc @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Should this still be implemented, or is this PR superseded by another one? @mfuntowicz @julien-c <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Bump @julien-c @mfuntowicz <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Should I keep bumping this? @sgugger <|||||>I don't think it's useful: the Trainer handles it already and for people that want to run their custom training loop, Accelerate can handle that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
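Coming back to the device-argument discussion above, a sketch of how the PyTorch side of such a `device=` parameter could be normalized (illustrative only: this helper does not exist in the library, and the TensorFlow branch is omitted):

```python
from typing import Union

import torch

def resolve_device(device: Union[str, int, torch.device]) -> torch.device:
    # int -> treat as a CUDA device index; str / torch.device -> pass through
    if isinstance(device, int):
        return torch.device(f"cuda:{device}")
    return torch.device(device)

resolve_device(0)       # device(type='cuda', index=0)
resolve_device("cpu")   # device(type='cpu')
```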
transformers
4,518
closed
[marian] possible memory leak problem while translating & extracting internal representations
# πŸ› Bug ## Information I am extracting the internal representations of some of the Marian models. There seems to be a memory leak problem. In this issue, you will find code for running the model sentence by sentence (bsz = 1) just to keep it simple. When I use batching, the problem persists and arises earlier. Model I am using: MarianMT ` modelnames=[f'Helsinki-NLP/opus-mt-en-{tgt}' for tgt in ['de', 'fr', 'ee', 'sv', 'el', 'fi', 'cs', 'ru' ]]` Language I am using the model on: en-{tgt] The problem arises when using: * [ ] a mix of official example scripts and my own: on this code, I keep the lines used to see if it is a memory problem. Hence the `empty_cache()`, keeping track of the memory usage with `memory_stats()`, and passing things to 'cpu' (but this has not solved the problem for me) ``` import torch import transformers config_overrider={'output_attentions':True, 'output_hidden_states':True} model = transformers.MarianMTModel.from_pretrained(modelname, **config_overrider) tokenizer = transformers.MarianTokenizer.from_pretrained(modelname) model.eval() encoded_sentences = [] memdict=[] for sent in tqdm(sentences): tokdsent = self.tokenizer.prepare_translation_batch(src_texts=[' '.join(sent)]) tokdsent = {k:v.to(self.device) for k,v in tokdsent.items()} model_outputs = self.model.forward(**tokdsent) encoded_sentences.append( [x.to('cpu') for x in model_outputs[4]+model_outputs[1]] ) torch.cuda.empty_cache() memdict.append(torch.cuda.memory_stats(self.device)) print(memdict[-1]['active.all.current'],memdict[-1]['active.all.peak']) # comment out this part ``` The tasks I am working on is: * [ ] Using a dataset from an official task: semantic textual similarity - STS 2012, 2013, 2014, 2015 and 2016 (can use [this one](https://github.com/Helsinki-NLP/Geometry/blob/refactorize/data/STS/allSTS.txt)) ## To reproduce Steps to reproduce the behavior: 1. Load and tokenize the sentences (need this for what I am doing, even when I detok when passing it to the tokenizer) ``` STS_path = "path/to/allSTS.txt" with open(STS_path, 'r') as f: samples = f.readlines() sentences = [] for sent in samples: sent = sent.strip() sent = re.findall(r'[\w]+|\.|,|\?|\!|;|:|\'|\(|\)|/',sent) sentences.append(sent) ``` 2. Run the code above. The one on _"The problem arises when using:"_ 3. For me, around line 3750 I get OOM: ``` RuntimeError('CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 31.75 GiB total capacity; 30.67 GiB already allocated; 17.69 MiB free; 30.67 GiB reserved in total by PyTorch)') ``` Here I copy some of the linesprinted from `active.all.current ` and `active.all.peak` (it never changes the upwards trend): ``` 533 535 779 811 1025 1057 1271 1303 1517 1549 1763 1795 2009 2041 2255 2287 2501 2533 2747 2779 2993 3025 ... 9635 9667 9881 9913 10127 10159 10373 10405 10619 10651 ... 921311 921343 921557 921589 921803 921835 922049 922081 922295 922327 922541 922573 ``` ^-- these are the first 10 lines, somewhere around 40 sentences, and the last lines before running out of mem - close to 3750 sents. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I would expect the memory on the cuda device to be freed after every iteration since I am overwriting the variable there and what I append to the list I want to keep is sent to cpu. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Linux 3.10.0-1062.7.1.el7.x86_64 x86_64, Red Hat Enterprise Linux Server 7.7 (Maipo) - Python version: 3.7.3 - PyTorch version (GPU?): 1.5.0 for cuda 10.2 (Nvidia Volta V100 GPU with 32 GB of memory) - Tensorflow version (GPU?): not using tf - Using GPU in script?: yes (but I have seen the same problem on CPU) - Using distributed or parallel set-up in script?: no
05-22-2020 10:08:06
05-22-2020 10:08:06
ooook.... this is embarrassing. I just realized that I had to detach the variable, so GPU memory could be freed. This does the trick: ``` encoded_sentences.append( [x.detach().to('cpu') for x in model_outputs[4]+model_outputs[1]] ) ``` sorry for the trouble ;) and thanks for the repo and all your hard work
transformers
4,517
closed
How to train a custom seq2seq model with BertModel
How can I train a custom seq2seq model with `BertModel`? I would like to use a Chinese pretrained model based on `BertModel`, so I've tried the `Encoder-Decoder Model`, but it seems the `Encoder-Decoder Model` is not meant for conditional text generation. BartModel seems to be the model I need, but I cannot load pretrained BertModel weights into BartModel. By the way, could I fine-tune a BartModel for seq2seq with custom data? Any suggestions? Thanks.
05-22-2020 10:02:43
05-22-2020 10:02:43
Hi @chenjunweii - thanks for your issue! I will take a deeper look at the EncoderDecoder framework at the end of this week and should add a google colab on how to fine-tune it.<|||||>Using Bert - Bert model for seq2seq task should work using simpletransformers library, there is an working code. But there is one strange thing that the saved models loads wrong weight's. Predicting the same string multiple times works correctly, loading the model each time again it's generating a new result every time @patrickvonplaten <|||||>Hi @flozi00, could you add a code snippet here that reproduces this bug?<|||||>Of course, it should be reproduceable using this code: ```python import logging import pandas as pd from simpletransformers.seq2seq import Seq2SeqModel logging.basicConfig(level=logging.INFO) transformers_logger = logging.getLogger("transformers") transformers_logger.setLevel(logging.WARNING) train_data = [ ["one", "1"], ["two", "2"], ] train_df = pd.DataFrame(train_data, columns=["input_text", "target_text"]) eval_data = [ ["three", "3"], ["four", "4"], ] eval_df = pd.DataFrame(eval_data, columns=["input_text", "target_text"]) model_args = { "reprocess_input_data": True, "overwrite_output_dir": True, "max_seq_length": 10, "train_batch_size": 2, "num_train_epochs": 10, "save_eval_checkpoints": False, "save_model_every_epoch": False, "evaluate_generated_text": True, "evaluate_during_training_verbose": True, "use_multiprocessing": False, "max_length": 15, "manual_seed": 4, } encoder_type = "roberta" model = Seq2SeqModel( encoder_type, "roberta-base", "bert-base-cased", args=model_args, use_cuda=True, ) model.train_model(train_df) results = model.eval_model(eval_df) print(model.predict(["five"])) model1 = Seq2SeqModel( encoder_type, encoder_decoder_name="outputs", args=model_args, use_cuda=True, ) print(model1.predict(["five"]) ``` It the sample code in documentation of simpletransformers library. The dataset size doesn't matter. https://github.com/ThilinaRajapakse/simpletransformers/blob/master/README.md#encoder-decoder<|||||>Hey @flozi00, I think #4680 fixes the error. @chenjunweii - a Bert2Bert model using the `EncoderDecoder` framework should be the right approach here! You can use one `Bert` model as an encoder and the other `Bert` model as a decoder. You will have to fine-tune the `EncoderDecoder` model a bit, but it should work fine! You can load the model via: ```python from transformers import EncoderDecoder model = EncoderDecoder.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert ``` and train it on conditional language text generation providing the `input_ids` as context, the `decoder_input_ids` as the text to generate and `lm_labels` as your shifted text to generate. Think of it as `decoder_input_ids` and `lm_labels` being your normal inputs for causal text generation inputs and `input_ids` as your context to condition the model on. I will soon provide a notebook that makes this clearer.<|||||>Thank you for working on this problem and thank you for πŸ€— ! It looks like it is finally possible to write seq2seq models in under 10 lines of code, yay! But I still have some questions and concerns about the `EncoderDecoder`. 1. It is not clear now, how masking now works in the decoder implementation. I spent quite some time to get into it. Documentation says that "Causal mask will also be used by default", but I did not find how to change it. E.g. 
what if I am training a model without teacher forcing (just generating words one by one during training) or if I am doing inference? I would suggest adding one more argument to the forward pass that would make it clearer both when causal masking is used and how to enable/disable it. What do you think? 2. It is not clear what the default decoder class is. It just feels weird to use BERT as a decoder. BERT is a model that is a) non-autoregressive and b) pre-trained without cross-attention modules. It is also unclear at which point the cross-attention modules are created. It would be great, if possible, to add something like a `TransformerDecoder` model. <|||||>Hey @Guitaricet :-), First, at the moment only Bert2Bert works with the encoder-decoder framework. Also, if you use Bert as a decoder you will always use a causal mask. At the moment I cannot think of an encoder-decoder in which the decoder does not use a causal mask, so I don't see a reason why one would want to disable it. Can you give me an example where the decoder should not have a causal mask? Do you mean auto-regressive language generation by "generating words one by one"? Auto-regressive language modeling always requires a causal mask... 2. Currently, only Bert works as a decoder. We might add GPT2 in a couple of weeks. Note that no model has `cross-attention` layers if it is not already an encoder-decoder model (like Bart or T5), and in this case it does not make sense to use the encoder-decoder wrapper. The model is initialized with random weights for the cross-attention layers, which will have to be fine-tuned. I agree that this should be made clearer in the documentation! <|||||>I'm trying to build a Bert2Bert model using EncoderDecoder, but I have a couple of quick questions regarding the format of inputs and targets for the BERT decoder. What exactly is a good way to format the conditional mask to the decoder? For example, if I want to feed the decoder [I, am] and make it output [I, am, happy], how exactly do I mask the input? Do I give the decoder [CLS, I, am, MASK, ...., MASK, SEP] where the number of MASKs is such that the total number of tokens is a fixed length (like 512)? Or do I just input [CLS, I, am, MASK, SEP, PAD, ..., PAD]? Similarly, what should the decoder's output be? Should the first token (the "output" of CLS) be the token "I"? Lastly, is there a website or resource that explains the input and output representations of text given to the decoder in Bert2Bert? I don't think the authors of the paper have released their code yet. Thanks! <|||||>I will soon release a bert2bert notebook that will show how to do this. You can also take a look at this: #4647 Maybe it helps.<|||||>Thank you @patrickvonplaten for the clarification. 1. I see why not using a causal mask seems weird and I agree with you. I can think of two reasons not to use a causal mask for generation: 1) inference: you don't have any future to look into, thus the mask is not strictly needed (you won't be able to cache the decoder states though) 2) you can train a model without [teacher forcing](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/), i.e. during training forwarding your decoder tgt_len times using only the words that have been predicted by the model instead of feeding the ground truth. It is very possible that both of these cases are rare, so the library may not need a `causal_masking` argument, but at least some clarification may be needed.
This is the reason why I found this issue in the first place. 2. Yes, improving the documentation would help a lot! Still, I would argue that a designated `Decoder` class would be a much clearer way if you want to train it from scratch. I also noticed that the `config.is_decoder` option is only documented in `BertModel` and not in the `BertConfig` class. Adding it would help a lot. (I only found it because I thought that it is not documented at all and wanted to check my claim by searching for "is_decoder" in the source code.) Again, thank you for your work, πŸ€— is what the NLP community needed for quite some time! **UPD:** more reasons to use a different attention mask (not for seq2seq though): XLNet-like or ULM-like pre-training<|||||>> I will soon release a bert2bert notebook that will show how to do this. You can also take a look at this: > #4647 > > Maybe it helps. Hi @patrickvonplaten , Thanks for the clarification on this topic and for the great work you've been doing on those seq2seq models. Is this notebook you mentioned here already available? Thanks.<|||||>Yeah, the code is ready in this PR: https://github.com/huggingface/transformers/tree/more_general_trainer_metric . The script to train an Encoder-Decoder model can be accessed here: https://github.com/huggingface/transformers/blob/more_general_trainer_metric/src/transformers/bert_encoder_decoder_summary.py And in order for the script to work, you need to use this Trainer class: https://github.com/huggingface/transformers/blob/more_general_trainer_metric/src/transformers/trainer.py I'm currently training the model myself. When the results are decent, I will publish a little notebook.<|||||>Hi @patrickvonplaten, thanks for sharing the scripts. However, the second link for training an encoder-decoder model is not found. Could you please upload this script? Thanks.<|||||>You <|||||>Sorry, I deleted the second link. You can see all the necessary code on this model page: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#bert2bert-summarization-with-%F0%9F%A4%97-encoderdecoder-framework<|||||>Thanks for sharing this, Patrick. <|||||>I am trying to implement an encoder-decoder with BART, but I have no idea how to do so, and I need to fine-tune the decoder model, so eventually I need to train my decoder model. I am trying to use the `EncoderDecoder` model in my script but I don't know how to access the decoder model for training it. Instead of using the module, I initialized `BartModel` as the encoder, whereas for the decoder I used `BartForConditionalGeneration`. Here's the model I initialized ``` encoder = BartModel.from_pretrained('facebook/bart-base') decoder = BartForConditionalGeneration.from_pretrained('facebook/bart-base') ``` And here's how I am using it.
``` for epoch in range(epochs): #------------------------training------------------------ decoder.train() losses = 0 times = 0 print('\n'+'-'*20 + f'epoch {epoch}' + '-'*20) for batch in tqdm(train_dataloader): batch = [item.to(device) for item in batch] encoder_input, decoder_input, mask_encoder_input, mask_decoder_input = batch lhs,hs,att,_,_,_ = encoder(input_ids = encoder_input, attention_mask = mask_encoder_input,output_attentions = True,output_hidden_states = True) past = (lhs,hs,att) logits,_,_,_= decoder(input_ids = decoder_input, attention_mask = mask_decoder_input, encoder_outputs = past) out = logits[:, :-1].contiguous() target = decoder_input[:, 1:].contiguous() target_mask = mask_decoder_input[:, 1:].contiguous() loss = util.sequence_cross_entropy_with_logits(out, target, target_mask, average="token") loss.backward() losses += loss.item() times += 1 update_count += 1 if update_count % num_gradients_accumulation == num_gradients_accumulation - 1: optimizer.step() scheduler.step() optimizer.zero_grad() ``` I am calculating perplexity from the loss, and I am getting a perplexity score of 1000+, which is bad. I would like to know whats my model is lacking and is it possible that I could use `EncoderDecoder` module<|||||>@AmbiTyga from what I know, BART is already a encoder-decoder model, with a BERT as a encoder and a GPT as a decoder. So you are encoding-decoding in encoder and encoding-decoding in decoder, which I don t think is a good idea. For the moment EncoderDecoderModel supports only BERT.<|||||>@iliemihai So can you refer me how to use BART in such cases like I have coded above?<|||||>@patrickvonplaten is Bert the only model that is supported as a decoder? I was hoping to train a universal model so wanted to use xlm-roberta (xlmr) as both encoder and decoder; Is this possible given the current EncoderDecoder framework? I know bert has a multilingual checkpoint but performance-wise an xlm-roberta model should be better. I noticed the notebook https://github.com/huggingface/transformers/blob/16e38940bd7d2345afc82df11706ee9b16aa9d28/model_cards/patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16/README.md does roberta2roberta; is this same code applicable to xlm-roberta? I tried following the same template with xlmr but I noticed that the output is the same regardless of the input - the is_decoder flag is properly set to True in the decoder but this issue persists.<|||||>Hey @spookypineapple - good question! Here is the PR that adds XLM-Roberta to the EncoderDecoder models: https://github.com/huggingface/transformers/pull/6878 will not make it to 3.1.0 but should be available on master in ~1,2 days<|||||>Im pulling from master so I should get at least the neccessary code artifacts to get bert2bert to work. However Im seeing (for a bert2bert setup using bert-base-multilingual-cased) that the output of the decoder remains unchanged regardless of the input to the encoder; this behavior seems to persist with training... 
The code im using to initialize the EncoderDecoder model is as follows: ``` import torch from transformers import ( MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING, AdamW, get_linear_schedule_with_warmup, AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM, EncoderDecoderModel ) model_type = 'bert' model_name = config_name = tokenizer_name = "bert-base-multilingual-cased" tokenizer = AutoTokenizer.from_pretrained( tokenizer_name, do_lower_case=False, cache_dir=None, force_download=False ) config = AutoConfig.from_pretrained( config_name, cache_dir=None, force_download=False ) model = EncoderDecoderModel.from_encoder_decoder_pretrained( model_name, # encoder model_name, # decoder from_tf=bool(".ckpt" in model_name), config=config, cache_dir=None, ) if model_type in ['bert']: tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model.config.decoder_start_token_id = tokenizer.bos_token_id model.config.eos_token_id = tokenizer.eos_token_id model.tie_weights() model.decoder.config.use_cache = False input_str1 = "this is the first example" input_str2 = "and heres another example for you" input_encodings1 = tokenizer.encode_plus(input_str1, padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_encodings2 = tokenizer.encode_plus(input_str2, padding="max_length", truncation=True, max_length=512, return_tensors="pt") gen1 = model.generate(input_ids=input_encodings1.input_ids, attention_mask=input_encodings1.attention_mask, max_length=25, decoder_start_token_id=model.config.decoder_start_token_id ) gen2 = model.generate(input_ids=input_encodings2.input_ids, attention_mask=input_encodings2.attention_mask, max_length=25, decoder_start_token_id=model.config.decoder_start_token_id ) dec1 = [tokenizer.decode(ids, skip_special_tokens=True) for ids in gen1] dec2 = [tokenizer.decode(ids, skip_special_tokens=True) for ids in gen2] print(dec1) print(dec2) # the outputs are identical even though the inputs are different ``` <|||||>Hey @spookypineapple, A couple of things regarding your code: 1) `.from_encoder_decoder_pretrained()` usually does not need a config. The way you use this function with a `conifg` inserted means that you are overwriting the encoder config, which is not recommended when loading an encoder decoder model from two pretrained "bert-base-multilingual-cased" checkpoints. Also `from_tf` will also only apply to the encoder. You would additionally have to pass `decoder_from_tf`. 2) An encoder decoder model initialized from two pretrained "bert-base-multilingual-cased" checkpoints needs to be fine-tuned before any meaningful results can be seen. => You might want to check these model cards of bert2bert which explain how to fine-tune such an encoder decoder model: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16 Hope this helps! <|||||>> Hey @spookypineapple, > > A couple of things regarding your code: > > 1. `.from_encoder_decoder_pretrained()` usually does not need a config. The way you use this function with a `conifg` inserted means that you are overwriting the encoder config, which is not recommended when loading an encoder decoder model from two pretrained "bert-base-multilingual-cased" checkpoints. Also `from_tf` will also only apply to the encoder. You would additionally have to pass `decoder_from_tf`. > 2. An encoder decoder model initialized from two pretrained "bert-base-multilingual-cased" checkpoints needs to be fine-tuned before any meaningful results can be seen. 
> > => You might want to check these model cards of bert2bert which explain how to fine-tune such an encoder decoder model: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16 > > Hope this helps! It does help indeed! Thankyou @patrickvonplaten <|||||>@patrickvonplaten can you please share a tutorial/notebook on training the encoder-decoder model for machine translation? <|||||>@patrickvonplaten can you create a notebook on how to use custom dataset to fine tune bert2bert models ? <|||||>> Hey @Guitaricet :-) , > > First, at the moment only Bert2Bert works with the encoder-decoder framework. Also, if you use Bert as a decoder you will always use a causal mask. At the moment I cannot think of an encoder-decoder in which the decoder does not use a causal mask, so I don't see a reason why one would want to disable it. Can you give me an example where the decoder should not have a causal mask? > Do you mean auto-regressive language generation by "generating words one by one"? Auto-regressive language modeling always requires a causal mask... > > 1. Currently, only Bert works as a decoder. We might add GPT2 in a couple of weeks. Note that no model has `cross-attention` layers if it is not already an encoder-decoder model (like Bart or T5) and in this case it does not make sense to use the encoder-decoder wrapper. The model is initialized with random weights for the cross attention layers which will have to be fine-tuned. I agree, that this should be made clearer in the documentation! I would like to disable causal masking to use it in [DETR](https://arxiv.org/abs/2005.12872), which uses parallel decoding... But this not seem possible at the moment. In my opinion, an option to disable causal masking in the decoder would be useful<|||||>> Yeah, the code is ready in this PR: https://github.com/huggingface/transformers/tree/more_general_trainer_metric . The script to train an Encoder-Decoder model can be assessed here: https://github.com/huggingface/transformers/blob/more_general_trainer_metric/src/transformers/bert_encoder_decoder_summary.py > > And in order for the script to work, you need to use this Trainer class: https://github.com/huggingface/transformers/blob/more_general_trainer_metric/src/transformers/trainer.py > > I'm currently training the model myself. When the results are decent, I will publish a little notebook. @patrickvonplaten , none of the links is working. Is it possible to fix them? <|||||>For BERT2BERT you can just use the `EncoderDecoderModel` class as shown here: https://huggingface.co/docs/transformers/v4.21.3/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel.forward.example This example shows how to instantiate a Bert2Bert model which you can then train on any seq2seq task you want, e.g. summarization: https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization (you just need to slighly adapt the example, or pre-create a BERT2BERT and use it as a checkpoint)<|||||>Thanks! Btw, I just submitted an issue and tagged you. There's some problem when using EncoderDecoderModel with the most recent transformers versions.
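To make the warm-starting recipe discussed above concrete, here is a minimal, hedged sketch of instantiating and fine-tuning a Bert2Bert model with the `EncoderDecoderModel` class. It is not the official notebook code, and exact argument names (e.g. `labels` vs. the older `lm_labels`) have changed across transformers releases, so treat it as an outline:

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Warm-start both encoder and decoder from a pretrained BERT checkpoint
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# Special tokens the decoder needs for label shifting and generation
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# One training step: encoder input is the source text, labels are the target text
inputs = tokenizer("some long source document", return_tensors="pt",
                   truncation=True, padding="max_length", max_length=64)
targets = tokenizer("a short summary", return_tensors="pt",
                    truncation=True, padding="max_length", max_length=16)

# In practice, replace pad token ids in the labels with -100 so they are ignored by the loss
outputs = model(input_ids=inputs.input_ids,
                attention_mask=inputs.attention_mask,
                labels=targets.input_ids)
outputs.loss.backward()  # plug this into your optimizer / Trainer loop
```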
transformers
4,516
closed
sometimes loss starts with nan when running "Quick tour TF 2.0 training and PyTorch interoperability" script
# πŸ› Bug ## Information The problem arises when using: * [x] the official example scripts: (give details below) "Quick tour TF 2.0 training and PyTorch interoperability" ## To reproduce Steps to reproduce the behavior: 1.run the code: ``` import tensorflow as tf import tensorflow_datasets from transformers import * # Load dataset, tokenizer, model from pretrained model/vocabulary tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') data = tensorflow_datasets.load('glue/mrpc') # Prepare dataset for GLUE as a tf.data.Dataset instance train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc') valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc') train_dataset = train_dataset.shuffle(100).batch(32).repeat(10) valid_dataset = valid_dataset.batch(64) # Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=loss, metrics=[metric]) # Train and evaluate using tf.keras.Model.fit() history = model.fit(train_dataset, epochs=10, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7) ``` 2. sometimes the loss is nan and sometimes it works well: ### nan case: Train for 115 steps, validate for 7 steps Epoch 1/10 115/115 [==============================] - 863s 8s/step - loss: nan - accuracy: 0.3255 - val_loss: nan - val_accuracy: 0.3162 Epoch 2/10 115/115 [==============================] - 854s 7s/step - loss: nan - accuracy: 0.3255 - val_loss: nan - val_accuracy: 0.3162 ### normal case: Train for 115 steps, validate for 7 steps Epoch 1/10 27/115 [======>.......................] - ETA: 11:37 - loss: 0.6249 - accuracy: 0.6609 ## Environment info - `transformers` version: 2.9.1 - Platform:ubuntu 16.04 - Python version:3.6 - PyTorch version (GPU?):1.2.0 GPU - Tensorflow version (GPU?):2.0.0 gpu - Using GPU in script?:yes - Using distributed or parallel set-up in script?:no
05-22-2020 09:33:35
05-22-2020 09:33:35
I ran your script five times and I cannot reproduce this, so it's very hard to debug. Can you clear the cache created by convert_examples and try again? And if that does not work, try updating tensorflow and tensorflow_datasets to the latest versions.<|||||>Thanks for your attention. As I said "sometimes", the script works well now. I did nothing except restart Spyder several times. I don't know why, but it's back to normal<|||||>That's good to hear! It might have been a caching issue somewhere with Spyder. (Personally I can recommend PyCharm.)<|||||>Yes, I guess so. Thanks for your recommendation. I have been hesitating whether to abandon Spyder completely. PyCharm is powerful but a little too heavy...
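Since the NaN losses here were intermittent, one lightweight way to catch them early (an illustrative suggestion, not something used in this thread) is Keras's built-in `TerminateOnNaN` callback, reusing the `model`, `train_dataset`, and `valid_dataset` from the script above:

```python
import tensorflow as tf

# Stop training as soon as the loss becomes NaN so the offending state can be inspected
nan_guard = tf.keras.callbacks.TerminateOnNaN()

history = model.fit(train_dataset,
                    epochs=10,
                    steps_per_epoch=115,
                    validation_data=valid_dataset,
                    validation_steps=7,
                    callbacks=[nan_guard])
```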
transformers
4,515
closed
Allow BatchEncoding to be pickled
Overrides the `__getstate__` & `__setstate__` methods to (respectively) export the content of the underlying `data` dictionary and - if defined - the content of `encodings`. Unit tests added to cover the serialization & deserialization of all the exported properties. Blocked until a new release of **tokenizers**
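For illustration, the pickling hooks described here generally follow this pattern (a simplified sketch with a toy class, not the actual `BatchEncoding` implementation):

```python
import pickle


class BatchEncodingLike:
    """Toy container mimicking the data/encodings split described above."""

    def __init__(self, data, encodings=None):
        self.data = data              # plain dict of lists/tensors
        self.encodings = encodings    # optional Rust-backed objects

    def __getstate__(self):
        # Export only the picklable content
        state = {"data": self.data}
        if self.encodings is not None:
            state["encodings"] = self.encodings
        return state

    def __setstate__(self, state):
        # Restore the attributes from the exported dict
        self.data = state["data"]
        self.encodings = state.get("encodings")


enc = BatchEncodingLike({"input_ids": [[101, 102]]})
restored = pickle.loads(pickle.dumps(enc))
assert restored.data == {"input_ids": [[101, 102]]}
```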
05-22-2020 09:21:07
05-22-2020 09:21:07
Issue seems related to pretokenized input. Checking with @n1t0 what changed in-between πŸ‘ <|||||>Indeed, I think we'll have to merge https://github.com/huggingface/transformers/pull/4510 before this.<|||||>Closing in favor of #5039
transformers
4,514
closed
❓ How Linear layer difference between TF2 and PT are handled ?
# ❓ Questions & Help There is a difference between TF2 and PyTorch in how the weights of a linear layer are stored. As shown in [this Colab notebook](https://colab.research.google.com/drive/1zLWONO3wo09-PImo0kg1bwARsh2jKnpQ?usp=sharing), in order to get the same output for both TF2 and PT when using `torch.nn.Linear` and `tf.keras.layers.Dense`, we need to transpose the weights in PT. **I couldn't find where this is handled in this library** (when loading a PyTorch checkpoint into TF2, for example). Can someone point me to where and how this is handled?
05-22-2020 07:46:46
05-22-2020 07:46:46
[Here's an example when loading a TF BERT model in PyTorch.](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L122)
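For readers who want to see the transposition in isolation, here is a small sketch (illustrative only, not the library's conversion code) showing that copying `torch.nn.Linear` weights into `tf.keras.layers.Dense` requires a transpose:

```python
import numpy as np
import torch
import tensorflow as tf

torch_linear = torch.nn.Linear(4, 3)   # weight shape: (out_features, in_features) = (3, 4)
dense = tf.keras.layers.Dense(3)
dense.build((None, 4))                 # kernel shape: (in_features, out_features) = (4, 3)

# Copy the PyTorch parameters into Keras: the kernel must be transposed
dense.set_weights([
    torch_linear.weight.detach().numpy().T,   # (3, 4) -> (4, 3)
    torch_linear.bias.detach().numpy(),
])

x = np.random.rand(2, 4).astype(np.float32)
out_pt = torch_linear(torch.from_numpy(x)).detach().numpy()
out_tf = dense(tf.constant(x)).numpy()
print(np.allclose(out_pt, out_tf, atol=1e-6))  # True: same outputs once the weights are transposed
```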
transformers
4,513
closed
Couldn't reach server GPT-2
I have tried to use gpt2 on Ubuntu with Vagrant. This is the code: ` import torch from lm_scorer.models.auto import AutoLMScorer as LMScorer scorer = LMScorer.from_pretrained("gpt2") ` I get this error: > AH01215: OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json' to download pretrained model configuration file. It worked before, but I had to reset my virtual environment and now it no longer works. I think it has something to do with the Apache configuration. Also, it works in the terminal but not in the Python script.
05-22-2020 07:24:02
05-22-2020 07:24:02
I also created this stack overflow question here: https://stackoverflow.com/questions/61944526/oserror-couldnt-reach-server-gpt2-config-json<|||||>It is odd that it works from the CLI but not from within your script. That does not make a lot of sense. Can you try this? ```python scorer = LMScorer.from_pretrained("gpt2", force_download=True) ```<|||||>Thanks, Bram. I tried that code but I get the same error. I believe it has something to do with 1) the apache2 or ubuntu config and what I've allowed it to connect to, or 2) some download I am missing, because it has worked previously. I tried downloading the gpt2 config, model.bin, and vocab file but I would either get the same error as above or get this error: > ValueError: Unrecognized model name.Can be one of: gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2:<|||||>If you are sure that you have manually downloaded all files to the correct folder, you can disable the online look-up. This is useful if you have, as you say, network restrictions. ```python scorer = LMScorer.from_pretrained("gpt2", local_files_only=True) ```<|||||>Still nothing yet. 1. I downloaded everything from https://huggingface.co/gpt2#list-files 2. Added the files to the same directory the script is in 3. Changed the names (e.g. gpt2-config.json to config.json) Is there anything I am missing? <|||||>This is likely to be a problem with the LMScorer rather than with this transformers library. Looking at the source code, it does not pass the keyword arguments down to the model init. I suggest that you make an issue over at the library that you used. https://github.com/simonepri/lm-scorer/blob/master/lm_scorer/models/gpt2.py<|||||>Still nothing. I believe it is my apache2 configuration for access, but I haven't figured out how yet. <|||||>Closing this. See continuation here: https://github.com/simonepri/lm-scorer/issues/8#event-3386811426
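For anyone hitting the same connectivity problem with transformers directly (rather than through lm-scorer), here is a hedged sketch of fully offline loading. It assumes the files from the model page were saved into a hypothetical local folder and renamed to `config.json`, `pytorch_model.bin`, `vocab.json`, and `merges.txt`:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

local_dir = "./gpt2-local"  # hypothetical folder containing the manually downloaded files

tokenizer = GPT2Tokenizer.from_pretrained(local_dir)  # needs vocab.json + merges.txt
model = GPT2LMHeadModel.from_pretrained(local_dir)    # needs config.json + pytorch_model.bin

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
outputs = model(**inputs)
print(outputs[0].shape)  # logits: (batch, sequence_length, vocab_size)
```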
transformers
4,512
closed
ValueError: TracedModules don't support parameter sharing between modules
# πŸ› Bug ## Information Language I am using the model on English: ## To reproduce Steps to reproduce the behavior: 1.run the "Quick tour" code: ``` import torch from transformers import * # Transformers has a unified API # for 10 transformer architectures and 30 pretrained weights. # Model | Tokenizer | Pretrained weights shortcut MODELS = [(BertModel, BertTokenizer, 'bert-base-uncased'), (OpenAIGPTModel, OpenAIGPTTokenizer, 'openai-gpt'), (GPT2Model, GPT2Tokenizer, 'gpt2'), (CTRLModel, CTRLTokenizer, 'ctrl'), (TransfoXLModel, TransfoXLTokenizer, 'transfo-xl-wt103'), (XLNetModel, XLNetTokenizer, 'xlnet-base-cased'), (XLMModel, XLMTokenizer, 'xlm-mlm-enfr-1024'), (DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased'), (RobertaModel, RobertaTokenizer, 'roberta-base'), (XLMRobertaModel, XLMRobertaTokenizer, 'xlm-roberta-base'), ] # To use TensorFlow 2.0 versions of the models, simply prefix the class names with 'TF', e.g. `TFRobertaModel` is the TF 2.0 counterpart of the PyTorch model `RobertaModel` # Let's encode some text in a sequence of hidden-states using each model: for model_class, tokenizer_class, pretrained_weights in MODELS: # Load pretrained model/tokenizer tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights) # Encode text input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)]) # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model. with torch.no_grad(): last_hidden_states = model(input_ids)[0] # Models outputs are now tuples # Each architecture is provided with several class for fine-tuning on down-stream tasks, e.g. BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction, BertForSequenceClassification, BertForTokenClassification, BertForQuestionAnswering] # All the classes for an architecture can be initiated from pretrained weights for this architecture # Note that additional weights added for fine-tuning are only initialized # and need to be trained on the down-stream task pretrained_weights = 'bert-base-uncased' tokenizer = BertTokenizer.from_pretrained(pretrained_weights) for model_class in BERT_MODEL_CLASSES: # Load pretrained model/tokenizer model = model_class.from_pretrained(pretrained_weights) # Models can return full list of hidden-states & attentions weights at each layer model = model_class.from_pretrained(pretrained_weights, output_hidden_states=True, output_attentions=True) input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")]) all_hidden_states, all_attentions = model(input_ids)[-2:] # Models are compatible with Torchscript model = model_class.from_pretrained(pretrained_weights, torchscript=True) traced_model = torch.jit.trace(model, (input_ids,)) # Simple serialization for models and tokenizers model.save_pretrained('./directory/to/save/') # save model = model_class.from_pretrained('./directory/to/save/') # re-load tokenizer.save_pretrained('./directory/to/save/') # save tokenizer = BertTokenizer.from_pretrained('./directory/to/save/') # re-load # SOTA examples for GLUE, SQUAD, text generation... ``` 2. 
Encountered the bug: File "/home/**/anaconda3/envs/dl/lib/python3.6/site-packages/torch/jit/__init__.py", line 1860, in check_unique raise ValueError("TracedModules don't support parameter sharing between modules") ValueError: TracedModules don't support parameter sharing between modules ## Environment info - `transformers` version:2.9.1 - Platform: ubuntu 16.04 - Python version: 3.6 - PyTorch version (GPU?):1.2.0 GPU - Tensorflow version (GPU?):2.0.0 GPU - Using GPU in script?:yes - Using distributed or parallel set-up in script?:no
05-22-2020 07:06:09
05-22-2020 07:06:09
Can you give more information? This is too brief. Please post the full error that you get (also called error or stack trace) and _do not_ post it as a screenshot but use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks) instead.<|||||>As I said: _use code blocks_ please. It is unclear what your comments are and what the code is. _Use those code blocks_ - it's super easy. Also, in your original post you used PyTorch, and now you post TF code. You can't torch.jit a TF model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
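For reference, here is a minimal sketch (not tied to this particular report, and assuming a reasonably recent torch/transformers install) of tracing a BERT model with the `torchscript=True` flag, which clones the tied embedding weights that otherwise trigger the parameter-sharing error:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# torchscript=True decouples the tied input/output embeddings so tracing can work
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)])

traced_model = torch.jit.trace(model, (input_ids,))
torch.jit.save(traced_model, "traced_bert.pt")

loaded = torch.jit.load("traced_bert.pt")
last_hidden_state = loaded(input_ids)[0]
print(last_hidden_state.shape)
```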
transformers
4,511
closed
AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'
# πŸ› Bug ## Information Traceback (most recent call last): File "F:/Kaggle/Hug/Colab/main.py", line 105, in <module> trainer.train() File "c:\programdata\anaconda3\lib\site-packages\transformers\trainer.py", line 359, in train self.tb_writer.add_hparams(self.args.to_sanitized_dict(), metric_dict={}) AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'
05-22-2020 02:25:26
05-22-2020 02:25:26
Please fill out the template. It is there for a reason. It isn't even clear whether you use your own scripts or ours. _Fill out the template._ See this question, which might help: https://github.com/lanpa/tensorboardX/issues/502<|||||>hello when i use `python run_language_modeling.py \ --output_dir=chinese_finetuned_lm \ --model_type=bert \ --model_name_or_path=bert-base-chinese \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm ` i find the same error `Traceback (most recent call last): File "run_language_modeling.py", line 281, in <module> main() File "run_language_modeling.py", line 245, in main trainer.train(model_path=model_path) File "/home/zhongqi/anaconda3/envs/transformers_bert/lib/python3.6/site-packages/transformers/trainer.py", line 418, in train self.tb_writer.add_hparams(self.args.to_sanitized_dict(), metric_dict={}) AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'` how to deal with it? and my protobuf is 3.12.1<|||||>@Mozen Can you update to the latest transformers? Many things have changed - we now use a custom trainer class for the example scripts. Let me know whether that helps!<|||||>@BramVanroy i used is already the latest version, and my torch version is 1.1.0, Related to this?<|||||>> hello when i use > `python run_language_modeling.py \ --output_dir=chinese_finetuned_lm \ --model_type=bert \ --model_name_or_path=bert-base-chinese \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm ` > i find the same error > `Traceback (most recent call last): File "run_language_modeling.py", line 281, in <module> main() File "run_language_modeling.py", line 245, in main trainer.train(model_path=model_path) File "/home/zhongqi/anaconda3/envs/transformers_bert/lib/python3.6/site-packages/transformers/trainer.py", line 418, in train self.tb_writer.add_hparams(self.args.to_sanitized_dict(), metric_dict={}) AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'` > how to deal with it? and my protobuf is 3.12.1 I found 'add_hparams'` only exsiting in torch >1.3.1, so I update the version of torch, the problem is solved! Moreover, when torch >1.3.1, you should update the version of cuda, at least >= cuda 9.2.<|||||>> @BramVanroy i used is already the latest version, and my torch version is 1.1.0, Related to this? should update torch >1.3.1<|||||>@zhuqunxi OK thanks<|||||>> @Mozen Can you update to the latest transformers? Many things have changed - we now use a custom trainer class for the example scripts. Let me know whether that helps! Thanks for helping me. I have fixed this problem by myself.<|||||>If you had followed the template, and posted all the requested information such as your environment, this would have been solved much more quickly.<|||||>> If you had followed the template, and posted all the requested information such as your environment, this would have been solved much more quickly. Awesome,thanks for your advice. I really need to learn how to ask questions.<|||||>> > @BramVanroy i used is already the latest version, and my torch version is 1.1.0, Related to this? > > should update torch >1.3.1 Upgrading `torch` should not be the ideal solution. The issue arises because of differences in `SummaryWriter` from `torch.utils.tensorboard` and `tensorboardX` in [transformers/trainer.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L46). 
With following environment: ``` protobuf 3.12.1 tensorboard 2.2.1 tensorboard-plugin-wit 1.6.0.post3 tensorboardX 2.0+022f060 torch 1.1.0 transformers 2.10.0 ``` it is easy to see: ``` >>> from tensorboardX import SummaryWriter as SummaryWriter_tbX >>> from torch.utils.tensorboard import SummaryWriter >>> >>> writer = SummaryWriter_tbX() >>> writer.add_hparams({'lr': 1e-5, 'bsize': 20, 'n_hidden': 100}, {'accuracy': 0, 'loss': 0}) >>> >>> writer = SummaryWriter() >>> writer.add_hparams({'lr': 1e-5, 'bsize': 20, 'n_hidden': 100}, {'accuracy': 0, 'loss': 0}) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'SummaryWriter' object has no attribute 'add_hparams' ``` (minimal code from: https://github.com/lanpa/tensorboardX/issues/502#issue-486036833) Also, passing `tb_writer=None` explicitly to `Trainer` does not ignore using the tensorboard because of [this](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L202). I think it might be more convenient if user is allowed the option to use/ignore tensorboard and further, `tensorboardX` should probably be first in `try:except` block when importing the `SummaryWriter` as it is easier to upgrade it than `torch` (@BramVanroy ).<|||||>@suamin The `Trainer` requires torch 1.3.1+, we'll make sure to mention this in the README.<|||||>Hello, I got this error even if I have a torch version = 1.5.1, I don't know why I0717 09:08:45.343556 139953119131392 trainer.py:208] You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface. Traceback (most recent call last): File "run_ner.py", line 304, in <module> main() File "run_ner.py", line 229, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 429, in train self.tb_writer.add_hparams(self.args.to_sanitized_dict(), metric_dict={}) AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'<|||||>see the source code: trainer.py you will find that using "from torch.utils.tensorboard import SummaryWriter" first, if not in current torch, then use "from tensorboardX import SummaryWriter". So, U need check your pytorch version. my torch 1.2.0, has "torch.utils.tensorboard.SummaryWriter", but it didn't has add_hparams. So you should update your pytorch. Also, U can change "trainer.py" source code, force import SummaryWriter from tensorboardX <|||||>I fixed the error, thank you !
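To illustrate the import-fallback pattern being discussed here (a sketch of the general idea, not the exact `trainer.py` code), one can guard the tensorboard import and fail over to `tensorboardX`:

```python
try:
    # torch >= 1.3.1 ships a SummaryWriter that supports add_hparams
    from torch.utils.tensorboard import SummaryWriter
except ImportError:
    # fall back to the standalone package otherwise
    from tensorboardX import SummaryWriter

writer = SummaryWriter(log_dir="runs/debug")
if not hasattr(writer, "add_hparams"):
    raise RuntimeError("This SummaryWriter has no add_hparams; upgrade torch to >= 1.3.1 or use tensorboardX")

writer.add_hparams({"lr": 1e-5, "bsize": 20}, metric_dict={"accuracy": 0.0})
writer.close()
```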
transformers
4,510
closed
[HUGE] Refactoring tokenizers backend - padding - truncation - pre-tokenized pipeline - fast tokenizers - tests
Fix #4015 Edit @thomwolf: I morphed this in a large refactoring of the tokenizer code and test to make it more flexible and have a better API. Here is a summary of the changes. ## Breaking change There is no breaking change in the user-facing methods (`encode`, `encode_plus`, `batch_encode_plus`, `tokenize`, `convert_XXX`). There is a breaking change in the internal method`prepare_for_model` which is now a private method `_prepare_for_model` with a simplified signature. ## A new main user-facing method: `__call__` i.e. `model_input = tokenizer(text, **kwargs)` The extended encoding methods `encode_plus` and `batch_encode_plus` methods had names that could be intimidating for first-time users. A new main entry point is created as `tokenizer.__call__` which wraps both methods. You can feed `__call__` with single examples, a pair of sentence to encode together or batches of single/pair sentences. The signature of `__call__` is also a better fit for the πŸ€—nlp library when it comes to batches of pairs of sequences since the first and second elements in pair of sentences are supplied as separate arguments (see below) instead of a zipped list of pairs like in `batch_encode_plus`. While all the previously provided methods (`encode`, `encode_plus`, `batch_encode_plus`, `tokenize`, `convert_XXX`) are still supported without breaking changes, `__call__` is now the recommended way to encode all types of inputs when `tokenizer.encode` (which only return the list of input indices for a single sentence) is not enough i.e. for every case beside simple demo purposes. Here is how you should use this new entry point for encoding text in all the main use-cases: ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-cased') # 1. When you encode "a single sentence" encoded_input = tokenizer("Hello I'm a single sentence") # { 'input_ids': [101, 8667, 146, 112, 182, 170, 1423, 5650, 102], # 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0], # 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]} # 2. When you encode "a pair of sentences in a single input" encoded_input = tokenizer("How old are you?", "I'm 6 years old") # { 'input_ids': [101, 1731, 1385, 1132, 1128, 136, 102, 146, 112, 182, 127, 1201, 1385, 102], # 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1], # 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} # 3. 
When you encode "a batch of single sentences" batch_sentences = ["Hello I'm a single sentence", "And another sentence", "And the very very last one"] encoded_input = tokenizer(batch_sentences) # { 'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102], # [101, 1262, 1330, 5650, 102], # [101, 1262, 1103, 1304, 1304, 1314, 1141, 102]], # 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0], # [0, 0, 0, 0, 0], # [0, 0, 0, 0, 0, 0, 0, 0]], # 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1], # [1, 1, 1, 1, 1], # [1, 1, 1, 1, 1, 1, 1, 1]]} # You can batch (to max sequence size) and truncate (to max model length) # with `padding`and `truncation` (see more details in the next section on padding/truncation) encoded_input = tokenizer(batch_sentences, padding=True, truncation=True) # { 'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102], # [101, 1262, 1330, 5650, 102, 0, 0, 0, 0], # [101, 1262, 1103, 1304, 1304, 1314, 1141, 102, 0]], # 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0], # [0, 0, 0, 0, 0, 0, 0, 0, 0], # [0, 0, 0, 0, 0, 0, 0, 0, 0]], # 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1], # [1, 1, 1, 1, 1, 0, 0, 0, 0], # [1, 1, 1, 1, 1, 1, 1, 1, 0]]} # 4. When you encode "a batch of pair of sentences" batch_of_second_sentences = ["I'm a sentence that goes with the first sentence", "And I should be encoded with the second sentence", "And I go with the very last one"] encoded_input = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation=True) # { 'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102, 146, 112, 182, 170, 5650, 1115, 2947, 1114, 1103, 1148, 5650, 102], # [101, 1262, 1330, 5650, 102, 1262, 146, 1431, 1129, 12544, 1114, 1103, 1248, 5650, 102, 0, 0, 0, 0, 0, 0], # [101, 1262, 1103, 1304, 1304, 1314, 1141, 102, 1262, 146, 1301, 1114, 1103, 1304, 1314, 1141, 102, 0, 0, 0, 0]], # 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0], # [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]], # 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0], # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]]} ``` ## Padding/truncation The padding and truncation logic was simplified and improved to cover all the major uses-cases with the simplest possible API. Here is how to do the two most common use-cases for truncation/padding: ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-cased') batch_sentences = ["Hello I'm a single sentence", "And another sentence", "And the very very last one"] # 1. No truncation and no padding encoded_input = tokenizer(batch_sentences) # 2. Pad to the max sequence length inside the provided batch # while truncating to the max input length acceptable by the model encoded_input = tokenizer(batch_sentences, truncation=True, padding=True) ``` The new API for padding and truncation uses three arguments to the encoding methods: `padding`, `truncation` and `max_length`. This new way to specify padding/truncation is available in all the user-facing encoding methods: `encode`, `encode_plus`, `batch_ecode_plus` and the newly provided `__call__`. All the previously provided ways to do padding/truncation (`truncation_strategy`, `max_length`, `pad_to_max_length`) are still supported without breaking changes but we recommend to use the new API. 
Here are the details of all the possible inputs to `padding`, `truncation` and `max_length`: - `padding` to control the padding (can be provided with a boolean or a string for finer-grained control). `padding` accepts the following values: * `True` or `'longest'`: pad to the longest sequence in the batch (or no padding if only a single sequence if provided), * `'max_length'`: pad to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`) * `False` or `'do_not_pad'` (default): No padding (i.e. can output batch with sequences of uneven lengths) - `truncation` to control truncation (can be provided with a boolean or a string for finer-grained control). `truncation` accepts the following values: * `True` or `'only_first'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided, * `'only_second'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided, * `'longest_first'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided, * `False` or `'do_not_truncate'` (default): No truncation (i.e. can output batch with sequences length greater than the model max admissible input size) - `max_length` to control the length of the padding/truncation (integer or `None`). `max_length` accepts the following values: * `None` (default): This will use the predefined model max length if required by one of the truncation/padding parameters. If the model has no specific max input length (e.g. XLNet) truncation/padding to max length is deactivated. * `any integer value` (e.g. `42`): Use this specific maximum length value if required by one of the truncation/padding parameters. Now here is a table summarizing the recommended way to setup `padding` and `truncation` as well as the previously provided way to do it (still supported but not recommended) in all cases. If you use pair of inputs sequence in any of the following examples, you can replace `truncation=True` by a `STRATEGY` selected in `['only_first', 'only_second', 'longest_first']`, i.e. `truncation='only_second'` or `truncation= 'longest_first'` to control how both sequence in the pair are truncated as detailed just before the table. We don't include all these variants for the sake of keeping the table not too long. 
| Truncation | Padding | Recommended way | Previously provided (still supported but not recommended) |
| --- | --- | --- | --- |
| no truncation | no padding | `tokenizer(batch_sentences)` | `tokenizer.batch_encode_plus(batch_sentences)` |
| no truncation | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True)` or `tokenizer(batch_sentences, padding='longest')` | `tokenizer.batch_encode_plus(batch_sentences, pad_to_max_length=True)` |
| no truncation | padding to max model input length | `tokenizer(batch_sentences, padding='max_length')` | Not possible |
| no truncation | padding to specific length | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | Not possible |
| | | | |
| truncation to max model input length | no padding | `tokenizer(batch_sentences, truncation=True)` or `tokenizer(batch_sentences, truncation=STRATEGY)` | `tokenizer.batch_encode_plus(batch_sentences, max_length=tokenizer.max_len)` |
| truncation to max model input length | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True)` or `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` | Not possible |
| truncation to max model input length | padding to max model input length | `tokenizer(batch_sentences, padding='max_length', truncation=True)` or `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | `tokenizer.batch_encode_plus(batch_sentences, pad_to_max_length=True, max_length=tokenizer.max_len)` |
| truncation to max model input length | padding to specific length | Not possible | Not possible |
| | | | |
| truncation to specific length | no padding | `tokenizer(batch_sentences, truncation=True, max_length=42)` or `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | `tokenizer.batch_encode_plus(batch_sentences, max_length=42)` |
| truncation to specific length | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` or `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | Not possible |
| truncation to specific length | padding to max model input length | Not possible | Not possible |
| truncation to specific length | padding to specific length | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` or `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` | `tokenizer.batch_encode_plus(batch_sentences, pad_to_max_length=True, max_length=42)` |

## Pre-tokenized inputs The tokenizers now accept pre-tokenized inputs, i.e. inputs which are already sliced in words. The main reason for implementing a specific track for this type of inputs is to be able to use the fast mapping methods in `tokenizers` which provide character<=>token<=>words mappings. This can be very handy to easily compute labels and extract predictions for instance for Named-Entity-Recognition (NER) or Part-of-Speech tagging (POS tagging). If you want to use pre-tokenized inputs, just set `is_pretokenized=True` in any of the encoding methods.
Here are some examples: ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-cased') batch_sentences = [["Hello", "I'm", "a", "single", "sentence"], ["And", "another", "sentence"], ["And", "the", "very", "very", "last", "one"]] encoded_input = tokenizer(batch_sentences, is_pretokenized=True) # Pre-tokenized inputs can be used in all cases (single/pair/batch of single/batch of pairs) batch_of_second_sentences = ["I'm a sentence that goes with the first sentence".split(), "And I should be encoded with the second sentence".split(), "And I go with the very last one".split()] encoded_input = tokenizer(batch_sentences, batch_of_second_sentences, is_pretokenized=True, padding=True, truncation=True) ``` ## Verbose A new `verbose` argument is provided in all the encoding methods to silence all the warnings related to the length of the input as well as missing special tokens (e.g. missing padding or unknown token). ## Code organization `tokenization_utils.py` was starting to grow out of control and is now split into three files: - `tokenization_utils.py` hosts the code for the `PreTrainedTokenizers` - `tokenization_utils_fast.py` hosts the code for the `PreTrainedTokenizersFast` - `tokenization_utils_base.py` hosts the common methods for `PreTrainedTokenizers` and `PreTrainedTokenizersFast` (mostly the front API) in a newly created `PretrainedTokenizerBase` as well as all the common logic for special tokens (in `SpecialMixin`) and for the outputs of the encoding (in `BatchEncoding`). ## Full testing of fast tokenizers The fast tokenizers provided by the [tokenizers](https://github.com/huggingface/tokenizers) library are now fully tested and follow the same testing pipeline as the python (slow) tokenizers. Additional consistency tests have been added comparing the outputs of the fast and slow tokenizers under various conditions. ## TODO (following PRs) - Serialization for Fast tokenizers - Some edge cases for `add_tokens` on Fast tokenizers are not covered (spaces in tokens for byte-level and lower casing of the added tokens).
05-22-2020 00:18:23
05-22-2020 00:18:23
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=h1) Report > Merging [#4510](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935&el=desc) will **increase** coverage by `0.54%`. > The diff coverage is `92.01%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4510/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4510 +/- ## ========================================== + Coverage 76.89% 77.43% +0.54% ========================================== Files 128 130 +2 Lines 21854 21966 +112 ========================================== + Hits 16804 17010 +206 + Misses 5050 4956 -94 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.33% <ΓΈ> (ΓΈ)` | | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <ΓΈ> (+5.11%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `91.55% <91.55%> (ΓΈ)` | | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.59% <92.59%> (ΓΈ)` | | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.14% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <100.00%> (-0.80%)` | :arrow_down: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.08% <100.00%> (+0.25%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <100.00%> (+0.12%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.82% <100.00%> (+0.14%)` | :arrow_up: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (-2.39%)` | :arrow_down: | | ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=footer). Last update [9931f81...52a30d6](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok, I morphed this in a large refactoring of the tokenizer code and test to make it more flexible and have a better API. Here is a summary of the changes: - there is now a new main user-facing method: `__call__` i.e. model_input = tokenizer(text, **kwargs) which should be the main entry point for converting text in model inputs in the future, - the padding/truncation logic was refactored to cover more cases and make the most common-case more natural to access - pre-tokenized inputs (e.g. for NER or POS tagging) are handled a lot better - the backend code was refactored and split in several files. There is no breaking change in the user-facing methods (`encode`, `encode_plus`, `batch_encode_plus`, `tokenize`, `convert_XXX`). There is a breaking change in the internal method `prepare_for_model` which is now a private method `_prepare_for_model` with a simplified signature. All the details are given in the updated description of the PR. cc @LysandreJik @julien-c @patrickvonplaten @sshleifer @mfuntowicz @yjernite @srush @mariamabarham @lhoestq @VictorSanh @jplu @stefan-it @BramVanroy <|||||>I always love to see changes that improve the usability. I think using __call__ is one that can really make things easier for people to use. I also like pre-tokenized inputs a lot, since most of my data is pre-tokenized anyway. The changes are quite big to go over, so just checking: hopefully there are very clear error messages when users choose incompatible options when running the tokenization process. Making the tokenizer easier to use by having a single entry-point is great, but not so much if it can create more user mistakes that are not clear to the user. Clear error messages are key. A feature request, that I discussed with someone before but I don't remember who, is that it would be nice if the tokenizers could have an optional `device` argument. If we use return_tensors, it should return the tensors immediately on the given devices, e.g. ```python encoded_on_device = tokenizer(["Hello world.", "Who likes cookies?"], device=torch.device("cuda:0")) # or encoded_on_device = tokenizer(["Hello world.", "Who likes cookies?"], device=training_args.device) ``` Might even allow different type of values like device integers or "cuda" or "cpu" strings, and so on. Great job! Looking forward to using this in practice.<|||||>This is awesome!! Really great work and congratulations with this huge rework of the tokenizers!!! It is a bit too huge to go through everything but as far as I can see, the way to use the tokenizers now are way more accessible, mostly the pre-tokenizerd part. > A feature request, that I discussed with someone before but I don't remember who, is that it would be nice if the tokenizers could have an optional device argument. 
If we use return_tensors, it should return the tensors immediately on the given devices @BramVanroy I don't think it is the place here because it is not compliant with TF :) I think that the tokenizers should stay as much framework agnostic as possible otherwise if we start to say "if you want to use the tokenizer for PT do that, and for TF do this" it becomes more complicated to maintain. Of course this is only my opinion nothing more :)<|||||>> @BramVanroy I don't think it is the place here because it is not compliant with TF :) I think that the tokenizers should stay as much framework agnostic as possible otherwise if we start to say "if you want to use the tokenizer for PT do that, and for TF do this" it becomes more complicated to maintain. Of course this is only my opinion nothing more :) But that's what we do for `return_tensors` anyway, right?<|||||>> But that's what we do for return_tensors anyway, right? Exactly, and I think the same about this parameter, it adds complexity, while this can be easily done afterward.<|||||>> Exactly, and I think the same about this parameter, it adds complexity, while this can be easily done afterward. It is true that this can be done easily afterwards, but I suppose this is one of those cases: how much ease-of-use do you want your library to have while also taking into account the complexity of the library itself. My main argument is that from a usability perspective it would be awesome to be able to just provide your text to the tokenizer and you immediately get the encoded input back that you can feed to your model without having to do anything else. You then even do this: ```python out = model(**tokenizer(input_text, return_tensors="pt", device=device)) ``` This isn't pretty but it illustrates my point that it makes _usage_ very easy and also _easy to understand_. It removes a lot of booilerplate stuff that as a user you don't want to spend time on. On the other hand I definitely understand your point that this will lead to more complexity on the library's side. I'd be interested to hear other people's opinions about this.<|||||>> how much ease-of-use do you want your library to have while also taking into account the complexity of the library itself. This is definitely true, I fully agree :) And what you propose makes sense as well. I would be curious to hear other opinions too ^^<|||||>As seen with @thomwolf, will merge this PR as soon as the tests show all green. I'm updating all the library's docstrings to showcase best practices in a second PR.<|||||>Thanks for the update! I was writing my own tokenizer for some special inputs and saw the implementation for the `longest_first` truncation. Is there any reason why tokens are truncated one by one? It seems more efficient to truncate the longer one to the same length as the shorter one, and then truncate the same number of tokens from both of them. In this way, we need only 3 array slices in total, saving a lot of loops.
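A sketch of the alternative truncation strategy proposed in the last comment above (illustrative code for the idea, not the library's implementation): first trim the longer sequence down to the shorter one's length, then split the remaining overflow between both, instead of removing tokens one at a time:

```python
def truncate_pair_longest_first(ids_a, ids_b, max_length):
    """Truncate two token id lists so len(a) + len(b) <= max_length, using slices instead of a per-token loop."""
    overflow = len(ids_a) + len(ids_b) - max_length
    if overflow <= 0:
        return ids_a, ids_b
    long, short = (ids_a, ids_b) if len(ids_a) >= len(ids_b) else (ids_b, ids_a)
    # Step 1: shrink the longer sequence toward the shorter one's length
    cut = min(overflow, len(long) - len(short))
    long = long[: len(long) - cut]
    overflow -= cut
    # Step 2: remove the rest evenly from both (the longer one absorbs the odd token)
    long = long[: len(long) - (overflow - overflow // 2)]
    short = short[: len(short) - overflow // 2]
    return (long, short) if len(ids_a) >= len(ids_b) else (short, long)


print(truncate_pair_longest_first(list(range(10)), list(range(4)), 8))
# ([0, 1, 2, 3], [0, 1, 2, 3])
```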
transformers
4,509
closed
Add packaging to setup.py
Running `pip install -e transformers` and then `python -c "import transformers"` fails on a fresh Docker container with the error: ```bash ModuleNotFoundError: No module named 'packaging' Thu May 21 21:59:44 2020<stderr>:Traceback (most recent call last): Thu May 21 21:59:44 2020<stderr>: File "/.../", line 37, in <module> Thu May 21 21:59:44 2020<stderr>: from transformers import ( Thu May 21 21:59:44 2020<stderr>: File "/fsx/transformers/src/transformers/__init__.py", line 350, in <module> Thu May 21 21:59:44 2020<stderr>: from .trainer import Trainer, set_seed, torch_distributed_zero_first, EvalPrediction Thu May 21 21:59:44 2020<stderr>: File "/fsx/transformers/src/transformers/trainer.py", line 14, in <module> Thu May 21 21:59:44 2020<stderr>: from packaging import version ``` Looks like this dependency was recently added, so adding it to setup.py requirements.
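For reference, a hypothetical minimal `setup.py` illustrating the kind of change the title describes; the project name and extra dependencies below are placeholders, not the repository's real metadata.

```python
from setuptools import find_packages, setup

setup(
    name="my-transformers-fork",  # placeholder project name
    version="0.0.1",
    package_dir={"": "src"},
    packages=find_packages("src"),
    install_requires=[
        "packaging",  # needed because trainer.py does `from packaging import version`
        "numpy",
        "tqdm",
    ],
)
```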
05-21-2020 22:11:54
05-21-2020 22:11:54
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=h1) Report > Merging [#4509](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/865d4d595eefc8cc9cee58fec9179bd182be0e2e&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4509/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4509 +/- ## ========================================== - Coverage 77.90% 77.88% -0.02% ========================================== Files 123 123 Lines 20472 20472 ========================================== - Hits 15949 15945 -4 - Misses 4523 4527 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/hf\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=footer). Last update [865d4d5...ffd7187](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This dependency was recently added, but it was not intended. It was removed with https://github.com/huggingface/transformers/commit/10d72390c029b3f139639621fb9a3a264560e05b. Thanks for offering a fix!
transformers
4,508
closed
FillMaskPipeline crashes when executed on TPU
# πŸ› Bug ## Information I am following the tutorial in https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=QDNgPls7_l13 and running on Google Colab using the TPU. The Pipeline object creation works fine, but when I try to run it on the example sentence, the Colab runtime crashes immediately with an unclear cause and no error message. If I remove the TPU and do not install xla, the pipeline works fine. ## To reproduce Steps to reproduce the behavior: ```python3 !pip uninstall transformers !git clone https://github.com/huggingface/transformers !pip install ./transformers VERSION = "nightly" !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION from transformers import pipeline fill_mask = pipeline( "fill-mask", model="drive/My Drive/models/EsperBERTo/output/checkpoint-15000", tokenizer="drive/My Drive/models/EsperBERTo" ) fill_mask("La suno <mask>.") ``` Is anyone else experiencing this?
05-21-2020 21:11:55
05-21-2020 21:11:55
Hello! Pipelines are not tested on TPUs yet, unfortunately, and we have not made any effort to support them on that device. We may support them down the road, once TPU CI is more easily available.
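Until pipelines support TPUs, one workaround is to skip the pipeline and run the masked-LM head by hand so that device placement stays explicit. A hedged sketch on CPU follows; the checkpoint path is a placeholder, and on older releases `AutoModelWithLMHead` may be the class to use instead of `AutoModelForMaskedLM`.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_path = "path/to/EsperBERTo/checkpoint"  # placeholder for the local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForMaskedLM.from_pretrained(model_path)

inputs = tokenizer("La suno <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]

# Top-5 predictions for the masked position
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
print(tokenizer.convert_ids_to_tokens(logits[0, mask_pos].topk(5).indices.tolist()))
```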
transformers
4,507
closed
Hard-coded force_download in run_squad forces expensive community download
# πŸ› Bug ## Information Using a community-registered model (albert, squad2) I noticed that there's no real caching going on during predict/evaluate. In an application that invokes run_squad dozens to hundreds of times, this adds significantly to processing time. This is due to at least one of the two force_download hard-codings in the run_squad.py script. It would be best to promote the force_download option into the run_squad arguments and let the user override. I have tested by manually modifying the force_download to be False and caching does occur (I haven't tested dirty cache refetch). Model I am using (Bert, XLNet ...): Community-submitted Albert v2 xxlarge fine-tuned for SQuAD2 on Torch, https://huggingface.co/mfeb/albert-xxlarge-v2-squad2 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: * [ ] my own task or dataset: (give details below) ## To reproduce Use run_squad.py more than once on a community-installed model and see the fetch go to a different /tmp copy for each invocation. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Expect force_download to be overridable at run_squad invocation, especially for community-registered models. <!-- A clear and concise description of what you would expect to happen. --> Update run_squad.py to add a force_download argument (default of your choosing) and use the result in the two places force_download is hard coded. Better might be for the default value to be determined from the nature of the model location/type (e.g., no forced download for non-local models). ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: v2.2.1 - Platform: ubuntu - Python version: 3.7.7 - PyTorch version (GPU?): gpu 1.3.1 - Tensorflow version (GPU?): gpu 2.0.0 - Using GPU in script?: Y - Using distributed or parallel set-up in script?: N
05-21-2020 20:50:13
05-21-2020 20:50:13
Hi, thanks for the well-formulated question! Are you using the latest examples? When I look at the current branch, there is no forced download anymore - it has been commented out: https://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/examples/question-answering/run_squad.py#L790 https://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/examples/question-answering/run_squad.py#L815<|||||>Yes - I saw that. I have been using v2.2.1, since upgrading has broken at least one of the tasks I'm performing. Until I can debug, I guess I can limp along by sed-replacing the `True` in the two places where force_download is hard-coded.<|||||>Alright. In that case, I'm closing this since it's already "fixed" in the recent versions.
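Outside of `run_squad.py`, the caching behaviour can also be made explicit when loading the community model; a small sketch (`force_download=False` is already the default, and `cache_dir` is optional):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "mfeb/albert-xxlarge-v2-squad2"
# Repeated invocations reuse the local cache instead of re-fetching the weights.
tokenizer = AutoTokenizer.from_pretrained(name, cache_dir="./hf_cache", force_download=False)
model = AutoModelForQuestionAnswering.from_pretrained(name, cache_dir="./hf_cache", force_download=False)
```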
transformers
4,506
closed
[Summarization Pipeline]: Fix default tokenizer
`pipeline.tokenizer` cannot be a dict!
05-21-2020 20:33:04
05-21-2020 20:33:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=h1) Report > Merging [#4506](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4506/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4506 +/- ## ========================================== - Coverage 77.83% 77.82% -0.01% ========================================== Files 123 123 Lines 20514 20514 ========================================== - Hits 15968 15966 -2 - Misses 4546 4548 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4506/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.11% <ΓΈ> (ΓΈ)` | | | [src/transformers/hf\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4506/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4506/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4506/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.32%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=footer). Last update [a086527...70d3058](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,505
closed
add 2 colab notebooks
05-21-2020 18:31:05
05-21-2020 18:31:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=h1) Report > Merging [#4505](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4505/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4505 +/- ## ======================================= Coverage 77.83% 77.84% ======================================= Files 123 123 Lines 20514 20514 ======================================= + Hits 15968 15969 +1 + Misses 4546 4545 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=footer). Last update [a086527...2293fe1](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Those are amazing notebooks! Would it maybe be possible to connect the Notebook "A Step by Step Guide to Tracking Hugging Face Model Performance" to a github account and link it from there? As it's done for other notebook ?<|||||>Merging for now - link can be updated at a later stage.
transformers
4,504
closed
SummarizationPipeline crashes
```python summarize = pipeline("summarization") summarize("Sam Shleifer writes the best docstring examples in the whole world.") ``` ➑️ ``` /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in _parse_and_tokenize(self, pad_to_max_length, *args, **kwargs) 461 # Parse arguments 462 inputs = self._args_parser(*args, **kwargs) --> 463 inputs = self.tokenizer.batch_encode_plus( 464 inputs, add_special_tokens=True, return_tensors=self.framework, pad_to_max_length=pad_to_max_length, 465 ) AttributeError: 'dict' object has no attribute 'batch_encode_plus' ```
05-21-2020 18:26:14
05-21-2020 18:26:14
Is this issue fixed in version 2.10.0?<|||||>@julien-c I still get the same error when doing ``` summarizer = pipeline('summarization') ``` and using it to summarize. However the following explicitely works for me: ``` summarizer = pipeline('summarization', model='bart-large-cnn', tokenizer='bart-large-cnn') ```<|||||>Yeah that sounds like this issue. It will be fixed in the next release or you can build from source with ```bash git clone [this repo] pip install -e . ```<|||||>> Yeah that sounds like this issue. It will be fixed in the next release or you can build from source with > > ```shell > git clone [this repo] > pip install -e . > ``` I have installed the package from GitHub repo but still have the same issue right now.<|||||>@khalilRhouma: It works for me at commit d976ef262e0b2c52363d201b2e14e5ecc42abbb3 , so you may need to `git pull` or some such. If that doesn't work I would love to see the output of ```bash transformers-cli env ``` <|||||>@sshleifer I get this error when I clone with that commit ID. KeyError: "Unknown task summarization, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask']" @dipanjanS Would be great to know what configuration you used<|||||>current master should also work.<|||||>@sshleifer The kernel still crashes Attaching the code. ``` from transformers import pipeline import torch !git clone https://github.com/huggingface/transformers.git %cd transformers `!pip` install -e ".[dev]" #summarizer = pipeline("summarization") summarizer = pipeline('summarization', model='facebook/bart-large-cnn', tokenizer='facebook/bart-large-cnn') ##Kernel dies after running this line ``` transformers version - 2.11.0 torch - 1.5.0<|||||>Can't replicate :(. Can I see your `transformers-cli env` output?<|||||>How do I get that output? I'm running these on Jupyter without any virtual env<|||||>Got it. - `transformers` version: 2.11.0 - Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.9 - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: <No> - Using distributed or parallel set-up in script?: <No><|||||>@sshleifer It works finally. There was a problem with GPU allocation. Thanks for your response.
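A self-contained version of the workaround mentioned above, spelling out the model and tokenizer so the broken default-tokenizer entry is never consulted; the generation arguments are illustrative.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn", tokenizer="facebook/bart-large-cnn")
text = "Sam Shleifer writes the best docstring examples in the whole world. " * 10
print(summarizer(text, max_length=40, min_length=5)[0]["summary_text"])
```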
transformers
4,503
closed
Fix convert_token_type_ids_from_sequences for fast tokenizers
Before this fix, the generic version of `convert_token_type_ids_from_sequences` from `tokenizer_utils` was invoked when called on a `PreTrainedTokenizerFast`, so the `type_ids` for the special tokens were not included. There is currently no way to get this information from the Rust tokenizers, so we simply reuse the implementation from the original Python tokenizers. Tests added as well. Thanks @dirkgr for reporting this.
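A quick way to eyeball the behaviour this PR targets (a sketch, not the tests added here): after the fix, the fast tokenizer's `token_type_ids` should match the slow one's, with the special tokens counted in segment 0 and the second sequence plus its final separator in segment 1.

```python
from transformers import BertTokenizer, BertTokenizerFast

slow = BertTokenizer.from_pretrained("bert-base-uncased")
fast = BertTokenizerFast.from_pretrained("bert-base-uncased")

pair = ("hello world", "how are you")
print(slow.encode_plus(*pair)["token_type_ids"])  # e.g. [0, 0, 0, 0, 1, 1, 1, 1]
print(fast.encode_plus(*pair)["token_type_ids"])  # should now be identical
```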
05-21-2020 17:10:40
05-21-2020 17:10:40
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=h1) Report > Merging [#4503](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4503/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4503 +/- ## ========================================== + Coverage 77.83% 77.86% +0.02% ========================================== Files 123 123 Lines 20514 20526 +12 ========================================== + Hits 15968 15982 +14 + Misses 4546 4544 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4503/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.00% <100.00%> (+0.12%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4503/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <100.00%> (+0.49%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4503/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4503/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=footer). Last update [a086527...795f44a](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,502
closed
How to finetune ELECTRA on glue?
After pretraining my own ELECTRA model, I wanted to test it out on GLUE using run_glue.py. However, I got this: ``` ValueError: Unrecognized configuration class <class 'transformers.configuration_electra.ElectraConfig'> for this kind of AutoModel: AutoModelForSequenceClassification. Model type should be one of DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, RobertaConfig, BertConfig, XLNetConfig, FlaubertConfig, XLMConfig. ``` After taking a look at the source code, it seems like ElectraConfig isn't available for sequence classification; is there a reason for that? Did anyone fine-tune ELECTRA on GLUE?
05-21-2020 15:46:09
05-21-2020 15:46:09
I have a pull request here https://github.com/huggingface/transformers/pull/4257<|||||>I just cloned your repo an tried to test with my model and it keeps saying the same: ![image](https://user-images.githubusercontent.com/21007166/82654565-39745300-9c21-11ea-8da8-ad3bebb8a391.png) Could you tell me how it's used?<|||||>@liuzzi's PR was merged this morning. The `ElectraForSequenceClassification` model is now available, so you can use it directly in `run_glue.py`. Please make sure to pull the latest changes from the repo, or to wait for `v2.10` which should be released in a few hours.<|||||>awesome, it works perfectly, thank you very much!<|||||>@elyesmanai Could you please share the code for pretraining Electra from scratch?<|||||>I'm using the simpletransformers library for the pretraining since transformers doesn't support it yet. here's a [link](https://towardsdatascience.com/understanding-electra-and-training-an-electra-language-model-3d33e3a9660d) to how you can do it, it's super easy. It's built on top of transformers so you can load the model into transformers and use the rest of the lib<|||||>The pre-training from scratch for `transformers` is available [here](https://github.com/huggingface/transformers/pull/4656). It is being tested right now.<|||||>This problem happened again, when I use ELECTRA on question-answering pipeline. My Transformers version is 2.11.0. > from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering > > tokenizer = AutoTokenizer.from_pretrained("ahotrod/electra_large_discriminator_squad2_512") > > model = AutoModelForQuestionAnswering.from_pretrained("ahotrod/electra_large_discriminator_squad2_512") > > albert_qa = pipeline('question-answering', model=model, tokenizer=tokenizer) ![image](https://user-images.githubusercontent.com/53075457/87435341-2fb60d00-c61e-11ea-91be-538747c70559.png)
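With that PR merged, a minimal sketch of using the new head directly (the checkpoint name is an example; a locally pretrained ELECTRA works the same way, and it assumes a transformers version recent enough to have the tokenizer `__call__` API, otherwise use `encode_plus`):

```python
from transformers import ElectraForSequenceClassification, ElectraTokenizer

name = "google/electra-small-discriminator"  # or a local ELECTRA checkpoint
tokenizer = ElectraTokenizer.from_pretrained(name)
model = ElectraForSequenceClassification.from_pretrained(name, num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
logits = model(**inputs)[0]
print(logits.shape)  # torch.Size([1, 2])
```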
transformers
4,501
closed
Pipelines do not control input sequences longer than those accepted by the model
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): DistilBERT Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce 1. Create a "sentiment-analysis" pipeline with a DistilBERT tokenizer and model 2. Prepare a string that will produce more than 512 tokens upon tokenization 3. Run the pipeline over such input string ```python from transformers import pipeline pipe = pipeline("sentiment-analysis", tokenizer='distilbert-base-uncased', model='distilbert-base-uncased') very_long_text = "This is a very long text" * 100 pipe(very_long_text) ``` ## Expected behavior The pipeline should control in some way that the input string will not overflow the maximum number of tokens the model can accept, for instance by limiting the number of tokens generated in the tokenization step. The user can't control this beforehand, as the tokenizer is run by the pipeline itself and it can be hard to predict into how many tokens a given text will be broken down to. One possible way of addressing this might be to include optional parameters in the pipeline constructor that are forwarded to the tokenizer. The current error trace is: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-ef48faf7ffbb> in <module> 3 pipe = pipeline("sentiment-analysis", tokenizer='distilbert-base-uncased', model='distilbert-base-uncased') 4 very_long_text = "This is a very long text" * 100 ----> 5 pipe(very_long_text) ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs) 714 715 def __call__(self, *args, **kwargs): --> 716 outputs = super().__call__(*args, **kwargs) 717 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True) 718 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores] ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs) 469 def __call__(self, *args, **kwargs): 470 inputs = self._parse_and_tokenize(*args, **kwargs) --> 471 return self._forward(inputs) 472 473 def _forward(self, inputs, return_tensors=False): ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/pipelines.py in _forward(self, inputs, return_tensors) 488 with torch.no_grad(): 489 inputs = self.ensure_tensor_on_device(**inputs) --> 490 predictions = self.model(**inputs)[0].cpu() 491 492 if return_tensors: ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, labels) 609 """ 610 distilbert_output = self.distilbert( --> 611 input_ids=input_ids, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds 612 ) 613 hidden_state = distilbert_output[0] # (bs, seq_len, dim) 
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds) 464 465 if inputs_embeds is None: --> 466 inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim) 467 tfmr_output = self.transformer(x=inputs_embeds, attn_mask=attention_mask, head_mask=head_mask) 468 hidden_state = tfmr_output[0] ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/modeling_distilbert.py in forward(self, input_ids) 89 90 word_embeddings = self.word_embeddings(input_ids) # (bs, max_seq_length, dim) ---> 91 position_embeddings = self.position_embeddings(position_ids) # (bs, max_seq_length, dim) 92 93 embeddings = word_embeddings + position_embeddings # (bs, max_seq_length, dim) ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input) 112 return F.embedding( 113 input, self.weight, self.padding_idx, self.max_norm, --> 114 self.norm_type, self.scale_grad_by_freq, self.sparse) 115 116 def extra_repr(self): ~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1482 # remove once script supports set_grad_enabled 1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1485 1486 RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. 
at /tmp/pip-req-build-808afw3c/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418 ``` ## Environment info ``` # Name Version Build Channel _libgcc_mutex 0.1 main _pytorch_select 0.2 gpu_0 _tflow_select 2.1.0 gpu absl-py 0.9.0 py36_0 asn1crypto 1.3.0 py36_0 astor 0.8.0 py36_0 attrs 19.3.0 py_0 backcall 0.1.0 py36_0 blas 1.0 mkl bleach 3.1.4 py_0 boto3 1.12.47 pypi_0 pypi botocore 1.15.47 pypi_0 pypi c-ares 1.15.0 h7b6447c_1001 ca-certificates 2020.1.1 0 certifi 2020.4.5.1 py36_0 cffi 1.14.0 py36h2e261b9_0 chardet 3.0.4 py36_1003 click 7.1.2 pypi_0 pypi cloudpickle 1.3.0 py_0 cryptography 2.8 py36h1ba5d50_0 cudatoolkit 10.1.243 h6bb024c_0 cudnn 7.6.5 cuda10.1_0 cupti 10.1.168 0 cycler 0.10.0 py36_0 cytoolz 0.10.1 py36h7b6447c_0 dask-core 2.15.0 py_0 dataclasses 0.7 pypi_0 pypi dbus 1.13.12 h746ee38_0 decorator 4.4.2 py_0 defusedxml 0.6.0 py_0 docutils 0.15.2 pypi_0 pypi eli5 0.10.1 pypi_0 pypi entrypoints 0.3 py36_0 expat 2.2.6 he6710b0_0 filelock 3.0.12 pypi_0 pypi fontconfig 2.13.0 h9420a91_0 freetype 2.9.1 h8a8886c_1 gast 0.3.3 py_0 glib 2.63.1 h5a9c865_0 gmp 6.1.2 h6c8ec71_1 google-pasta 0.2.0 py_0 grpcio 1.27.2 py36hf8bcb03_0 gst-plugins-base 1.14.0 hbbd80ab_1 gstreamer 1.14.0 hb453b48_1 h5py 2.10.0 py36h7918eee_0 hdf5 1.10.4 hb1b8bf9_0 icu 58.2 h9c2bf20_1 idna 2.8 py36_0 imageio 2.8.0 py_0 importlib_metadata 1.5.0 py36_0 intel-openmp 2020.0 166 ipykernel 5.1.4 py36h39e3cac_0 ipython 7.13.0 py36h5ca1d4c_0 ipython_genutils 0.2.0 py36_0 ipywidgets 7.5.1 py_0 jedi 0.16.0 py36_1 jinja2 2.11.1 py_0 jmespath 0.9.5 pypi_0 pypi joblib 0.14.1 py_0 jpeg 9b h024ee3a_2 json5 0.9.4 pypi_0 pypi jsonschema 3.2.0 py36_0 jupyter 1.0.0 py36_7 jupyter_client 6.1.2 py_0 jupyter_console 6.1.0 py_0 jupyter_core 4.6.3 py36_0 jupyterlab 2.1.2 pypi_0 pypi jupyterlab-server 1.1.4 pypi_0 pypi keras-applications 1.0.8 py_0 keras-base 2.3.1 py36_0 keras-gpu 2.3.1 0 keras-preprocessing 1.1.0 py_1 kiwisolver 1.1.0 py36he6710b0_0 ld_impl_linux-64 2.33.1 h53a641e_7 libedit 3.1.20181209 hc058e9b_0 libffi 3.2.1 hd88cf55_4 libgcc-ng 9.1.0 hdf63c60_0 libgfortran-ng 7.3.0 hdf63c60_0 libpng 1.6.37 hbc83047_0 libprotobuf 3.11.4 hd408876_0 libsodium 1.0.16 h1bed415_0 libstdcxx-ng 9.1.0 hdf63c60_0 libtiff 4.1.0 h2733197_0 libuuid 1.0.3 h1bed415_2 libxcb 1.13 h1bed415_1 libxml2 2.9.9 hea5a465_1 markdown 3.1.1 py36_0 markupsafe 1.1.1 py36h7b6447c_0 matplotlib 2.2.2 py36hb69df0a_2 mistune 0.8.4 py36h7b6447c_0 mkl 2020.0 166 mkl-service 2.3.0 py36he904b0f_0 mkl_fft 1.0.15 py36ha843d7b_0 mkl_random 1.1.0 py36hd6b4f25_0 nb_conda 2.2.1 py36_0 nb_conda_kernels 2.2.3 py36_0 nbconvert 5.6.1 py36_0 nbformat 5.0.4 py_0 ncurses 6.2 he6710b0_0 networkx 2.4 py_0 ninja 1.9.0 py36hfd86e86_0 notebook 6.0.3 py36_0 numpy 1.18.1 py36h4f9e942_0 numpy-base 1.18.1 py36hde5b4d6_1 olefile 0.46 py36_0 openssl 1.1.1g h7b6447c_0 packaging 20.3 py_0 pandas 0.23.0 py36h637b7d7_0 pandoc 2.2.3.2 0 pandocfilters 1.4.2 py36_1 parso 0.6.2 py_0 pcre 8.43 he6710b0_0 pexpect 4.8.0 py36_0 pickleshare 0.7.5 py36_0 pillow 7.0.0 py36hb39fc2d_0 pip 19.3.1 py36_0 prometheus_client 0.7.1 py_0 prompt-toolkit 3.0.4 py_0 prompt_toolkit 3.0.4 0 protobuf 3.11.4 py36he6710b0_0 ptyprocess 0.6.0 py36_0 pycparser 2.20 py_0 pygments 2.6.1 py_0 pyopenssl 19.1.0 py36_0 pyparsing 2.4.6 py_0 pyqt 5.9.2 py36h05f1152_2 pyrsistent 0.16.0 py36h7b6447c_0 pysocks 1.7.1 py36_0 python 3.6.10 hcf32534_1 python-dateutil 2.8.1 py_0 python-graphviz 0.14 pypi_0 pypi pytorch 1.4.0 cuda101py36h02f0884_0 pytz 2019.3 py_0 pywavelets 1.1.1 py36h7b6447c_0 pyyaml 5.3.1 py36h7b6447c_0 pyzmq 18.1.1 
py36he6710b0_0 qt 5.9.7 h5867ecd_1 qtconsole 4.7.3 py_0 qtpy 1.9.0 py_0 readline 8.0 h7b6447c_0 regex 2020.4.4 pypi_0 pypi requests 2.22.0 py36_1 s3transfer 0.3.3 pypi_0 pypi sacremoses 0.0.41 pypi_0 pypi scikit-image 0.14.2 py36he6710b0_0 scikit-learn 0.22.1 py36hd81dba3_0 scikit-optimize 0.5.2 pypi_0 pypi scipy 1.4.1 py36h0b6359f_0 send2trash 1.5.0 py36_0 sentencepiece 0.1.86 pypi_0 pypi setuptools 46.1.3 py36_0 sip 4.19.8 py36hf484d3e_0 six 1.14.0 py36_0 sqlite 3.31.1 h62c20be_1 tabulate 0.8.7 pypi_0 pypi tensorboard 1.14.0 py36hf484d3e_0 tensorflow 1.14.0 gpu_py36h3fb9ad6_0 tensorflow-base 1.14.0 gpu_py36he45bfe2_0 tensorflow-estimator 1.14.0 py_0 tensorflow-gpu 1.14.0 h0d30ee6_0 termcolor 1.1.0 py36_1 terminado 0.8.3 py36_0 testpath 0.4.4 py_0 tk 8.6.8 hbc83047_0 tokenizers 0.7.0 pypi_0 pypi toolz 0.10.0 py_0 torchvision 0.5.0 py36_cu101 pytorch tornado 6.0.4 py36h7b6447c_1 tqdm 4.45.0 pypi_0 pypi traitlets 4.3.3 py36_0 transformers 2.9.1 pypi_0 pypi urllib3 1.25.8 py36_0 wcwidth 0.1.9 py_0 webencodings 0.5.1 py36_1 werkzeug 1.0.1 py_0 wheel 0.34.2 py36_0 widgetsnbextension 3.5.1 py36_0 wrapt 1.12.1 py36h7b6447c_1 xz 5.2.5 h7b6447c_0 yaml 0.1.7 had09818_2 zeromq 4.3.1 he6710b0_3 zipp 2.2.0 py_0 zlib 1.2.11 h7b6447c_3 zstd 1.3.7 h0b5b093_0 ``` <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - Platform: Linux matrix 4.4.0-174-generic #204-Ubuntu SMP Wed Jan 29 06:41:01 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux - Python version: Python 3.6.10 :: Anaconda, Inc. - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
05-21-2020 14:53:40
05-21-2020 14:53:40
Thanks for the well-structured question! It helps a lot in helping you. `pipeline` actually already accepts what you request: you can pass in a tuple for the tokenizer so that the first item is the tokenizer name and the second part is its kwargs. https://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/src/transformers/pipelines.py#L1784-L1790 You should be able to do something like this (not tested): ```python pipe = pipeline("sentiment-analysis", tokenizer=('distilbert-base-uncased', {'model_max_length': 128}), model='distilbert-base-uncased') ``` Though it is still odd that you got an error. By default the max model length should be used... cc @LysandreJik @thomwolf <|||||>I think the problem is the following. Here: https://github.com/huggingface/transformers/blob/e19b978151419fe0756ba852b145fccfc96dbeb4/src/transformers/pipelines.py#L463 The input is encoded and has a length of 701 which is larger then `self.tokenizer.model_max_length` so that the forward pass of the model crashes. A simple fix would be to add a statement like: ```python if inputs['input_ids'].shape[-1] > self.tokenizer.model_max_length: logger.warn("Input is cut....") inputs['input_ids'] = input['input_ids'][:, :self.tokenizer.model_max_length] ```, but I am not sure whether this is the best solution. I think the best solution would actually be to return a clean error message here and suggest to the user to use the option `max_length=512` for the tokenizer. The problem currently is though that when calling: ```python pipe(very_long_text) ``` no arguments for the `batch_encode_plus` function can be inserted because of two reasons: 1. Current the `TextClassificationPipeline` cannot accept a mixture of `kwargs` and `args`, see https://github.com/huggingface/transformers/blob/e19b978151419fe0756ba852b145fccfc96dbeb4/src/transformers/pipelines.py#L141 2. The `batch_encode_plus` function actually does not accept any **kwargs arguments currently, see https://github.com/huggingface/transformers/blob/e19b978151419fe0756ba852b145fccfc96dbeb4/src/transformers/pipelines.py#L464 IMO, it would be a good idea to do a larger refactoring here where we allow the pipelines to be more flexible so that `batch_encode_plus` **kwargs can easily be inserted. @LysandreJik <|||||>I too get the `RuntimeError: index out of range` error when using either the summarization or question-answering pipelines with text greater than their models' max_length. Presumably any pipeline, but I haven't tested. I've tried this without using any special models; that is, using the default model/tokenizer provided by the pipelines: `pipeline("summarization")(text)`. This is after an upgrade from 2.8.0 (working) to 2.11.0. Windows 10. LMK if want further code/environment details. Figured I might just be pitching something you already know, but in case it adds any surprise-factor I'll be happy to add more details / run some more tests.<|||||>I've also tried the tokenizer tuple approach, but same out-of-range error: ```python pipeline("summarization", tokenizer=('facebook/bart-large-cnn', {'model_max_length': 512}), model='facebook/bart-large-cnn')(text) # also tried: # pipeline("summarization", tokenizer=('facebook/bart-large-cnn', {'max_length': 512}), model='facebook/bart-large-cnn')(text) ``` <|||||>Currently, it is not possible to use pipelines with inputs longer than the ones allowed by the model. 
We should soon provide automatic cutting to max length in case the input is longer than allowed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@patrickvonplaten Hey Patrick, is there any progress on what you suggest i.e. automatically cutting to max length when the input is longer than that allowed by the model, when using pipeline.<|||||>You should now be able to pass `truncation=True` to the pipeline call for it to truncate sequences that are too long.<|||||>> You should now be able to pass `truncation=True` to the pipeline call for it to truncate sequences that are too long. How does this work exactly? I tried passing truncation=True to the pipeline call but it did not work.<|||||>It is not working for me either. Code to reproduce error is below. ``` text = ["The Wallabies are going to win the RWC in 2023."] ner = pipeline( task="ner", model=AutoModelForTokenClassification.from_pretrained(ner_model), tokenizer=AutoTokenizer.from_pretrained(ner_model), aggregation_strategy="average" ) ner(text, trucation=True) ``` Error message is: `_sanitize_parameters() got an unexpected keyword argument 'truncation'` <|||||>Hi All, Any update on this, I am still facing this issue. I tried passing the parameters(max_length=512, truncation=True) into the pipeline. But still getting the error(IndexError: index out of range in self). I have tried text classification for a sentence of length 900 and got this error. Any help will be highly appreciated. <|||||>Hi, Any news about this issue? I have the same problem as the person before.<|||||>@Pushkinue do you have your example handy ? The thing will depend on which pipeline you're using and the actual script.
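For recent transformers versions, a hedged sketch of the truncation flag on the text-classification pipeline; as the later comments show, not every pipeline forwards the kwarg, so treat this as version- and task-dependent.

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
very_long_text = "This is a very long text. " * 200

# Inputs beyond the model's 512-token limit are cut down instead of crashing the forward pass.
print(classifier(very_long_text, truncation=True))
```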
transformers
4,500
closed
Longformer for question answering
This PR adds `LongformerForQuestionAnswering` following `RobertaForQuestionAnswering`. The code is almost identical to `RobertaForQuestionAnswering`; I just had to remove the `head_mask` parameter from the forward method. I also trained a model to verify this. You can check inference [here](https://colab.research.google.com/drive/1WGgYuBEzGvkvhGOrQxB94Jr1nwCPyblu?usp=sharing) @patrickvonplaten
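For orientation, a hedged inference sketch in the spirit of the linked notebook: the checkpoint is the fine-tuned model shared later in this thread, the question/context strings are made up, and it assumes Longformer is wired into the Auto classes and that the tokenizer `__call__` API is available.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "valhalla/longformer-base-4096-finetuned-squadv1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "Who reported the bug?"  # illustrative strings
context = "The bug was reported by a user shortly after the 2.11 release."

# Question first: the model derives global attention from the <s> question</s></s> context</s> layout.
encoding = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    start_logits, end_logits = model(**encoding)[:2]

start, end = start_logits.argmax(), end_logits.argmax()
print(tokenizer.decode(encoding["input_ids"][0][start : end + 1]))
```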
05-21-2020 14:35:07
05-21-2020 14:35:07
Could you add it to the automodel class too ?<|||||>@ibeltagy - could you also take a look here<|||||>Thank you @patil-suraj, this looks good. One thing I would suggest is to automatically configure the `attention_mask` with global attention on all the question tokens so that the user doesn't need to worry about it. Global attention is not important for a dataset with short documents like squad but crucial for tasks where the document is long. You can check [here](https://github.com/allenai/longformer/blob/master/scripts/triviaqa.py#L280) how we set the global attention mask for TriviaQA. It would be good to have something similar in the forward function of `LongformerForQuestionAnswering`; ``` if attention_mask is None: attention_mask = some_function(input_ids) # All ones. Twos for question tokens. Zero for padding tokens else: pass # do nothing ``` You will need to assume that you know where the question is in the input, usually at the beginning of the sequence, and usually separated from the rest with a certain metatag. Maybe we need extra input from the user to specify the separator tag. Notes about the code [here](https://github.com/allenai/longformer/blob/master/scripts/triviaqa.py#L280): - you don't need the padding step, it is already implemented in `LongformerModel` - this code assumes that the question length is the same for all examples, but we can't make that assumption here<|||||>@ibeltagy I did try creating `attention_mask ` automatically in forward method, but as you said this involves knowing where the question is (before or after the context) and the ids of `bos` and `sep` tokens. So model will need access to `tokenizer` to get ids or they'll need to be hardcoded. If we do this then I'm not sure how it will fit with the rest of the pipeline. So can we provide this as a utility ? Or can we do this in the tokenizer where the user can provide indices in the original string for which global attention should be applied ? @patrickvonplaten Could you provide some feedback here ?<|||||>good points, @patil-suraj. We already have access to `self.config.pad_token_id`, so maybe we can do the same to get access to `bos_token_id` and `sep_token_id`?<|||||>Just checked, `self.config.bos_token_id` and `self.config.eos_token_id` are available but not `self.config.sep_token_id`. How about adding a new argument to the forward function that specifies the separator token? This is more general because there are cases where the user wants to use a different separator token from `sep_token_id`. <|||||>@ibeltagy Yes,` self.config.bos_token_id` and `self.config.eos_token_id` are available. If I'm not wrong the `eos` and `sep` tokens are same for `LongformerTokenizer`. So we can do it two ways, either make it available in `self.config` or pass explicitly to the forward function. If we make it available in `self.config` then the existing `QuestionAnsweringPipeline` won't need to be modified, and the user can override the `self.config.sep_token_id` if its different from `sep_token_id`<|||||>πŸ‘ sounds good to me. 
<|||||>Thanks @ibeltagy , I'll try this and let you know.<|||||>In the long run we are planning on having a combined tokenizer and model config, so IMO it would be best to add a hardcoded `config.sep_token_id` to the Longformer config.<|||||>Okay, so adding `sep_token_id` in `config` and assuming question is at the beginning, can we do it this way ``` attention_mask = torch.ones_like(input_ids) for i in range(input_ids.shape[0]): sep_index = (input_ids[i, :] == self.config.sep_token_id).nonzero().min().item() attention_mask[i, :sep_index] = 2 # set 0 for padding values if input is padded if self.config.pad_token_id in input_ids[i, :]: pad_index = (input_ids[i, :] == self.config.pad_token_id).nonzero().min().item() attention_mask[i, pad_index:] = 0 ``` does this sound good to you ?<|||||>> Okay, so adding `sep_token_id` in `config` and assuming question is at the beginning, can we do it this way > > ``` > attention_mask = torch.ones_like(input_ids) > > for i in range(input_ids.shape[0]): > sep_index = (input_ids[i, :] == self.config.sep_token_id).nonzero().min().item() > attention_mask[i, :sep_index] = 2 > > # set 0 for padding values if input is padded > if self.config.pad_token_id in input_ids[i, :]: > pad_index = (input_ids[i, :] == self.config.pad_token_id).nonzero().min().item() > attention_mask[i, pad_index:] = 0 > ``` > > does this sound good to you ? Thanks a lot for your effort here @patil-suraj ! 1) I would prefer to not have a `for loop` here. I think it'd be nicer to just use tensor operations, exactly like @ibeltagy implemented it here: https://github.com/allenai/longformer/blob/e007ba9b52c550048e5981c8385980cc84359bc4/scripts/triviaqa.py#L411 I think you only have to replace `self.tokenizer.eos_token_id` with `self.config.sep_token_id`. 2) No need to pad the `input_ids` with ```python # set 0 for padding values if input is padded if self.config.pad_token_id in input_ids[i, :]: pad_index = (input_ids[i, :] == self.config.pad_token_id).nonzero().min().item() attention_mask[i, pad_index:] = 0 ``` I think you can remove that part of the code because our tokenizers automatically correctly put the 0 in `attention_mask`. Thinking a bit more about I'm actually not anymore 100% sure whether this function should be in the `forward()` function of the `LongformerForQuestionAnswering`. Maybe it would be better to have it in the tokenizer function...not sure...will have to think about it. Let's implement it in the forward function as suggested for now :-) <|||||>Thanks, @patil-suraj ! If you don't mind, I want to suggest one more thing to add. I think it will be useful if this function alerts the user when the number of global attention tokens is large or the question is on the wrong side. It will be good to add something like: ``` if num of global attention positions > max(self.config.attention_window)`: logger.warning('something something') ``` @patrickvonplaten, I see why you are thinking it might be better to have it in the tokenizer, but I think that it can quickly get complicated because the global attention setting needs to change based on the task.<|||||>@ibeltagy I think we will need to alert the user about that in all `Longformer` tasks, so can we add that warning in the base `LongformerModel` instead of `LongformerForQuestionAnswering` ? @patrickvonplaten I did tried to vectorize it, but that code assumes that all the questions in the batch have same length. So I'm not sure if we can make that assumption here . 
Also looking at this function ``` def _get_question_end_index(self, input_ids): eos_token_indices = (input_ids == self.tokenizer.eos_token_id).nonzero() assert eos_token_indices.ndim == 2 assert eos_token_indices.size(0) == 2 * input_ids.size(0) assert eos_token_indices.size(1) == 2 return eos_token_indices.view(input_ids.size(0), 2, 2)[:, 0, 1] ``` it seems that it makes the assumption that `eos_token_id/sep_token_id` occurs twice in the input, but if we use the default `sep_token_id` then it occurs three times in the input, if we encode que and context as input pair. So looking at all this, would it be better if we just provide this as a utility and keep the `forward` method same?<|||||>You are right about the variable number of global attention per batch, but it can still be vectorized, 1) In the function you mentioned, the following line ``` return eos_token_indices.view(input_ids.size(0), 2, 2)[:, 0, 1] ``` needs to change to the following because, as you said, you have 3 sep/eos tokens ``` return eos_token_indices.view(input_ids.size(0), 3, 2)[:, 0, 1] ``` 2) Now given `question_end_index` you can set the `attention_mask` as follows: ``` question_end_index = question_end_index.unsqueeze(dim=1) # size: batch_size x 1 # bool attention mask with True in locations of global attention attention_mask = torch.arange(input_ids.size(1)).expand_as(input_ids) < question_end_index attention_mask = attention_mask.int() + 1 # from True, False to 2, 1 ``` <|||||>Thanks ! @ibeltagy <|||||>Awesome work so far @patil-suraj - I think after the changes as discussed above, we can merge this :-) <|||||>Oh and one small thing I forgot to add. Could you add a test for `LongformerQuestionAnswering`. I think you can more or less copy this test here: https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/tests/test_modeling_bert.py#L311<|||||>> Awesome work so far @patil-suraj - I think after the changes as discussed above, we can merge this :-) Happy to contribute πŸ€—<|||||>> Awesome work so far @patil-suraj - I think after the changes as discussed above, we can merge this :-) Also should we add support for `QuestionAnsweringPipeline` before merging or should that be done in another PR ?<|||||>> > Awesome work so far @patil-suraj - I think after the changes as discussed above, we can merge this :-) > > Also should we add support for `QuestionAnsweringPipeline` before merging or should that be done in another PR ? Let's do this in another PR :-) <|||||>Ok great, I did a little change in the config @patil-suraj, but it looks good to merge for me now! @patil-suraj can you fix the code quality? It's actually quite easy to do: 1) run `flake8 src/transformers/modeling_longformer.py` - it will show you exactly which lines need to be fixed. In your case, all errors are redundant white spaces. Just delete and re-add lines 791, 794 and 798 without a white space and delete the trailing (at the end of the line) white spaces in line 800.<|||||>@ibeltagy - ok for you to be merged? <|||||>> Ok great, I did a little change in the config @patil-suraj, but it looks good to merge for me now! @patil-suraj can you fix the code quality? It's actually quite easy to do: > > 1. run `flake8 src/transformers/modeling_longformer.py` - it will show you exactly which lines need to be fixed. In your case, all errors are redundant white spaces. > Just delete and re-add lines 791, 794 and 798 without a white space and delete the trailing (at the end of the line) white spaces in line 800. 
Sure.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=h1) Report > Merging [#4500](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.21%`. > The diff coverage is `94.44%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4500/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4500 +/- ## ========================================== + Coverage 77.87% 78.09% +0.21% ========================================== Files 123 123 Lines 20566 20617 +51 ========================================== + Hits 16016 16100 +84 + Misses 4550 4517 -33 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.40% <94.00%> (+14.45%)` | :arrow_up: | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ΓΈ)` | | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.57% <100.00%> (ΓΈ)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=footer). Last update [a34a989...a198607](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok great, all green - merging! Hope that's ok with you @ibeltagy <|||||>Looks great. Thanks, @patil-suraj.<|||||>https://colab.research.google.com/drive/1ZwnA8NCKOM4HBvaRRpjuVmAaM--x92hN?usp=sharing Tried to use the longformer with simpletransformers library and tried out the example but I am getting two different errors. 
the first error with simpletransformers is an assertion-error `AssertionError: There should be exactly three separator tokens in every sample for questions answering` The second error from example is and tensor error `TypeError: only integer tensors of a single element can be converted to an index`<|||||>> This PR adds `LongformerForQuestionAnswering` following `RobertaForQuestionAnswering`. > > The code is almost identical to `RobertaForQuestionAnswering`, just had to remove `head_mask` parameter from forward method of `RobertaForQuestionAnswering`. > > Also trained the model to verify this. You can check inference [here](https://colab.research.google.com/drive/1WGgYuBEzGvkvhGOrQxB94Jr1nwCPyblu?usp=sharing) > > @patrickvonplaten Btw @patil-suraj, feel free to upload the model you trained on the model hub. It's a `longformer-base-4096` fine-tuned on Squad no? It'd be great if you can upload the model: https://huggingface.co/transformers/model_sharing.html<|||||>> > This PR adds `LongformerForQuestionAnswering` following `RobertaForQuestionAnswering`. > > The code is almost identical to `RobertaForQuestionAnswering`, just had to remove `head_mask` parameter from forward method of `RobertaForQuestionAnswering`. > > Also trained the model to verify this. You can check inference [here](https://colab.research.google.com/drive/1WGgYuBEzGvkvhGOrQxB94Jr1nwCPyblu?usp=sharing) > > @patrickvonplaten > > Btw @patil-suraj, feel free to upload the model you trained on the model hub. It's a `longformer-base-4096` fine-tuned on Squad no? It'd be great if you can upload the model: https://huggingface.co/transformers/model_sharing.html Yes, I'm training the model as we are speaking :). The previous model was trained with question at the end so I'm training it again <|||||>No rush, whatsoever - it's already been super helpful your PR here :-) <|||||>> https://colab.research.google.com/drive/1ZwnA8NCKOM4HBvaRRpjuVmAaM--x92hN?usp=sharing > > Tried to use the longformer with simpletransformers library and tried out the example but I am getting two different errors. > > the first error with simpletransformers is an assertion-error > `AssertionError: There should be exactly three separator tokens in every sample for questions answering` > > The second error from example is and tensor error > `TypeError: only integer tensors of a single element can be converted to an index` The first assertion error is because the model expects the every input sequence encoded like this `<s> question</s></s> context</s>`. The model uses this assumption to set global attention on question tokens automatically. And this is how the longformer tokenizer encodes input pair by default. So make sure 1. You are encoding the input with question at the beginning and with 3 sep tokens 2. input_ids is always a batch of examples The second error because there's mistake in the example ``` start_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=attention_mask) ``` this line should be ``` start_scores, end_scores = model(torch.tensor([input_ids]),attention_mask=torch.tensor([attention_mask])) ``` Thanks for pointing it out.<|||||>> `<s> question</s></s> context</s>` It will be good to mention this explicitly in the docstring. > Yes, I'm training the model as we are speaking @patil-suraj, I am curious what performance number you are getting. Do you mind sharing your training/evaluation scripts, I want to run a few more evals. 
<|||||>@ibeltagy Sure, here's the [colab](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing). I trained it bit naively though. Didn't focus much on data processing. I've used the new `nlp` library for dataset and `fast tokenizers` alignment method to get index of answer spans. But the metrics are quite good. `{'exact_match': 85.14664143803216, 'f1': 91.54157494727959}` Also the model is available [here](https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1) @patrickvonplaten <|||||>Very helpful. Thanks, @patil-suraj.<|||||>Awesome, thanks a lot for adding the model :-) Also feel free to add a little description, the results and a link to the colab to the model card of your newly added model: https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1 This way the community usually uses the model much more<|||||>@patil-suraj A very insightful notebook! Do you plan to write a similar one for text classification anytime soon?
transformers
4,499
closed
[T5] Fix Cross Attention position bias
This PR fixes the Cross Attention position bias assignment in Class T5Stack.
05-21-2020 14:28:30
05-21-2020 14:28:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=h1) Report > Merging [#4499](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4499/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4499 +/- ## ========================================== - Coverage 77.83% 77.82% -0.02% ========================================== Files 123 123 Lines 20514 20514 ========================================== - Hits 15968 15964 -4 - Misses 4546 4550 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4499/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.53% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4499/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `95.16% <100.00%> (ΓΈ)` | | | [src/transformers/hf\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4499/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4499/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=footer). Last update [a086527...e9775b2](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @ZhuBaohe, Thansk for your PR! Can you explain a bit more in-detail what the fix is doing here? :-) <|||||>@patrickvonplaten I fixes a bug that the variable **encoder_decoder_position_bias** was incorrectly assigned by cross-attention weights, not by cross-attention position bias. See Line 745 of the file modeling_t5.py as follow: ``` # layer_outputs = hidden-states, -> 0 key-value-states, -> 1 (self-attention weights), -> 2 (self-attention position bias), -> 3 (cross-attention weights), -> 4 (cross-attention position bias) -> 5 ``` **encoder_decoder_position_bias** should be assigned by layer_outputs[5] instead of layer_outputs[4] .<|||||>Great, I agree with you. Previously the attention weights of the cross attention layer were taken instead of the bias. @LysandreJik @thomwolf I am quite surprised that we did not see an error earlier. I checked the slow tests and the summarization / translation results are equivalent as before. So good to merge for me!<|||||>Surprising indeed @patrickvonplaten , I did fix a similar bug when implementing T5. We should switch to NamedTuples one day πŸ˜„
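A schematic, runnable sketch of the indexing change discussed above; the tuple below is a stand-in with string placeholders rather than the real tensors produced inside `modeling_t5.py`.

```python
# Stand-in for the decoder block outputs described in the thread;
# in the real model these entries are tensors.
layer_outputs = (
    "hidden_states",                   # 0
    "key_value_states",                # 1
    "self_attention_weights",          # 2
    "self_attention_position_bias",    # 3
    "cross_attention_weights",         # 4
    "cross_attention_position_bias",   # 5
)

# Before the fix: the cross-attention *weights* were picked up by mistake.
buggy_value = layer_outputs[4]

# After the fix: the cross-attention *position bias* is used.
encoder_decoder_position_bias = layer_outputs[5]

print(buggy_value, "->", encoder_decoder_position_bias)
```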
transformers
4,498
closed
Pre-trained electra-large model doesn't converge when fine-tuned on SST-2
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): ELECTRA Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: SST-2 ## To reproduce Steps to reproduce the behavior: 1. Load the large pre-trained ELECTRA model using ElectraModel.from_pretrained('google/electra-large-discriminator', output_hidden_states=True) 2. fine-tune it on SST-2 using a simple binary classification head (linear, ReLU, linear, Sigmoid) on top of the [CLS] hidden state with BCEWithLogitsLoss and AdamW for 3-4 epochs. 3. Model is stuck on ~55% accuracy during all training and the loss increases Important: When I do the same with 'google/electra-base-discriminator', and 'google/electra-small-discriminator' I'm getting an accuracy of ~93% and 89% (respectively) on the first epoch. Hyperparameters: batch_size: 16 lr: 0.000005 adam_epsilon: 0.0000001 max_len: 32 ## Expected behavior I expected that the fine-tuned electra-large model will outperform the base and small electra model and have better results than ~50% accuracy ## Environment info - `transformers` version: 2.9.0 - Platform: Linux-5.3.0-1017-aws-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
05-21-2020 14:20:32
05-21-2020 14:20:32
This is a general question and does not sound like a bug. With these large models it is sometimes hard to find a good set of hyperparameters to get the model to converge well. The same is true, and reported, for ALBERT. I don't think this is a bug.<|||||>@shanybarhom A good start would be to use the hyper-parameters mentioned in the [ELECTRA](https://arxiv.org/abs/2003.10555) paper :) Just refer to Table 7; your batch size, Adam epsilon and number of epochs are very different from the ELECTRA parameters.<|||||>Thanks, @stefan-it. I've tried to use the same hyper-parameters as mentioned in the ELECTRA paper (lr=0.00005, batch_size=32, adam_epsilon=0.000001, epochs=3), but the electra-large model still doesn't converge (accuracy of ~50%). @BramVanroy I thought it was a bug since electra-base and electra-small converge quite quickly (90% accuracy after the first epoch) with the same code and data, while the large model is stuck at ~50% during training, so it felt like a bug, but of course, it may not be.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I am facing the same problem with ELECTRA-large. I would really appreciate any further direction on this.
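For reference, a minimal fine-tuning sketch along the lines described in this issue. The head layout mirrors the report, the hyperparameter values are illustrative placeholders rather than the settings from the ELECTRA paper's Table 7, and the tokenizer arguments follow the API of the `transformers` version in the report; note that `BCEWithLogitsLoss` applies the sigmoid itself, so the head outputs raw logits.

```python
import torch
from torch import nn
from transformers import ElectraModel, ElectraTokenizer

class ElectraBinaryClassifier(nn.Module):
    """ELECTRA encoder with a small binary head on the [CLS]-position hidden state."""

    def __init__(self, model_name="google/electra-large-discriminator"):
        super().__init__()
        self.encoder = ElectraModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # No final Sigmoid: BCEWithLogitsLoss expects raw logits.
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, input_ids, attention_mask):
        sequence_output = self.encoder(input_ids=input_ids, attention_mask=attention_mask)[0]
        cls_state = sequence_output[:, 0]  # hidden state at the [CLS] position
        return self.head(cls_state).squeeze(-1)

tokenizer = ElectraTokenizer.from_pretrained("google/electra-large-discriminator")
model = ElectraBinaryClassifier()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, eps=1e-6)  # placeholder values
loss_fn = nn.BCEWithLogitsLoss()

# Two toy SST-2-style sentences, just to show the shapes involved.
batch = tokenizer.batch_encode_plus(
    ["a gorgeous, witty film", "a tedious mess"],
    max_length=32,
    pad_to_max_length=True,  # later versions use padding="max_length"
    return_tensors="pt",
)
labels = torch.tensor([1.0, 0.0])

logits = model(batch["input_ids"], batch["attention_mask"])
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```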
transformers
4,497
closed
Tokenize something with a "." in between Decode these ids, you will find it mismatch
πŸ› Bug Information Model I am using (Bert, XLNet ...): XLNet Language I am using the model on (English, Chinese ...): Chinese The problem arises when using: Tokenizer the official example scripts: (give details below) N/A my own modified scripts: (give details below) N/A The tasks I am working on is: Any an official GLUE/SQUaD task: (give the name) N/A my own task or dataset: (give details below) N/A To reproduce Steps to reproduce the behavior: Load any BERT tokenizer Tokenize something with a "." in between Decode these ids, you will find it mismatch x = tokenizer.encode('AN.C', add_special_tokens=False) z = tokenizer.decode(x) It prints: AN. C Expected behavior AN.C Environment info transformers version: Platform: CentOS Python version: 3.6 PyTorch version (GPU?): GPU Tensorflow version (GPU?): GPU Using GPU in script?: NO Using distributed or parallel set-up in script?: NO
05-21-2020 09:06:53
05-21-2020 09:06:53
Your use case seems specific, so maybe you should try a custom Tokenizer via the `tokenizers` library. I believe the results you're getting are the intended behavior. For example, any generic sentence where someone forgets to put a space after the period would end up tokenized incorrectly otherwise: `I love lamp.No I really love lamp.` would leave you with a token `lamp.No`, which is incorrect, eh?<|||||>Thanks~ it helps a lot ~
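A small sketch of what the reply describes; inspecting the intermediate tokens makes it clear why the encode/decode round trip is not lossless (the checkpoint name is just an example):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

ids = tokenizer.encode("AN.C", add_special_tokens=False)
tokens = tokenizer.convert_ids_to_tokens(ids)

print(tokens)                 # the "." is split into its own token, e.g. ['an', '.', 'c']
print(tokenizer.decode(ids))  # detokenization re-inserts a space after ".", e.g. "an. c"
```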
transformers
4,496
closed
python run_glue.py with the AttributeError: 'NoneType' object has no attribute 'seek'
# πŸ› Bug Traceback (most recent call last): File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 191, in _check_seekable f.seek(f.tell()) AttributeError: 'NoneType' object has no attribute 'seek' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/transformers/modeling_utils.py", line 659, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 387, in load return _load(f, map_location, pickle_module, **pickle_load_args) File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 549, in _load _check_seekable(f) File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 194, in _check_seekable raise_err_msg(["seek", "tell"], e) File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 187, in raise_err_msg raise type(e)(msg) AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./examples/text-classification/run_glue.py", line 202, in <module> main() File "./examples/text-classification/run_glue.py", line 133, in main cache_dir=model_args.cache_dir, File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/transformers/modeling_auto.py", line 874, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/transformers/modeling_utils.py", line 662, in from_pretrained "Unable to load weights from pytorch checkpoint file. " OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
05-21-2020 08:56:30
05-21-2020 08:56:30
You really need to put more effort into how you ask questions. Just throwing in an error trace and leaving it up to us to figure out what you want or where things go wrong is not the way to go. Use [**code blocks**](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks) and, when using an example script, post your environment (as per the **template**) and the command that you used. In your case it seems that you wanted to load a TensorFlow model with PyTorch. That won't work. If you need to use TensorFlow models, use [run_tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py) instead.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Have you solved it?
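For completeness, a sketch of the two options suggested in the reply; the paths and label count are placeholders:

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# Option 1: load a TF 2.0 checkpoint into the PyTorch classes used by run_glue.py.
config = AutoConfig.from_pretrained("/path/to/tf_checkpoint_dir", num_labels=2)
model = AutoModelForSequenceClassification.from_pretrained(
    "/path/to/tf_checkpoint_dir",
    from_tf=True,  # convert the TensorFlow weights on the fly
    config=config,
)

# Option 2: stay in TensorFlow and run run_tf_glue.py, which uses the TF* model classes.
# from transformers import TFAutoModelForSequenceClassification
# tf_model = TFAutoModelForSequenceClassification.from_pretrained("/path/to/tf_checkpoint_dir")
```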
transformers
4,495
closed
❓ [BART] Why Decoder Layer Normalization is applied only at the last layer ?
# ❓ Questions & Help It seems this line : https://github.com/huggingface/transformers/blob/efbc1c5a9d96048ab11f8d746fe51107cb91646f/src/transformers/modeling_bart.py#L524 was modified when MBART was added. --- Before, Layer Normalization was applied after **all** layers of the decoder (_similar to the encoder, if the config was set appropriately_). But now, Layer Normalization is applied **only at the end**, even for other BART models (_not MBART_). --- **Is it expected ? What's the reason behind this logic ?** @sshleifer
05-21-2020 07:37:20
05-21-2020 07:37:20
That `layer_norm` should be None in `bart-large*`. See [this comment](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L266). No final `layer_norm` was applied before `mbart` was added, afaict.
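One way to verify this, sketched below, is to compare the config flag that gates the final decoder LayerNorm; the attribute and checkpoint names follow the version discussed here and may differ in later releases.

```python
from transformers import BartConfig

# `add_final_layer_norm` controls whether the decoder applies a final LayerNorm.
bart_config = BartConfig.from_pretrained("facebook/bart-large")
print(bart_config.add_final_layer_norm)   # expected: False, so `layer_norm` stays None

mbart_config = BartConfig.from_pretrained("facebook/mbart-large-en-ro")
print(mbart_config.add_final_layer_norm)  # expected: True for the mBART-style checkpoints
```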
transformers
4,494
closed
Incorporate HuggingFace 'nlp' library in examples
# πŸš€ Feature request

I propose we replace the custom data downloading/preprocessing logic found within the examples directory with the new [**HuggingFace `nlp` Library**](https://github.com/huggingface/nlp) where applicable.

## Motivation

The examples directory is filled with custom shell scripts that download and process common research datasets. These scripts work great, but are at times tricky to follow. I'm sure this can be discouraging for new users looking to try out `transformers` for the first time. I'm hoping `nlp` will make the examples generally more accessible for both new and experienced users.

And yeah... I guess it's probably not too bad for the brand either. πŸ˜‰

## Your contribution

I'll get a WIP PR pushed up this weekend. I'll focus on the `pytorch_lightning` examples for now.
05-21-2020 03:34:49
05-21-2020 03:34:49
Love this idea. @thomwolf @julien-c what do you guys think about adding `nlp` as a dependency in `examples/requirements.txt`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
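As a rough illustration of the proposal, a sketch of what the data-loading side of an example could look like with `nlp` (API of the early `nlp` releases, which was later renamed `datasets`):

```python
import nlp

# One call replaces the custom download-and-cache shell scripts.
train_dataset = nlp.load_dataset("glue", "sst2", split="train")

print(len(train_dataset))
print(train_dataset[0])  # e.g. {'sentence': ..., 'label': ..., 'idx': ...}

# Preprocessing then becomes a simple map over the dataset, for instance:
# encoded = train_dataset.map(lambda ex: tokenizer.encode_plus(ex["sentence"], max_length=128))
```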