| column | dtype | stats |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
13,934
closed
Update bug-report.md
cc @stas, @patrickvonplaten, @sgugger, @NielsRogge, @anton-l, @Narsil, @Rocketknight1, @patil-suraj
10-08-2021 12:22:06
10-08-2021 12:22:06
transformers
13,933
closed
Wrong decoder_hidden_states from generate() function of BartForConditionalGeneration
I encountered a very strange bug when using **BartForConditionalGeneration** (facebook/bart-large-cnn) to generate summaries. Besides the generated sequences, I also need the decoder_hidden_states from the last layer of the BART decoder. To check these hidden states, I feed the generation results back into the BART decoder and read the hidden states output by its last layer. Some of the hidden states returned by the **generate** function seem to mismatch the hidden states produced by the BART decoder. Environment: transformers 4.11.3, torch 1.7. Here is my code: I get 2 generated results and their corresponding hidden states, then feed the 2 generated results to the BART decoder. The hidden states from the first beam match, but those of the second beam seem to have some mistakes. ```from transformers import BartTokenizer, BartForConditionalGeneration import torch # do not need strictly equal def my_equal(tensor1, tensor2): return torch.mean(torch.abs(tensor1 - tensor2)).item() < 1e-5 max_length = 40 min_length = 10 pad_id = 1 beam_size = 2 # device = "cuda:0" device = 'cpu' tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn') model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn') hidden_size = model.config.hidden_size slines = [ "(CNN)James Best, best known for his portrayal of bumbling sheriff Rosco P. Coltrane on TV's \"The Dukes of Hazzard,\" died Monday after a brief illness. He was 88. Best died in hospice in Hickory, North Carolina, of complications from pneumonia, said Steve Latshaw, a longtime friend and Hollywood colleague. " "Although he'd been a busy actor for decades in theater and in Hollywood, Best didn't become famous until 1979, when \"The Dukes of Hazzard's\" cornpone charms began beaming into millions of American homes almost every Friday night. For seven seasons, Best's Rosco P. 
Coltrane chased the moonshine-running Duke boys back and forth across the back roads of fictitious Hazzard County, Georgia."] dct = tokenizer.batch_encode_plus(slines, max_length=512, return_tensors="pt", truncation=True, padding=True) text_id = dct["input_ids"].to(device) batch_size, seq_len = text_id.size(0), text_id.size(1) doc_inp_mask = ~(text_id == pad_id) output_dict = model.generate( input_ids=text_id.to(device), attention_mask=doc_inp_mask.to(device), num_return_sequences=beam_size, num_beam_groups=beam_size, num_beams=beam_size, diversity_penalty=2.0, max_length=max_length + 2, # +2 from original because we start at step=1 and stop before max_length min_length=min_length + 1, # +1 from original because we start at step=1 early_stopping=True, return_dict_in_generate=True, output_hidden_states=True ) candidate_id = output_dict["sequences"] # get the output ids decoder_hidden_states = output_dict["decoder_hidden_states"] # get the hidden_states from the last layer of decoder hidden_states_from_output = torch.cat( [decoder_hidden_states[i][-1] for i in range(len(decoder_hidden_states))], dim=1) # to check these hidden_states we refeed the generated ids to bart decoder enc_hidden_states = output_dict["encoder_hidden_states"][-1] dec_attn_mask = ~(candidate_id == pad_id) enc_attn_mask = doc_inp_mask.unsqueeze(1).expand(batch_size, beam_size, seq_len).contiguous().view(-1, seq_len) enc_hidden_states = enc_hidden_states.unsqueeze(1).expand(batch_size, beam_size, seq_len, hidden_size).contiguous().view(-1, seq_len, hidden_size) hidden_states_from_decoder, _ = model.get_decoder()(input_ids=candidate_id, attention_mask=dec_attn_mask, encoder_hidden_states=enc_hidden_states, encoder_attention_mask=enc_attn_mask, past_key_values=None, return_dict=False) # since min_len > 10 the third hidden_states of beam #0 should be equal output_test_beam0 = hidden_states_from_output[0][3] decoder_test_beam0 = hidden_states_from_decoder[0][3] print(output_test_beam0) print(decoder_test_beam0) print(my_equal(output_test_beam0,decoder_test_beam0)) # work well # the third hidden_states of beam #1 should be equal output_test_beam1 = hidden_states_from_output[1][3] decoder_test_beam1 = hidden_states_from_decoder[1][3] print(output_test_beam1) print(decoder_test_beam1) print(my_equal(output_test_beam1,decoder_test_beam1)) # cause problems ``` output: ``` tensor([ 0.7259, -0.6550, 0.1562, ..., 0.0010, -0.3777, 0.3754]) tensor([ 0.7259, -0.6550, 0.1562, ..., 0.0010, -0.3777, 0.3754], grad_fn=<SelectBackward>) True tensor([ 0.5364, -1.5238, 0.5194, ..., 0.6196, -0.7760, 0.2776]) tensor([ 0.7259, -0.6550, 0.1562, ..., 0.0010, -0.3777, 0.3754], grad_fn=<SelectBackward>) False ``` Is there any method I can get correct the decoder_hidden_states from generate() function ?
10-08-2021 10:29:00
10-08-2021 10:29:00
Aha! I think I've found the answer: the generate() function does not align the generated ids with their corresponding hidden states. I will send a pull request once I finish the code.<|||||>That sounds great, thanks @ChenxinAn-fdu!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
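As a rough illustration of the workaround discussed above, the sketch below re-runs the decoder once on the final `sequences` so that hidden states and token ids are guaranteed to line up. It assumes transformers 4.11.x and facebook/bart-large-cnn as in the issue; the input text and beam settings are placeholders, and this is not the fix that was eventually merged.

```python
# Workaround sketch: instead of relying on the per-step decoder_hidden_states returned by
# generate() under beam search, re-feed the final sequences through the decoder once.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer(["Some long article to summarize ..."], return_tensors="pt")
out = model.generate(
    **inputs,
    num_beams=2,
    num_return_sequences=2,
    max_length=40,
    return_dict_in_generate=True,
    output_hidden_states=True,
)

# Re-run the decoder on the generated ids against the (expanded) encoder states.
enc = model.get_encoder()(**inputs).last_hidden_state            # [1, src_len, hidden]
enc = enc.repeat_interleave(2, dim=0)                             # one copy per returned beam
enc_mask = inputs["attention_mask"].repeat_interleave(2, dim=0)
dec_out = model.get_decoder()(
    input_ids=out.sequences,
    attention_mask=(out.sequences != model.config.pad_token_id).long(),
    encoder_hidden_states=enc,
    encoder_attention_mask=enc_mask,
)
aligned_hidden_states = dec_out.last_hidden_state                 # aligned with out.sequences
```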
transformers
13,932
closed
Licensing of LayoutLM models
Hi, can you clarify the licensing of the LayoutLM models provided on Hugging Face? It is very ambiguous right now. Ref: https://github.com/microsoft/unilm/issues/352
10-08-2021 09:39:00
10-08-2021 09:39:00
Hello @athuljays, thank you for your report. We're currently looking into it and will remove any ambiguity in the coming days.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I've updated the license of LayoutLMv2 and LayoutXLM on the hub to CC BY-NC-SA 4.0 as stated on [this page](https://github.com/microsoft/unilm/tree/master/layoutlmft).<|||||>@NielsRogge, How about the usage of just the code from HuggingFace? Could we use LayoutLMv2 code from HuggingFace, pretrain our own model from scratch and use it in production? <|||||>@NielsRogge, Can someone from your team confirm on the above comment: https://github.com/huggingface/transformers/issues/13932#issuecomment-1387206937
transformers
13,931
closed
'BertTokenizer' object has no attribute 'tokens_trie'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - Used machine: google colab - `transformers` version: 4.11.3 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: - [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I have a trained model on custom tags (for invoices) 1. I passed a text block to the model using `model.predict([block])` 2. 
The error raised ``` Traceback (most recent call last): File "/usr/lib/python3.7/multiprocessing/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_utils.py", line 233, in convert_examples_with_multiprocessing for example in example_group File "/usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_utils.py", line 233, in <listcomp> for example in example_group File "/usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_utils.py", line 280, in convert_example_to_feature word_tokens = tokenizer.tokenize(word) File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py", line 476, in tokenize tokens = self.tokens_trie.split(text) AttributeError: 'BertTokenizer' object has no attribute 'tokens_trie' """ The above exception was the direct cause of the following exception: AttributeError Traceback (most recent call last) /tmp/ipykernel_1136/2044312605.py in <module> 1 extraction_info=[] 2 for txt in text_blocks.get_texts(): ----> 3 prediction, model_output = model_bert.predict([txt]) 4 extraction_info.append(prediction) 5 print(prediction, end='\n---\n') 12 frames /usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_model.py in predict(self, to_predict, split_on_space) 1458 1459 eval_dataset = self.load_and_cache_examples( -> 1460 None, to_predict=predict_examples 1461 ) 1462 eval_sampler = SequentialSampler(eval_dataset) /usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_model.py in load_and_cache_examples(self, data, evaluate, no_cache, to_predict) 1738 chunksize=args.multiprocessing_chunksize, 1739 mode=mode, -> 1740 use_multiprocessing_for_evaluation=args.use_multiprocessing_for_evaluation, 1741 ) 1742 /usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_utils.py in convert_examples_to_features(examples, label_list, max_seq_length, tokenizer, cls_token_at_end, cls_token, cls_token_segment_id, sep_token, sep_token_extra, pad_on_left, pad_token, pad_token_segment_id, pad_token_label_id, sequence_a_segment_id, mask_padding_with_zero, process_count, chunksize, silent, use_multiprocessing, mode, use_multiprocessing_for_evaluation) 471 ), 472 total=len(examples), --> 473 disable=silent, 474 ) 475 ) /usr/local/lib/python3.7/dist-packages/tqdm/notebook.py in __iter__(self) 255 def __iter__(self): 256 try: --> 257 for obj in super(tqdm_notebook, self).__iter__(): 258 # return super(tqdm...) will not catch exception 259 yield obj /usr/local/lib/python3.7/dist-packages/tqdm/std.py in __iter__(self) 1178 1179 try: -> 1180 for obj in iterable: 1181 yield obj 1182 # Update and possibly print the progressbar. 
/usr/lib/python3.7/multiprocessing/pool.py in <genexpr>(.0) 323 result._set_length 324 )) --> 325 return (item for chunk in result for item in chunk) 326 327 def imap_unordered(self, func, iterable, chunksize=1): /usr/lib/python3.7/multiprocessing/pool.py in next(self, timeout) 746 if success: 747 return value --> 748 raise value 749 750 __next__ = next # XXX /usr/lib/python3.7/multiprocessing/pool.py in worker() 119 job, i, func, args, kwds = task 120 try: --> 121 result = (True, func(*args, **kwds)) 122 except Exception as e: 123 if wrap_exception and func is not _helper_reraises_exception: /usr/lib/python3.7/multiprocessing/pool.py in mapstar() 42 43 def mapstar(args): ---> 44 return list(map(*args)) 45 46 def starmapstar(args): /usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_utils.py in convert_examples_with_multiprocessing() 231 mask_padding_with_zero, 232 ) --> 233 for example in example_group 234 ] 235 /usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_utils.py in <listcomp>() 231 mask_padding_with_zero, 232 ) --> 233 for example in example_group 234 ] 235 /usr/local/lib/python3.7/dist-packages/simpletransformers/ner/ner_utils.py in convert_example_to_feature() 278 for i, (word, label) in enumerate(zip(example.words, example.labels)): 279 if example.tokenized_word_ids is None: --> 280 word_tokens = tokenizer.tokenize(word) 281 else: 282 word_tokens = example.tokenized_word_ids[i] /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py in tokenize() 474 475 no_split_token = set(self.unique_no_split_tokens) --> 476 tokens = self.tokens_trie.split(text) 477 # ["This is something", "<special_token_1>", " else"] 478 for i, token in enumerate(tokens): AttributeError: 'BertTokenizer' object has no attribute 'tokens_trie' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The predict method should return me a model_output and a list of tagged words: `[[{'Philip': 'NAME'}, {'Norris,': 'NAME'}, {'Inc.': 'O'}, {'Research': 'O'}, {'Counter': 'O'}, {'P.': 'O'}, {'0.': 'O'}, {'Box': 'O'}, {'3D': 'O'}, {'Richsond': 'NAME'}, {'6,': 'DATE'}, {'Virginia': 'LOCATION'}, {'Att:': 'O'}, {'Mr.': 'O'}, {'Frank': 'NAME'}, {'Resnik': 'NAME'}]]` <!-- A clear and concise description of what you would expect to happen. -->
10-08-2021 09:09:58
10-08-2021 09:09:58
Hello! Could you post a full reproducible code sample so that we may take a look? Thank you, cc @Narsil <|||||>Hi, `tokens_trie` should be initialized in the `__init__` of every slow tokenizer. Did you copy the tokenizer in some way? Could it be that the tokenizer gets moved to another thread, maybe?<|||||>I have the same issue with the `DistilBertTokenizer` since the version was bumped to >=4.11.0: <Result AttributeError("'DistilBertTokenizer' object has no attribute 'tokens_trie'")><|||||>@gbmarc1 do you have a more complete script so we can reproduce?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
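A minimal sketch of the workaround implied by the discussion above: rebuild the tokenizer with `from_pretrained` instead of reusing an instance that was pickled or copied under an older transformers version (the `tokens_trie` attribute is assumed here to exist only on tokenizers constructed with 4.11+). The checkpoint name and sample text are illustrative.

```python
# Rebuild a slow tokenizer instead of reusing a stale or copied instance.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Defensive check before handing the tokenizer to multiprocessing workers:
# a freshly constructed tokenizer carries the trie used by tokenize().
if not hasattr(tokenizer, "tokens_trie"):
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

print(tokenizer.tokenize("Philip Norris, Inc."))
```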
transformers
13,930
closed
Fix fast tokenization problems
# What does this PR do? This PR addresses two problems of fast tokenization: 1. The special tokens specified directly from kwargs were not sanitized. (not added through `tokenizer._add_tokens` function.) **Edited: We finally decided to not include this change in this PR.** ```python from transformers import AlbertTokenizerFast tokenizer = AlbertTokenizerFast("tests/fixtures/spiece.model", mask_token="[OAO]") print(tokenizer._mask_token) print(tokenizer.tokenize('[OAO]')) # Outputs: # [OAO] # ['▁[', 'o', 'ao', ']'] # Fixed outputs: # [OAO] # ['[OAO]'] ``` 2. The special handling of `Albert`'s `[MASK]` token incorrectly normalizes texts to be matched. (Special tokens should exactly match original text not the normalized one.) ```python from transformers import AlbertTokenizerFast tokenizer = AlbertTokenizerFast("tests/fixtures/spiece.model") print(tokenizer._mask_token) print(tokenizer.tokenize('[MASK]')) # Outputs: # [MASK] # ['▁[', 'mask', ']'] # Fixed outputs: # [MASK] # ['[MASK]'] ``` ## Who can review? @LysandreJik @sgugger @SaulLu
10-08-2021 08:54:25
10-08-2021 08:54:25
> I'll let @SaulLu decide on tis since she is the one who suggested the changes :-) (FYI she's on vacation until the end of next week.) Thanks, no problem.<|||||>Hi @SaulLu, Thank you very much for your detailed reply! I'm sorry if I didn't explain the purpose of these changes clearly. For the first change, please temporarily ignore the inconsistency between the slow and fast ones as I just used the fast one as an example. You can see in the below snippet, the problem is that currently if we specify a new cls token, it will not be treated as a token as it does not exist in the vocabulary. However, after saving this tokenizer and loading it again through `from_pretrained` method, it will surprisingly be treated as a token because the `from_pretrained` method calls `tokenizer.sanitize_special_tokens()` as the below link shows. Therefore, I think this is a weird behavior that tokenizer states before reloaded and after reloaded are not the same. What do you think about this? https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/tokenization_utils_base.py#L1930 ``` tokenizer = AlbertTokenizerFast("tests/fixtures/spiece.model", cls_token="[OAO]") print(tokenizer.tokenize('[OAO]')) # Output: ['▁[', 'o', 'ao', ']'] tokenizer.save_pretrained("./test_storage") tokenizer = AlbertTokenizerFast.from_pretrained("./test_storage") print(tokenizer.tokenize('[OAO]')) # Output: ['[OAO]'] ``` Also, I was actually not aware of the workflow docstring you mentioned, maybe we can add this to constructor docstring or another place for a better noticeability? (Users might not be aware of this workflow if they don't use `from_pretrained` method.) For the second change, I will reply on your review threads. Thank you again for taking care of this!<|||||>Hi @qqaatw, Thank you so much for your detailed answers! :hugs: I perfectly understand why that this behavior surprises you now! Your proposal seems to me very relevant but I am still a bit worried about the side effects that this could bring because it is a change that will affect all tokenizers and I personnaly need some time to examine its impact - in particular given the previous workflow recommended in the docstring. On this subject, if this change is accepted, it will probably require dedicated tests. In order not to block the rest, maybe the best thing would be to divide this PR in two: the change of the arguments for the mask token on one side and the change of the value of a special token in the init on the other. What do you think about it?<|||||>Hello @SaulLu, Thanks for your reply! Sure, I can separate these to different PRs.
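A short sketch of the behaviour gap being debated, using the `sanitize_special_tokens()` call referenced in the thread as a manual stand-in for what `from_pretrained` does. The fixture path and the `[OAO]` token follow the examples above, and the exact outputs are assumptions.

```python
# Assumption: tests/fixtures/spiece.model is available, as in the discussion above.
from transformers import AlbertTokenizerFast

tokenizer = AlbertTokenizerFast("tests/fixtures/spiece.model", cls_token="[OAO]")
print(tokenizer.tokenize("[OAO]"))   # before the fix: ['▁[', 'o', 'ao', ']']

# from_pretrained() calls this internally, which is why a save/reload "fixes" it;
# calling it by hand adds the constructor-supplied special tokens to the vocabulary.
tokenizer.sanitize_special_tokens()
print(tokenizer.tokenize("[OAO]"))   # expected after sanitizing: ['[OAO]']
```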
transformers
13,929
closed
AttributeError: 'SequenceClassifierOutput' object has no attribute 'pooler_output'
I am training the `DistilBert` model for `multi-label` sequence classification. ``` from transformers import DistilBertTokenizer, DistilBertModel import torch import torch.nn as nn import pytorch_lightning as pl tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') model = DistilBertModel.from_pretrained('distilbert-base-uncased',return_dict=True) n_classes=100 criterion = nn.BCELoss() Batch_size=16 class SRTagger(pl.LightningModule): def __init__(self, n_classes: int, n_training_steps=None, n_warmup_steps=None): super().__init__() self.save_hyperparameters() self.bert = DistilBertModel.from_pretrained('distilbert-base-uncased',return_dict=True) self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes) self.n_training_steps = n_training_steps self.n_warmup_steps = n_warmup_steps self.criterion = nn.BCELoss() def forward(self, input_ids, attention_mask, labels=None): output = self.bert(input_ids, attention_mask=attention_mask) output = self.classifier(output.pooler_output) output = torch.sigmoid(output) loss = 0 if labels is not None: loss = self.criterion(output, labels) return loss, output ``` It throws the error: **AttributeError: 'SequenceClassifierOutput' object has no attribute 'pooler_output'** Can I make changes to the above code to overcome this?
10-08-2021 04:58:56
10-08-2021 04:58:56
As stated in the [docs](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertmodel) of DistilBERT, DistilBertModel doesn't include a `pooler_output` in its outputs. It only includes: * `last_hidden_state` * `hidden_states` * `attentions`. By default, only `last_hidden_state` is returned. <|||||>> As stated in the [docs](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertmodel) of DistilBERT, DistilBertModel doesn't include a `pooler_output` in its outputs. It only includes: > > * `last_hidden_state` > * `hidden_states` > * `attentions`. > > By default, only `last_hidden_state` is returned. What changes can we make to above code? Like including some layers to tackle this?<|||||>Actually, the easiest way to fine-tune DistilBERT for multi-label classification is my initializing a `DistilBertForSequenceClassification` model, setting `problem_type` to be "multi_label_classification": ``` from transformers import DistilBertForSequenceClassification model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", problem_type="multi_label_classification") ``` The problem_type argument is something that was added recently, the supported models are stated in the [docs](https://huggingface.co/transformers/master/main_classes/configuration.html?highlight=problem_type). In that way, it will automatically use the appropriate loss function for multi-label classification, which is the BCEWithLogitsLoss as can be seen [here](https://github.com/huggingface/transformers/blob/d70919e6d5629f3f4d19aefff3eec1219055f003/src/transformers/models/distilbert/modeling_distilbert.py#L764). In that way, you can easily provide your `labels` - which should be of shape (batch_size, num_labels).<|||||>> Actually, the easiest way to fine-tune DistilBERT for multi-label classification is my initializing a `DistilbertForSequenceClassification` model, setting `problem_type` to be "multi_label_classification": > > ``` > from transformers import DistilBertForSequenceClassification > > model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", problem_type="multi_label_classification") > ``` > > The problem_type argument is something that was added recently, the supported models are stated in the [docs](https://huggingface.co/transformers/master/main_classes/configuration.html?highlight=problem_type). In that way, it will automatically use the appropriate loss function for multi-label classification, which is the BCEWithLogitsLoss as can be seen [here](https://github.com/huggingface/transformers/blob/d70919e6d5629f3f4d19aefff3eec1219055f003/src/transformers/models/distilbert/modeling_distilbert.py#L764). > > In that way, you can easily provide your `labels` - which should be of shape (batch_size, num_labels). Then can we apply this on top of it? **self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes=100)** How can we pass number of classes/labels here?<|||||>> Then can we apply this on top of it? > self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes=100) No, that's actually what `DistilBertForSequenceClassification` does for you: it adds a linear layer on top of the base DistilBERT model. If you call the `.from_pretrained()` method on it, it will initialize the base model with the pretrained weights from the checkpoint on the [hub](https://huggingface.co/distilbert-base-uncased), but the linear layer on top (also called the classifier head) will have randomly initialized weights. 
This is also returned as a warning when you run it: ``` from transformers import DistilBertForSequenceClassification model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", problem_type="multi_label_classification") ``` returns the following warning: ``` Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.bias', 'classifier.weight', 'pre_classifier.weight', 'classifier.bias'] ``` These are the parameters of the classification head on top, which need to be fine-tuned (together with the base model) on your custom dataset.<|||||>> > Then can we apply this on top of it? > > self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes=100) > > No, that's actually what `DistilBertForSequenceClassification` does for you: it adds a linear layer on top of the base DistilBERT model. If you call the `.from_pretrained()` method on it, it will initialize the base model with the pretrained weights from the checkpoint on the [hub](https://huggingface.co/distilbert-base-uncased), but the linear layer on top (also called the classifier head) will have randomly initialized weights. This is also returned as a warning when you run it: > > ``` > from transformers import DistilBertForSequenceClassification > > model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", problem_type="multi_label_classification") > ``` > > returns the following warning: > > ``` > Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.bias', 'classifier.weight', 'pre_classifier.weight', 'classifier.bias'] > ``` > > These are the parameters of the classification head on top, which need to be fine-tuned (together with the base model) on your custom dataset. **Got it. But where do I pass the number of classes?**<|||||>You can pass it when initializing the model (sorry should have mentioned that), like so: ``` from transformers import DistilBertForSequenceClassification model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", problem_type="multi_label_classification", num_labels=10) ``` <|||||>Thanks. Closing!!
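For completeness, a hedged sketch of the other route hinted at above: keeping a custom head but pooling the `[CLS]` position of `last_hidden_state`, since DistilBertModel exposes no `pooler_output`. This is an illustrative adaptation of the original module, not code from the thread.

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel

class SRTagger(nn.Module):
    def __init__(self, n_classes: int = 100):
        super().__init__()
        self.bert = DistilBertModel.from_pretrained("distilbert-base-uncased", return_dict=True)
        self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes)
        self.criterion = nn.BCEWithLogitsLoss()  # numerically safer than sigmoid + BCELoss

    def forward(self, input_ids, attention_mask, labels=None):
        output = self.bert(input_ids, attention_mask=attention_mask)
        # DistilBERT has no pooler: take the hidden state of the first ([CLS]) token instead.
        pooled = output.last_hidden_state[:, 0]
        logits = self.classifier(pooled)
        loss = self.criterion(logits, labels) if labels is not None else 0
        return loss, torch.sigmoid(logits)
```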
transformers
13,928
closed
[megatron_gpt2] dynamic gelu, add tokenizer, save config
This PR improves `megatron_gpt2/convert_megatron_gpt2_checkpoint.py`: - dynamically figures out the correct activation function based on how megatron was trained - dynamically figures out the correct tokenizer and sets `config.tokenizer_class` - adds tokenizer files - switches to `config.save_pretrained` to ensure the config gets the right bits of the most recent `transformers` Some of this functionality has been discussed here https://github.com/huggingface/transformers/issues/13906
10-08-2021 00:33:13
10-08-2021 00:33:13
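A hedged illustration of the config handling this PR describes (recording the tokenizer class and persisting the config via `save_pretrained`); the activation and tokenizer values below are placeholders, not the converter's actual detection logic.

```python
# Illustrative only: how a conversion script can record the tokenizer and persist the config.
from transformers import GPT2Config

config = GPT2Config(activation_function="gelu_fast")   # value depends on how Megatron was trained
config.tokenizer_class = "GPT2Tokenizer"               # picked dynamically in the real script
config.save_pretrained("converted_checkpoint")         # writes config.json with current metadata
```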
transformers
13,927
closed
valhalla/distilbart-mnli-12-3' is a correct model identifier listed on 'https://huggingface.co/models'
I was using AWS Batch Transform with this model and it worked fine, but since last week it somehow complains with this message: ```valhalla/distilbart-mnli-12-3' is a correct model identifier listed on 'https://huggingface.co/models'``` However, I do see that this is a valid model config. Are you aware of this on your side?
10-07-2021 22:53:05
10-07-2021 22:53:05
Hi! Could you post the full stack trace?<|||||>Hello! this was only related trace i retrived. Again, all works except when i try with aws batch transform with given model.tar.gz created from torchServe. It was working and somehow it started to fail with this error (cant find this model path from huggingface model) from last week..<|||||>if this happens only on AWS then @philschmid might have some idea.<|||||>Appreciate. Please do let me know as this is quiet unexpected failure which blocks me quiet hard :( <|||||>Hey @muiPomeranian, what are you using to run your Batch Transform job? What is the content of your `model.tar.gz` folder structure? We have a working example for batch transformer using the Hugging Face DLCs here: https://github.com/huggingface/notebooks/blob/master/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb Where you don't need to provide a `model.tar.gz` and can directly use the `MODEL_ID` from the Hub. <|||||>Thanks for the answer. I was providing model.tar.gz file created by torchServe which contains handler.py Finetuned model checkpoint and handler.py uses AutoTokenizer.from_pretrained(path) this was working fully since last week... but somehow it complains they cant find above given model config path from huggingface. Is there any issue regarding to this? We didnt make any changes on our batch transform script..<|||||>Not sure about how this works with using `torchserve` native `pytorch` implementation. If you take a look at the sample i provided, which uses a `HuggingFaceModel` you neither need to provide a `model.tar.gz` nor a `inference.py` to run a Batch transform job. ```python from sagemaker.huggingface.model import HuggingFaceModel # Hub Model configuration. <https://huggingface.co/models> hub = { 'HF_MODEL_ID':'valhalla/distilbart-mnli-12-3', 'HF_TASK':'summarization' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( env=hub, # configuration for loading model from Hub role=role, # iam role with permissions to create an Endpoint transformers_version="4.6", # transformers version used pytorch_version="1.7", # pytorch version used py_version='py36', # python version used ) # create Transformer to run our batch job batch_job = huggingface_model.transformer( instance_count=1, instance_type='ml.p3.2xlarge', output_path=output_s3_path, # we are using the same s3 path to save the output with the input strategy='SingleRecord') # starts batch transform job and uses s3 data as input batch_job.transform( data=s3_file_uri, content_type='application/json', split_type='Line') ```<|||||>So is there any ongoing issue between aws sagemaker batch transform and huggingface model loading for tokenizer? I am wondering why suddenly i see this weird error.... Also, how can i use finetuned model with your provided example? I have finetuned model at my checkpoint and i packaged it before but seems the example you provided only works with your model in hf. (I cant release my finetuned model due to company policy) :(<|||||>> So is there any ongoing issue between aws sagemaker batch transform and huggingface model loading for tokenizer? I am wondering why suddenly i see this weird error.... Not sure what you mean by that, we haven't seen issue with batch transform and hugging face models, when using the `HuggingFaceModel`. I thought you were using the weights from the hub. 
You can provide them like that ```python # create Hugging Face Model Class huggingface_model = HuggingFaceModel( model_data=model_uri, # configuration for loading model from Hub role=role, # iam role with permissions to create an Endpoint transformers_version="4.6", # transformers version used pytorch_version="1.7", # pytorch version used py_version='py36', # python version used env=hub ) ``` `model_uri` is the s3 uri to your `model.tar.gz` and the `model.tar.gz` just needs to contain the required model & tokenizer files, e.g. `pytorch_model.bin` etc. You can create the archive like that, expecting `my_model` contains the weights and tokenizer files ```bash cd my_model && tar zcvf model.tar.gz * ``` When using your fine-tuned you might still need to provide the `hub` configuration with ```python hub = { 'HF_TASK':'summarization' } ```<|||||>Appreciate for the response again! I may have few more follow ups if you dont mind... (1) So instead of providing custom hanlder.py, we can directly use hf internal handler like hub = { 'HF_TASK':'classification' } And i guess multiRecord will be supported right? (2) And also is there any way that i can customize handler? Eg, i want to use customized input_id such as third_input_id and third_attention_mask. also is there any timeout i could customize? (3) Does this mean that i dont need to use torchServe to create model.tar.gz but just directly make tar file with folder which has weights and tokenizer file? cd my_model && tar zcvf model.tar.gz * thanks again !! <|||||>(1) Yes, we provide a custom hf internal handler which leverages the `pipelines` module and has a similar API as the Inference-API: documentation here: https://huggingface.co/docs/sagemaker/inference#inference-toolkit---api-description `multiRecord` is currently not supported by the hf internal handler. See (2) on how to leverage multiRecord (2) Yes you can customize your handler by creating an `inference.py` script in their you have can overwrite the same methods as you know including `input_fn`, `predict_fn`, `output_fn` and `model_fn`. There are two option to provide the `inference.py` afterwards, either you bundle it into the `model.tar.gz` but it needs to be in a `code/` or you add it to your `HuggingFaceModel` as `entry_point`: documentation here: https://huggingface.co/docs/sagemaker/inference#user-defined-codemodules (3) Yes you can easily manually create those `model.tar.gz` for the `HuggingFaceModel` <|||||>Hello again :( sorry but I swtiched tokenizer to `bart-large-mnli` and somehow now i am seeing ``` W-9069-model_0-stdout MODEL_LOG - requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/models/facebook/bart-large-mnli ``` Although sagemaker endpoint is able to load tokenizer from huggingface, but somehow batch transform only raise this error... Why now im seeing 403 error ... ? <|||||>Are you running with some VPC configuration? Can you please share the code you use? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,926
closed
Honor existing attention mask in tokenzier.pad
# What does this PR do? In #13914, it was highlighted that the `tokenizer.pad` method erases existing `attention_mask` instead of padding them. While it should definitely create them if they don't exist, it shouldn't erase a mask that is passed. This PR addresses that and adds a common test. Fixes #13914
10-07-2021 21:57:06
10-07-2021 21:57:06
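A short sketch of the behaviour this PR establishes, assuming a transformers build that includes the patch: an attention_mask supplied by the caller should be padded rather than recreated. The token ids below are illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Encodings with a hand-crafted attention_mask (the second token deliberately masked out).
features = [
    {"input_ids": [101, 7592, 102], "attention_mask": [1, 0, 1]},
    {"input_ids": [101, 7592, 2088, 999, 102], "attention_mask": [1, 1, 1, 1, 1]},
]
batch = tokenizer.pad(features, padding=True, return_tensors="pt")
# After the fix, the first row keeps its 0 and only gains padding zeros on the right.
print(batch["attention_mask"])
```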
transformers
13,925
open
[performance doc] add tables with DDP/DP/no-DP benchmarks
Users ask about DDP vs. DP vs. single GPU, with and without NVLink, and this summary is insufficient: https://huggingface.co/transformers/performance.html#nvlink Luckily we have great numbers posted, so we should link to those from the guide: 1. table 1, where DDP wins by miles over any other setup: https://github.com/huggingface/transformers/issues/9371#issuecomment-768656711 2. table 2, where DDP is terrible: https://github.com/huggingface/transformers/issues/9371#issuecomment-771302472
10-07-2021 21:24:57
10-07-2021 21:24:57
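As a generic, hedged illustration of what those two tables compare, the sketch below wraps the same model in DataParallel versus DistributedDataParallel; it is not the benchmark script from the linked issues and assumes a single node with two GPUs.

```python
# Generic DP vs. DDP wrapping for a benchmark (assumes one node, two local GPUs).
import torch
from torch.nn.parallel import DataParallel, DistributedDataParallel

def wrap(model, mode):
    if mode == "dp":        # single process, replicates the model on every step
        return DataParallel(model.cuda())
    if mode == "ddp":       # one process per GPU, launched e.g. via torch.distributed.launch
        torch.distributed.init_process_group("nccl")
        local_rank = torch.distributed.get_rank()
        torch.cuda.set_device(local_rank)
        return DistributedDataParallel(model.cuda(local_rank), device_ids=[local_rank])
    return model.cuda()     # plain single-GPU baseline
```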
transformers
13,924
closed
rgwwfq
https://twitter.com/i/events/1445809158612267009 https://twitter.com/i/events/1445810128742014983 https://twitter.com/i/events/1445810868453732354 https://twitter.com/i/events/1445811643233890307 https://twitter.com/i/events/1445813748271247363 https://twitter.com/i/events/1445814196017537027 https://twitter.com/i/events/1445815001994854418 https://twitter.com/i/events/1445816928778457091 https://twitter.com/i/events/1445851201002631176 https://twitter.com/i/events/1445851897542295552 https://twitter.com/i/events/1445852186282446852 https://twitter.com/i/events/1445852523601043462 https://twitter.com/i/events/1445852932113674240 https://twitter.com/i/events/1445853683195973633 https://twitter.com/i/events/1445854031482474498 https://twitter.com/i/events/1445854458487869443 https://twitter.com/i/events/1445854794799669248 https://twitter.com/i/events/1445855323672219650 https://twitter.com/i/events/1445855701239275525 https://twitter.com/i/events/1445856296096391168 https://twitter.com/i/events/1445856911652364288 https://twitter.com/i/events/1446068878837231621 https://twitter.com/i/events/1446069626182504449 https://twitter.com/i/events/1446071685527052294
10-07-2021 20:59:54
10-07-2021 20:59:54
transformers
13,923
closed
Rewrite guides for fine-tuning with Datasets
# What does this PR do? This PR updates the current documentation for [Fine-tuning with custom datasets](https://huggingface.co/transformers/custom_datasets.html#). It removes the custom code for loading a dataset in favor of using the Datasets library for loading and preprocessing a dataset. The new guide also introduces the Keras method for compiling and fitting a model instead of using `TFTrainingArguments` and `TFTrainer`. ⚠️ The TensorFlow example for Question Answering still needs a little work. It returns `ValueError: No gradients provided for any variable` when calling `model.fit`. @Rocketknight1 has provided a temporary solution where we set `dummy_labels=True` in `tf_to_dataset`. This [notebook](https://drive.google.com/file/d/1irNIGeRtfUb-4x79ZBloNy49m_FW-sOy/view?usp=sharing) contains all the code examples shown in the guide. # To do: - [ ] Update the TensorFlow code example for question answering once we have a better solution.
10-07-2021 20:23:27
10-07-2021 20:23:27
Thank you, @stevhliu!
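A hedged sketch of the workflow the rewritten guide promotes: load a dataset with the Datasets library, tokenize with `map`, and fine-tune. The dataset, checkpoint, and hyperparameters below are illustrative, and the PyTorch `Trainer` path is shown rather than the Keras path discussed above.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)
small_train = encoded["train"].shuffle(seed=42).select(range(1000))  # small subset for a quick run

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, train_dataset=small_train)
trainer.train()
```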
transformers
13,922
closed
Fixes a minor doc issue (missing character)
# What does this PR do? ``` `Dict[str, str], `optional` -> `Dict[str, str]`, `optional` ``` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
10-07-2021 15:59:35
10-07-2021 15:59:35
Thanks a lot Mishig!
transformers
13,921
closed
[Wav2Vec2] Fix mask_feature_prob
# What does this PR do? When using `mask_feature_prob > 0` it's important that the `attention_mask` is not passed as the attention_mask's dimension is `[batch_size, sequence_length]`. Since attention_mask = 1 vectors are padded anyways it doesn't matter whether some of those vectors are zero-ed out or not. Thus we should just never pass `attention_mask` for `mask_feature_prob > 0`. We however need it for `mask_time_prob` since `mask_time_prob` is used for `Wav2Vec2ForPretraining` where the created outputs will be used as labels.
10-07-2021 15:24:54
10-07-2021 15:24:54
@anton-l - feel free to merge if the change is OK for you :-)<|||||>Makes sense, thanks for adding the tests too! :+1:
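A brief, hedged sketch of where the two masking probabilities discussed in this PR live on the config; the values are arbitrary examples, not recommended settings.

```python
# SpecAugment-style masking knobs on the Wav2Vec2 config (illustrative values).
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining

config = Wav2Vec2Config(
    mask_time_prob=0.065,    # masks along the time axis; interacts with attention_mask / labels
    mask_feature_prob=0.1,   # masks along the feature axis; per this PR, applied without attention_mask
)
model = Wav2Vec2ForPreTraining(config)
```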
transformers
13,920
closed
'BertEncoder' object has no attribute 'gradient_checkpointing'
### Who can help @LysandreJik ## Information The model I am using is Bert. I get an error when I call the function test(). The function definition of 'test' is as follows: ``` def test(): bert.eval() bert_outputs = [] with torch.no_grad(): for unw, data in enumerate(test_loader, 0): ids = data['ids'].to(device, dtype = torch.long) mask = data['mask'].to(device, dtype = torch.long) token_type_ids = data['token_type_ids'].to(device, dtype = torch.long) targets = data['targets'].to(device, dtype = torch.float) outputs = bert(ids, mask, token_type_ids) bert_outputs.extend(torch.sigmoid(outputs).cpu().detach().numpy().tolist()) return bert_outputs ``` The call log is as follows: ``` AttributeError Traceback (most recent call last) <ipython-input-51-833f1f639ea7> in <module>() 18 test_loader = DataLoader(test_dataset, **bert_test_params) 19 ---> 20 test_outputs = test() 21 22 test_outputs = np.array(test_outputs) <ipython-input-50-050b63b5247c> in test() 9 token_type_ids = data['token_type_ids'].to(device, dtype = torch.long) 10 targets = data['targets'].to(device, dtype = torch.float) ---> 11 outputs = bert(ids, mask, token_type_ids) 12 13 bert_outputs.extend(torch.sigmoid(outputs).cpu().detach().numpy().tolist()) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] <ipython-input-45-1b7273ac2d08> in forward(self, ids, mask, token_type_ids, return_dict) 7 8 def forward(self, ids, mask, token_type_ids, return_dict = False): ----> 9 unw, out_1 = self.layer1(ids, attention_mask = mask, token_type_ids = token_type_ids)[0], self.layer1(ids, attention_mask = mask, token_type_ids = token_type_ids)[1] 10 out_2 = self.layer2(out_1) 11 out_final = self.layer3(out_2) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 1003 output_attentions=output_attentions, 1004 output_hidden_states=output_hidden_states, -> 1005 return_dict=return_dict, 1006 ) 1007 sequence_output = encoder_outputs[0] /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, 
encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 557 past_key_value = past_key_values[i] if past_key_values is not None else None 558 --> 559 if self.gradient_checkpointing and self.training: 560 561 if use_cache: /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 1129 return modules[name] 1130 raise AttributeError("'{}' object has no attribute '{}'".format( -> 1131 type(self).__name__, name)) 1132 1133 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None: AttributeError: 'BertEncoder' object has no attribute 'gradient_checkpointing' ``` I am trying to perform sentiment analysis on tweets. The same code worked well a few weeks back without giving any errors. However, now I get the error mentioned above. I tried searching online for possible fixes but could not find any for this specific problem.
10-07-2021 14:08:33
10-07-2021 14:08:33
Are you using `BertEncoder` rather than `BertModel`?<|||||>No, I am using BertModel. This is the definition of BERT Class. ``` class BERT(torch.nn.Module): def __init__(self): super().__init__() self.layer1 = transformers.BertModel.from_pretrained('bert-base-uncased') self.layer2 = torch.nn.Dropout(0.3) self.layer3 = torch.nn.Linear(768, 11) def forward(self, ids, mask, token_type_ids, return_dict = False): unw, out_1 = self.layer1(ids, attention_mask = mask, token_type_ids = token_type_ids)[0], self.layer1(ids, attention_mask = mask, token_type_ids = token_type_ids)[1] out_2 = self.layer2(out_1) out_final = self.layer3(out_2) return out_final model = BERT() model.to(device) ```<|||||>Probably related to this recent PR: #13657. Which version of Transformers are you on?<|||||>Thanks for sharing the PR. I am using 4.11.3 version of transformers.<|||||>@NielsRogge I am actually using a pre-trained bert model for a sentiment classification task. I was not able to completely follow the discussion on the PR that you have mentioned. Do the changes mean that I have re-train my model with a new set of parameters (with regards to gradient_checkpointing)? Also, is there a simpler workaround where I can use the same pre-trained models? If yes, what is it? Thanks for your help!<|||||>I'll let @sgugger handle this.<|||||>I am not able to reproduce the bug, @venkatesh-kulkarni you will need to give us a sample of code that fully reproduces it if you want us to help you.<|||||>So, here is the code that I am using. I am basically using a pre-trained bert base model fine-tuned on the SenWave dataset for multi-label sentiment classification. ``` MAX_LEN = 200 TRAIN_BATCH_SIZE = 1 VALID_BATCH_SIZE = 1 EPOCHS = 4 LEARNING_RATE = 1e-05 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') device = 'cuda' if torch.cuda.is_available() else 'cpu' ``` ``` class CustomDataset(Dataset): def __init__(self, dataframe, tokenizer, max_len): self.tokenizer = tokenizer self.dataframe = dataframe self.tweet = dataframe['Tweet'] self.targets = self.dataframe.list self.max_len = max_len def __len__(self): return len(self.tweet) def __getitem__(self, index): tweet = str(self.tweet[index]) tweet = " ".join(tweet.split()) inputs = self.tokenizer.encode_plus( tweet, None, add_special_tokens = True, max_length = self.max_len, pad_to_max_length = True, return_token_type_ids = True ) ids = inputs['input_ids'] mask = inputs['attention_mask'] token_type_ids = inputs['token_type_ids'] return { 'ids' : torch.tensor(ids, dtype = torch.long), 'mask' : torch.tensor(mask, dtype = torch.long), 'token_type_ids' : torch.tensor(token_type_ids, dtype = torch.long), 'targets' : torch.tensor(self.targets[index], dtype = torch.float) } ``` ``` class BERT(torch.nn.Module): def __init__(self): super().__init__() self.layer1 = transformers.BertModel.from_pretrained('bert-base-uncased') self.layer2 = torch.nn.Dropout(0.3) self.layer3 = torch.nn.Linear(768, 11) def forward(self, ids, mask, token_type_ids, return_dict = False): unw, out_1 = self.layer1(ids, attention_mask = mask, token_type_ids = token_type_ids)[0], self.layer1(ids, attention_mask = mask, token_type_ids = token_type_ids)[1] out_2 = self.layer2(out_1) out_final = self.layer3(out_2) return out_final model = BERT() model.to(device) ``` ``` bert = torch.load("path to pre-trained bert") bert ``` ``` def test(): bert.eval() bert_outputs = [] with torch.no_grad(): for unw, data in enumerate(test_loader, 0): ids = data['ids'].to(device, dtype = torch.long) mask = data['mask'].to(device, 
dtype = torch.long) token_type_ids = data['token_type_ids'].to(device, dtype = torch.long) targets = data['targets'].to(device, dtype = torch.float) outputs = bert(ids, mask, token_type_ids) bert_outputs.extend(torch.sigmoid(outputs).cpu().detach().numpy().tolist()) return bert_outputs ``` ``` new_df = pd.DataFrame() verses_df = pd.read_csv('path to csv file containing tweets') new_df['Tweet'] = verses_df['verse'] values = [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]] * len(verses_df) new_df['list'] = values test_dataset = CustomDataset(new_df, tokenizer, MAX_LEN) bert_test_params = {'batch_size': 1, 'shuffle': False, 'num_workers': 0 } test_loader = DataLoader(test_dataset, **bert_test_params) test_outputs = test() test_outputs = np.array(test_outputs) for i in range(test_outputs.shape[0]): for j in range(test_outputs.shape[1]): if test_outputs[i][j] >= 0.5: test_outputs[i][j] = 1 else: test_outputs[i][j] = 0 ``` Whenever I call test(), I get the error 'BertEncoder' object has no attribute 'gradient_checkpointing'. I hope these segments of code are enough to help you reproduce the error @sgugger . <|||||>I have no access to your personal files and ``` bert = torch.load("path to pre-trained bert") ``` cannot be run on my side. In general you should never save PyTorch models like this, just their weights. The problem that is probably happening is probably some conflict between the file you saved with a previous version of Transformers and the new one you have one your environment. You will need to revert your Transformers install to the same version you used to save your model to be able to load it again. Then I suggest you use the PyTorch recommended way of just saving the weights of your model, then reload those weights after initializing a `BERT` model. This way you will stay compatible across different versions of the library.<|||||>Updating the tranformers version to 4.10.1 fixed it. Thanks a lot for your help @sgugger <|||||>> Updating the tranformers version to 4.10.1 fixed it. Thanks a lot for your help @sgugger I met same error on 4.11.3, when I downgrade to 4.10.3, it became normal.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
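A small sketch of the fix recommended above: save only the weights rather than pickling the whole module, then rebuild the architecture and load the weights, so the checkpoint stays loadable across transformers versions. The file name is a placeholder and `BERT`/`device` refer to the definitions in the issue.

```python
import torch

# Save only the weights (as recommended above), not the pickled module.
torch.save(model.state_dict(), "bert_sentiment_weights.pt")

# Later, possibly on a different transformers version: rebuild the architecture, then load weights.
model = BERT()  # the custom wrapper class defined in the issue's code
model.load_state_dict(torch.load("bert_sentiment_weights.pt", map_location="cpu"))
model.to(device)
model.eval()
```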
transformers
13,919
closed
[Generation] Fix max_new_tokens
# What does this PR do? Fixes #13880 @Narsil - IMO we can deprecate the MaxNewTokensStoppingCriteria - don't really see why one would use this instead of MaxLengthsStoppingCriteria no?
10-07-2021 10:27:12
10-07-2021 10:27:12
@patil-suraj - it would be great if you could do a second review and, if you agree with the current code, feel free to merge. If you change your opinion having read @Narsil's comments, then let's maybe have a quick call about it, all three of us.<|||||>Documenting a longer internal discussion we had about the PR here:

> Ok, I've thought about it again a bit - and I really think the following is the right approach:
>
> Have `max_length` and `max_new_tokens` in `generate(...)`. I think we all agree that this is the correct way to go and that we should document the difference well (maybe also add some examples with `max_new_tokens`).
>
> Only have `MaxLengthCriteria` and deprecate `MaxNewTokensCriteria` -> this decision is not as straightforward and has the following pros and cons, where IMO the pros are more important:
>
> **Pros:**
> - Less error prone - stopping criteria cannot interfere with each other in `sample`, `greedy`, ... we prevent weird use cases that would currently break the beam scorer's length penalty etc.
> - More readable and easier to understand code. The objective of `sample`, `greedy` was to be very readable.
> - `MaxNewTokensCriteria` is not very useful from a user-experience point of view - a user has to re-instantiate `MaxNewTokensCriteria` at every call of `sample` or `generate()`. It's more natural to define stopping criteria and logits processors once and then leave them as is when calling `sample(...)`. I think almost all logits processors and stopping criteria do not depend on an input variable at init - `MaxNewTokensCriteria` does. E.g., writing:

```python
for input_ids in input_ids_list:
    stop_criteria = MaxNewTokensCriteria(20, input_ids.shape[-1])
    model.sample(input_ids, stop_criteria)
```

> is in general not very clean IMO.
> - It's difficult to correctly initialize `MaxNewTokensCriteria` in an encoder-decoder setting. Here people might also just do `stop_criteria = MaxNewTokensCriteria(20, input_ids.shape[-1])`, which would be wrong, as the correct initial length would be 1 (the BOS token the decoder starts with).
>
> **Cons:**
> - Just having `MaxLengthCriteria` might lead to silent errors as people think it's the max number of generated tokens and not the total number of tokens. This is a valid concern and we already know that this is the case for people using `generate(...)`. A remedy here is that 99% of people don't manually pass `StoppingCriteria` but will just use `generate(...)`. Also, IMO this problem can be resolved by providing good docs.
> - We have `max_new_tokens` but no `MaxNewTokensCriteria`. I don't think that this is a strong argument though, as we have many generate inputs that have no corresponding logits processor, e.g. `length_penalty`.
>
> => So I'll merge the PR now as is and document the discussion we had here.
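To make the distinction discussed above concrete, a small illustrative snippet (the model choice and prompt are arbitrary and not taken from the PR):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer("Hello, my dog is", return_tensors="pt").input_ids

# max_length counts the prompt tokens too: at most 20 tokens in total
out_total = model.generate(input_ids, max_length=20)

# max_new_tokens counts only the newly generated tokens: prompt length + up to 20 new tokens
out_new = model.generate(input_ids, max_new_tokens=20)

print(tokenizer.decode(out_total[0]))
print(tokenizer.decode(out_new[0]))
```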
transformers
13,918
closed
Adding support for tokens being suffixes or part of each other.
# What does this PR do? Fixes issues with reported utilization of protein tokens, where some special tokens might be part of other tokens, of suffixes of each other. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @sgugger Ran slow tests on same box, and got the same results as last time. Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-07-2021 07:51:37
10-07-2021 07:51:37
transformers
13,917
closed
input decoder embedding for model.generate()
Hi, I'm using the T5 model and am trying to generate a sentence using model.generate(). However, rather than using `decoder_input_ids`, which gives the input ids for the decoder part of T5, I would like to give a specific embedding I have to the decoder. Is there any way I can give a specific decoder input embedding to model.generate()? I noticed that in the T5 model I can give the embedding via `decoder_inputs_embeds`. Does this work in model.generate() too? Thanks :)
10-07-2021 07:50:03
10-07-2021 07:50:03
cc @patrickvonplaten <|||||>Hi there, I've also encountered the problem myself, while I've attempted to use GPT causal model instead of Google T5 as my base model. I was meant to use beam search for next sentence generation, while adding some self-defined vectors as prefix (which are not inside the embeddings). While tests on self-implemented algorithms is successful in predicting the sentence, the method is tremendously slow compared to encapsulated method `model.generate()`. But a change to the method generate() shows several error, for `input_ids` is a required parameter, while I was trying to use the concatenated embedding vector as input. The source doc shows that **model_kwargs will check for additional parameter `inputs_embeds`: `if input_ids is None and "inputs_embeds" not in model_kwargs:` but even if I inserted the parameter, the method will still check for `input_ids`, and throw out an error for there's no "input_ids" in input parameters.<|||||>@aaafc Hi, Thanks for sharing the issue. I looked through the codes in `generation_utils.py` and also noticed the same problem. Though we pass `decoder_inputs_embeds` in the `model.generate()`, `model_inputs` in https://github.com/huggingface/transformers/blob/2024faf1713272464336ad97f371787b24cb3c53/src/transformers/generation_utils.py#L1785 doesn't contain the argument. I could partially solve the issue by separating the case where `decoder_inputs_embeds` value is passed or not ``` if decoder_inputs_embeds is None: outputs = self( **model_inputs, return_dict=True, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) else: del model_inputs['decoder_input_ids'] outputs = self( **model_inputs, decoder_inputs_embeds=decoder_inputs_embeds return_dict=True, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) ``` However, the issue still exists in that the last part of the process where the embeddings change to ids and are passed on to the next iteration by adding into input_ids. (https://github.com/huggingface/transformers/blob/2024faf1713272464336ad97f371787b24cb3c53/src/transformers/generation_utils.py#L1851) This part should also be changed to a separate case where it is passed by ***ids*** or ***embeds***. I'm still working on how to change the whole iteration process by adding and passing the embedding rather than ids. It would be helpful if you leave any ideas or suggestions on this :) <|||||>Actually, I think I solved the issue by changing part under https://github.com/huggingface/transformers/blob/2024faf1713272464336ad97f371787b24cb3c53/src/transformers/generation_utils.py#L1773 as ``` iter_by_emb=True while True: ... if decoder_inputs_embeds is None: outputs = self( **model_inputs, return_dict=True, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) else: del model_inputs['decoder_input_ids'] outputs = self( **model_inputs, decoder_inputs_embeds=decoder_inputs_embeds return_dict=True, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) .... # argmax next_tokens = torch.argmax(next_tokens_scores, dim=-1) if iter_by_emb: next_embs = self.get_input_embeddings()(next_tokens).unsqueeze(0) .... 
input_ids = torch.cat([input_idds, next_tokens[:, None]], dim=-1) if iter_by_emb: decoder_inputs_embeds = next_embs ``` I checked by running code a modified code from huggingface doc of `T5ForConditionalGeneration` (https://huggingface.co/transformers/model_doc/t5.html#inference) ``` from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5ForConditionalGeneration.from_pretrained('t5-small') # training input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids outputs = model(input_ids=input_ids, labels=labels) loss = outputs.loss logits = outputs.logits # inference input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you", return_tensors="pt").input_ids # Batch size 1 outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # studies have shown that owning a dog is good for you. ``` to ``` from transformers import T5Tokenizer, T5ForCondititonalGeneration tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5ForConditionalGeneration.from_pretrained('t5-small') decoder_input_ids = tokenizer('summarize: I am happy', return_tensors='pt').input_ids decoder_inputs_embeds = model.get_input_embeddings()(decoder_input_ids) input_ids = tokenizer('summarize: studies have shown that owing a dog is good for you', return_tensors='pt').input_ids output_ids = model.generate(input_ids, decoder_input_ids=decoder_input_ids, max_length=30) print(tokenizer.decode(output_ids[0], skip_special_tokens=True)) output_embeds = model.generate(input_ids, decoder_inputs_embeds=decoder_inputs_embeds, max_length=30) print(tokenizer.decode(output_ids[0], skip_special_tokens=True)) ``` returns ``` summarize: I am happy i have been a dog since I was a dog owner. i have been a dog since I was a dog owner. ``` Though the generated sentence seems weird. I could get the same result except for the front part where I converted to embedding in the first place, "summarize: I am happy". <|||||>@amy-hyunji Hi, thanks for the reply. I'd like to share some other issues, though its still far from solving the problem, but hopefully they could do some help. Apart from the former mentioned parts where it tracks whether `input_ids` exists or not, there are also parts where it uses `input_ids` to convert other parameters or uses it for processing beam scores > https://github.com/huggingface/transformers/blob/9eda0d156dcf34203e3f7ea5ad220c960fd3227b/src/transformers/generation_utils.py#L1839-L1840 or logits > https://github.com/huggingface/transformers/blob/9eda0d156dcf34203e3f7ea5ad220c960fd3227b/src/transformers/generation_utils.py#L1806 Thus I think simply ignore `input_ids` and turn to `inputs_embeds` at the model section may not work due to `input_ids` will be used for other sections. Since the line > https://github.com/huggingface/transformers/blob/9eda0d156dcf34203e3f7ea5ad220c960fd3227b/src/transformers/generation_utils.py#L1785 Where it converts the model input just before the model section, I'm trying to overwrite the section implemented by Pretrained model > https://github.com/huggingface/transformers/blob/9eda0d156dcf34203e3f7ea5ad220c960fd3227b/src/transformers/models/gpt2/modeling_gpt2.py#L982-L1008 and re-write it to convert the `input_ids` to the embeddings inside the method. 
``` class PrefixCausalGenerationModel(GPT2LMHeadModel, PrefixGenerationMixin): pass class PrefixGenerationMixin(GenerationMixin): def prepare_inputs_for_generation( self, input_ids: torch.LongTensor, past = None, **kwargs ): # re-write to convert input_ids here ``` I think this could retain `input_ids` for other processes, while it could mapped to some self-defined vectors here, without directly changing the model embeddings of base-model (In this case, GPT2). This is still in progress, though. <|||||>Wow, Thanks. It appears your solution is more clear. I'll try it myself lately.<|||||>Thanks for pointing out more issues. I do agree that when it goes to beam search, things are going to be more complicated so I started off from greedy_search. For the part about https://github.com/huggingface/transformers/blob/9eda0d156dcf34203e3f7ea5ad220c960fd3227b/src/transformers/generation_utils.py#L1806 based on my understanding, I believe there won't be any problem since by giving `decoder_inputs_embeds` to the model, we are simply expecting the model to generate the next token based on the embeddings and don't expect this to modify any part about the logits_processor which will work based on the `input_ids` that is generated by the model *based* on the given embeddings. However, this won't be an appropriate solution if you want the input embedding (decoder_inputs_embeds) to be converted into the most likely ids and return the value by `model.generate()` function. Also, I agree that overwriting `prepare_inputs_for_generration` is one of the solutions! :) <|||||>Hi there, Well, I've tried to apply the former solution on GPT-2, while it appears that it doesn't work for Casual LMs, since Casual LMs requires `input_ids` directly as the input, thus we cannot send in `inputs_embeds` and ignore `inputs_ids`, even after we modified `model()` section. As an alternative, I tried to implement the solution which changes `prepare_inputs_for_generation` method: ``` class GPT2LMHeadModel(GPT2PreTrainedModel): ... # other methods def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs): ... # lines which modifies input_ids use_embeds = kwargs.get("use_embeds", False) if (use_embeds): print("uses embeds.") # to test whether embeddings are used or not embeddings = kwargs.get("embeddings", self.get_input_embeddings()) inputs_embeds = embeddings(input_ids) ... # other operations model_inputs = { "past_key_values": past, "use_cache": kwargs.get("use_cache"), "position_ids": position_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids, } if (use_embeds): model_inputs["inputs_embeds"] = inputs_embeds else: model_inputs["input_ids"] = input_ids return model_inputs ``` And uses `generate()` method as: ``` from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('gpt2') model = AutoModelForCausalLM.from_pretrained('gpt2') a = model.generate(tokenizer.encode("Hello world", return_tensors='pt').to(model.device), max_length=30) b = model.generate(tokenizer.encode("Hello world", return_tensors='pt').to(model.device), max_length=30, use_embeds=True) tokenizer.decode(a[0]) tokenizer.decode(b[0]) ``` and get ``` ... # Many lines of "uses embeds" 'Hello world, I\'m not sure what to say.\n\n"I\'m sorry, but I\'m not sure what to say.\n\n"' 'Hello world, I\'m not sure what to say.\n\n"I\'m sorry, but I\'m not sure what to say.\n\n"' ``` as the result. 
The method `generate()` here doesn't uses inputs_embeds as an input parameter, instead it requires an embedding matrix to map the input_ids to embeds. Thus, if we're willing to adding our own embedding vectors, we could create our own embedding matrix first. ``` def __init__(self, **kwargs): ... self.embeddings = self.model.get_input_embeddings() self.prefixed_embeddings = nn.Embedding(self.tokenizer.vocab_size, self.embeddings.embedding_dim) # the scale which is defined here doesn't really matter in following calculations ... # do this before each time calling generate() method # if were intended to use self-defined input vectors for pseudo tokens def update_prefixed_embeddings(self): prefix_matrix = ... # the source for the prefix matrix self.set_prefix_embeddings(prefix_matrix) def set_prefixed_embeddings(self, pseudo_tokens_matrix): self.prefixed_embeddings.weight.data = torch.cat((self.embeddings.weight.data.detach().to(self.device), pseudo_tokens_matrix), dim=0).to(self.device) assert len(self.prefixed_embeddings.weight.data) == (self.origin_vocab_size + len(self.pseudo_tokens_list)) ``` Then, uses ``` trainer.update_prefixed_embeddings() a = trainer.model.generate(trainer.tokenizer.encode("[PROMPT1]Hello[PROMPT2] world[PROMPT3]", return_tensors='pt').to(model.device), max_length=30, use_embeds=True, embeddings=trainer.prefixed_embeddings) trainer.tokenizer.decode(a) ``` and get ``` ... # Also many lines of "uses embeds" '[PROMPT1]Hello[PROMPT2] world[PROMPT3] world, the world of the world of the world of the world of the world of the world of the world of the world' ``` Well, it's ... even weirder after appending prefixes; but at least there's some changes. It's a good start. By the way, I've also attempted to use ``` class PrefixGenerationMixin(GenerationMixin): def prepare_inputs_for_generation( class PrefixCausalGenerationModel(AutoModelForCausalLM, PrefixGenerationMixin): pass ``` to avoid directly changing source code, but it doesn't work, since the model will not goes to the overwritten method but call the original one at `transformers.models.gpt2.modeling_gpt2.prepare_inputs_for_generation`. I'm attempting to find a way on improving this, well, later, though. <|||||>Thanks for sharing! I have one question about the implementation. Is it necessary to create a new embedding matrix first if we want to use embeddings that are not in the model embeddings? I have the embeddings in another file so I thought that if we keep the embeddings in the right shape by concatenating them directly in the previous embeddings, updating the model embeddings isn't necessary. Also, have you tried getting the score with the modified codes? I tried with the code I shared previously and the results were so low. I was wondering if you had similar issues with your codes.<|||||>Thanks for the reply. As for the question, although my previous implementation created a new embedding matrix, it appears that we don't really need to create one. I did this cause I'm not sure whether uses the old embedding matrix with some additional rows during generation will influence the original embeddings or not, and I intended to prevent them from modifications. But since > https://github.com/huggingface/transformers/blob/ca2ef7dfcdf2f4f59f7935c058e3cf042596cb03/src/transformers/generation_utils.py#L639-L640 already prevent any gradients to be recorded, or one could simply freeze the parameters, there's no need to create a new embedding matrix. As for the scores, my code is still on training process. 
The code uses the web_nlg dataset (provided by the HuggingFace datasets package) and GPT-2 as the backbone model, and the current logging does not show very good losses either. There could be a lot of reasons: probably bad prompting templates, too few prompt tokens, an unfit initialization for the prefix matrix, or maybe it is just not performing well during training. (I just hope the reason is not that prompting is not suitable for generation tasks :( ) I'm attempting to use more flexible initializations or templates, but that is after this round is finished. My GPU is pretty old, so it takes a few days to train.<|||||>I'll also be working on this, so if I find any other issues that would help, I'll leave some notes here. Thanks for sharing :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This should have been solved by https://github.com/huggingface/transformers/pull/14443 I think :-)
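As a reference for later readers, here is a rough sketch of the `prepare_inputs_for_generation` override discussed in this thread. It is only a sketch under several assumptions: a GPT-2 backbone, the transformers version discussed here (~4.11, where the argument is named `past`; later versions renamed it), a custom `use_embeds` flag forwarded through `generate(...)`'s model kwargs, and that feeding the ids through the model's own (or a custom) embedding matrix at every step is what you want. It is not the solution that #14443 eventually implemented.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class EmbeddingInputGPT2(GPT2LMHeadModel):
    def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs):
        model_inputs = super().prepare_inputs_for_generation(input_ids, past=past, **kwargs)
        if kwargs.get("use_embeds", False):
            ids = model_inputs.pop("input_ids")
            # swap the ids for vectors; a custom embedding matrix could be plugged in here instead
            model_inputs["inputs_embeds"] = self.get_input_embeddings()(ids)
        return model_inputs

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = EmbeddingInputGPT2.from_pretrained("gpt2")
prompt = tokenizer("Hello world", return_tensors="pt").input_ids
out = model.generate(prompt, max_length=30, use_embeds=True)
print(tokenizer.decode(out[0]))
```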
transformers
13,916
closed
Add missing whitespace to multiline strings
# What does this PR do? When running a text generation experiment, I got the following error message: > Input length of input_ids is 104, but ``max_length`` is set to 100.This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``. The error is missing a space before `This can lead`. Checking the source, the issue turned out to be that the string was wrapped across multiple lines, but there was no whitespace within the string itself to separate the two sentences. I searched for `"\s+f?"` throughout all of `src/transformers`, and this PR contains a fix for all similar whitespace errors I could find. The good news is that most multiline strings already handled whitespace properly, but there were still a lot that didn't. My search probably missed a few, but I did not feel like being exhaustive here. I also made a couple of other error message fixes related to whitespace and punctuation, but focused on this specific whitespace issue. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? "Documentation" is the closest item to this PR, so @sgugger
10-07-2021 05:46:52
10-07-2021 05:46:52
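A small illustrative snippet of the pitfall this PR fixes (the variable values are made up): Python concatenates adjacent string literals with no separator, so the space has to be written explicitly in one of the pieces.

```python
max_length, input_length = 100, 104

broken = (
    f"Input length of input_ids is {input_length}, but ``max_length`` is set to {max_length}."
    "This can lead to unexpected behavior."
)
# -> "...set to 100.This can lead to unexpected behavior."

fixed = (
    f"Input length of input_ids is {input_length}, but ``max_length`` is set to {max_length}. "
    "This can lead to unexpected behavior."
)
# -> "...set to 100. This can lead to unexpected behavior."

print(broken)
print(fixed)
```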
transformers
13,915
closed
[trainer] memory metrics: add memory at the start report
While debugging a user report https://github.com/huggingface/transformers/issues/13616, I noticed that when we use deepspeed w/ zero3 it allocates some memory on the GPU before the Trainer's init, so this memory doesn't get reported and then I got a mysterious negative delta:
```
train_mem_gpu_alloc_delta  = -153MB
train_mem_gpu_peaked_delta = 2012MB
```
so it was very confusing.

This PR adds another 2 metrics which report memory usage at the very start of the Trainer (init), so now the report becomes:
```
before_init_mem_gpu        =  312MB
train_mem_gpu_alloc_delta  = -153MB
train_mem_gpu_peaked_delta = 2012MB
```
and it becomes obvious that some memory was allocated before the Trainer started, and the negative delta is now applied to a non-zero initial value.

For symmetry, I added the CPU memory usage as well.

Examples of the new report (everything but the memory metrics deleted):
```
$ CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --do_train --train_file tests/fixtures/sample_text.txt --fp16 true \
--output_dir output --overwrite_output_dir 1 --per_device_train_batch_size 1 --report_to none \
--skip_memory_metrics 0

***** train metrics *****
  before_init_mem_cpu        = 1945MB
  before_init_mem_gpu        =    0MB
  init_mem_cpu_alloc_delta   = 3249MB
  init_mem_cpu_peaked_delta  =  443MB
  init_mem_gpu_alloc_delta   =  487MB
  init_mem_gpu_peaked_delta  =    0MB
  train_mem_cpu_alloc_delta  =  151MB
  train_mem_cpu_peaked_delta =    0MB
  train_mem_gpu_alloc_delta  = 1431MB
  train_mem_gpu_peaked_delta =  562MB

$ PYTHONPATH=src deepspeed --num_gpus 1 examples/pytorch/language-modeling/run_clm.py \
--deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path gpt2 --do_train \
--train_file tests/fixtures/sample_text.txt --fp16 true --output_dir output \
--overwrite_output_dir 1 --per_device_train_batch_size 1 --report_to none \
--skip_memory_metrics 0

***** train metrics *****
  before_init_mem_cpu        = 5262MB
  before_init_mem_gpu        =  312MB
  init_mem_cpu_alloc_delta   =    0MB
  init_mem_cpu_peaked_delta  =    0MB
  init_mem_gpu_alloc_delta   =    0MB
  init_mem_gpu_peaked_delta  =    0MB
  train_mem_cpu_alloc_delta  = 3288MB
  train_mem_cpu_peaked_delta =  465MB
  train_mem_gpu_alloc_delta  = -153MB
  train_mem_gpu_peaked_delta = 2012MB
```
For the non-DS case, the GPU memory usage before init is usually 0 and that's reflected in the report; it is non-0 with DS.

Additionally, I prepared several other possible report layouts in case someone puzzles over the math or has a complex situation. I left those in the comments if someone needs them.

@sgugger
10-07-2021 04:40:24
10-07-2021 04:40:24
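For anyone wanting these metrics from Python rather than the CLI, a minimal sketch (the Trainer/model/dataset wiring is assumed and omitted):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",
    skip_memory_metrics=False,  # turn the *_mem_* metrics on, including the new before_init_* ones
)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# train_result = trainer.train()
# print(train_result.metrics)
```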
transformers
13,914
closed
Data collator ignores the input attention_mask
## Environment info

- `transformers` version: 4.11.3
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help

@sgugger @patrickvonplaten

## Information

The DataCollators do not respect the attention_mask passed to them. Here's the context: I was trying to train a GPT2 model where the input_ids are:
```
<PROMPT TOKEN IDS> + <COMPLETION TOKEN IDS>
```
I want to mask out the completion tokens during training, so I set the attention_mask to:
```
[1] * (no. of prompt tokens) + [0] * (no. of completion tokens)
```
The labels are the same as the input_ids, except that the prompt token IDs are replaced with -100 in order to ignore the predictions for prompt tokens in the loss and only penalize the model on predicting completion tokens. Here is a toy example of the data that would be passed to the data collator following that format:
```python
features = [
    {"input_ids": [1,2,3,4,5,6], "attention_mask": [1,1,1,1,1,0], "labels": [-100,-100,-100,-100,-100,6]},
    {"input_ids": [1,2,3], "attention_mask": [1,1,0], "labels": [-100,-100,4]},
]
```
1-5 in the first example are the prompt token IDs and 6 is the label token ID. 1 and 2 in the second example are the prompt token IDs and 3 is the label token ID.

I would expect that DataCollatorForSeq2Seq(features) would return the following attention_mask:
```
[[1,1,1,1,1,0],
 [1,1,0,0,0,0]]
```
i.e., padding the second example to make it the same length as the first example. Instead it returns:
```
[[1,1,1,1,1,0],
 [1,1,1,0,0,0]]
```
Notice that attention_mask[1,2] is 1 instead of 0 (3 ones instead of 2 ones at the start of the 2nd example), which unmasks the completion token for the 2nd example.

## To reproduce

See this colab: https://colab.research.google.com/drive/1DcvH01SyfIPBwlzqiUCPaSjWKvkAoD2E#scrollTo=wHAfa4t5sZHg

I've also copied and pasted the relevant code below:
```python
import transformers

tokenizer = transformers.GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.pad_token = tokenizer.unk_token

features = [
    {"input_ids": [1,2,3,4,5,6], "attention_mask": [1,1,1,1,1,0], "labels": [-100,-100,-100,-100,-100,6]},
    {"input_ids": [1,2,3], "attention_mask": [1,1,0], "labels": [-100,-100,4]},
]

output = transformers.DataCollatorForSeq2Seq(tokenizer)(features)
x = output['attention_mask'][1, :].tolist()
print(x)  # [1, 1, 1, 0, 0, 0]
assert x == [1,1,0] + [0] * 3
```
10-06-2021 22:43:22
10-06-2021 22:43:22
This is an unexpected behavior that takes its root in the `tokenizer.pad` method (you can do `tokenizer.pad` on your features and see it re-creates the attention mask). This is because all models expect the attention mask to indicate where padding has been applied. In your case, you are training a model with unusual attentions masks (in the sense that it does not only indicate the pad tokens) so you should redefine your own data collators and avoid using the `tokenizer.pad` method.<|||||>Thanks so much for the quick reply! I've written a custom data collator that solves my problem, but I want to ask one more related question before closing the issue. Is there any way to fine-tune a GPT-style model on a conditional generation task that does not involve defining the attention_mask as I have above? If there's some more conventional way of doing that, then I can switch to that approach and use the pre-defined DataCollators. Otherwise, it seems strange to call this way of defining the attention mask "unusual" (though I understand you mean that in a particular sense), because in this case it would be the "usual" way to fine-tune a GPT-style model on a conditional generation task. I'll also note that the Hugging Face glossary (https://huggingface.co/transformers/glossary.html#attention-mask) defines an attention_mask as indicating "which tokens should be attended to, and which should not" rather than indicating which tokens are padding and which are not. Thanks in advance for your help with this issue!<|||||>What you say make sense, and I think we should we fix the problem at its souce, e.g. the `pad` method of the `tokenizer`, to honor the `attention_mask` passed. I will look into this tomorrow.<|||||>After some more thought, I don't think I need to mask out the completion during training. The GPT model just predicts the next word based on the previous ones. So I could drop the attention_mask from my example above, the default DataCollator would work fine and that would be the standard way of fine-tuning a GPT model on a conditional text generation task. **Is this correct?** I couldn't find an example of fine-tuning using an autoregressive model in Hugging Face's [collection](https://github.com/huggingface/transformers/blob/master/examples/pytorch) of examples. I found an example of conditional text generation using an already fine-tuned autoregressive model [here](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-generation/run_generation.py) and an example of fine-tuning using sequence to sequence models [here](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py). All that being said, I still think honoring the attention mask is a good idea, but I wanted to update this thread, because if the above is correct, then the attention_mask I provided in my first comment is indeed unusual.<|||||>@sgugger Could you answer my last question in the thread just to tie up loose ends? Thanks so much for resolving the issue with the attention_mask!<|||||>I think you can drop the custom attention mask in this case, yes.
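For readers landing here with the same need, a sketch of the kind of custom collator suggested in the reply above. It is not the library's `DataCollatorForSeq2Seq`; the padding values (pad id from the tokenizer, 0 for the mask, -100 for labels) are assumptions matching the example in this thread:

```python
import torch

def collate_keep_attention_mask(features, pad_token_id, label_pad_token_id=-100):
    max_len = max(len(f["input_ids"]) for f in features)
    batch = {"input_ids": [], "attention_mask": [], "labels": []}
    for f in features:
        pad = max_len - len(f["input_ids"])
        batch["input_ids"].append(f["input_ids"] + [pad_token_id] * pad)
        # keep the user-supplied mask and only extend it with zeros for the padding
        batch["attention_mask"].append(f["attention_mask"] + [0] * pad)
        batch["labels"].append(f["labels"] + [label_pad_token_id] * pad)
    return {k: torch.tensor(v) for k, v in batch.items()}

# usage with Trainer (sketch):
# trainer = Trainer(..., data_collator=lambda feats: collate_keep_attention_mask(feats, tokenizer.pad_token_id))
```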
transformers
13,913
closed
Getting out of vocabulary tokens for pretrained models.
Is there a way to get the out-of-vocabulary tokens for a pretrained tokenizer? For example, to add them to the tokenizer later on?
10-06-2021 21:24:58
10-06-2021 21:24:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
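One possible approach, as an illustration rather than a built-in API: compare your own word list against the tokenizer's vocabulary, then register the missing tokens and resize the model embeddings. The checkpoint and the corpus words here are made up:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

corpus_words = ["huggingface", "tokenizers", "deepspeed"]  # hypothetical word list from your data
vocab = tokenizer.get_vocab()
oov_words = [w for w in corpus_words if w not in vocab]
print(oov_words)

num_added = tokenizer.add_tokens(oov_words)
if num_added:
    model.resize_token_embeddings(len(tokenizer))
```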
transformers
13,912
closed
`from_pretrained` doesn't validate its kwargs
splitting off from https://github.com/huggingface/transformers/issues/13895

```
AutoModel.from_pretrained(mname, reviison=ckpt)  # dyslexic typist
```

Throws no exception and it definitely should, since this could be unnoticed for a long time and meanwhile the user gets incorrect behavior.

Expected behavior: There should be an assert if a user passes an argument that is not expected by the function.

@LysandreJik, @sgugger
10-06-2021 20:13:29
10-06-2021 20:13:29
I believe this is an issue with a specific model class: ``` from transformers import AutoModel model = AutoModel.from_pretrained("bert-base-cased", reviison="main") ``` gives me ``` TypeError: __init__() got an unexpected keyword argument 'reviison' ``` Which checkpoint are you using?<|||||>Digging more, I think you're using a GPT-like model, and this is caused by [this line](https://github.com/huggingface/transformers/blob/5be59a364961a8e2fc986f1276cba977db87512a/src/transformers/models/gpt2/modeling_gpt2.py#L456) or the equivalent in other GPT model classes. Not sure what it achieves but it definitely causes the bug you are seeing. **EDIT** Digging even more, the subclasses are actually not passing any kwargs, so I don't get the issue with GPT checkpoints. Do you have a checkpoint I can use to reproduce the issue?<|||||>> Which checkpoint are you using? gpt2 indeed. > Do you have a checkpoint I can use to reproduce the issue? https://huggingface.co/bigscience/gpt2-1b3-en<|||||>I don't seem to have access to that model, as it gives me an 404 error. `"gpt2"` uses the same class, `GPT2LMHeadModel`, and does not produce the bug on master. It also looks like that model config was not saved using the `from_pretrained` method since it's not formatted properly, I don't know if that has something to do with the issue at hand. Could you upload a smaller, public model that reproduces the bug ?<|||||>I thought you had a god mode on the hub, please re-try now? the model was generated by the conversion script that runs https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py if it generates an invalid config, we should fix that script then.<|||||>I was missing the `use_atuh_token=True` to get the download going, but the following sample does not give me the bug: ``` from transformers import AutoModel model = AutoModel.from_pretrained("bigscience/gpt2-1b3-en", reviison="main", use_auth_token=True) ``` gives me the error: ``` TypeError: __init__() got an unexpected keyword argument 'reviison' ```<|||||>indeed, let me try with the original model that was reported to have this issue. Also you mentioned invalid config please see my updated comment above.<|||||>This [line](https://github.com/huggingface/transformers/blob/5be59a364961a8e2fc986f1276cba977db87512a/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py#L303) in the script does not save the config in the same format as the others. I don't know why they don't use the method of the config to properly save it.<|||||>Ha, the typo in your comment was a perfect use case: ``` python -c "from transformers import AutoTokenizer, AutoModel; mname = 'bigscience/gpt2-1b3-en'; \ AutoModel.from_pretrained(mname, use_auht_token=True)" ``` I suppose the validation happens much later in the game. The above should assert, instead it fails to access the model. And my apologies for not posting a reproducible example from the get going - my bad! I was trying to report on someone else's behalf. <|||||>> This line in the script does not save the config in the same format as the others. I don't know why they don't use the method of the config to properly save it. I will see what I can do about fixing that. It's probably a really old script.<|||||>> The above should assert, instead it fails to access the model. 
I'm not sure if this what you meant, but the above cannot return an error it received a wrong kwarg as there is an error coming before we get to the end of using all the kwargs. We can't validate the kwargs at that level since we don't even know which model class to use, so we don't know which kwargs it will accept. So sadly there is nothing we can do about misspelled `use_auth_token=True` or other arguments that result in the urls of the model being wrong.<|||||>So here we have a bit of a chicken and egg problem, the args that are needed to get to the hub aren't validated until after the files are **successfully** fetched. I understand. So it'd be the same issue with misspelled `revision` - it'd ignore it, and give the user the `main` branch instead. And then fail if there is actually a model under `main`. It was failing for us in a different way since we didn't have anything under `main` (different repo: `bigscience/tr3d-1B3-oscar-checkpoints`). All checkpoints were in branches. That explains why it was failing then and not reporting a misspelled kwargs. That's quite the conundrum for the 2 kwargs then: `revision` and `use_auth_token` (not sure if there are other critical ones). <|||||>I suppose our use case is probably rare and it's probably ok to leave it as is. I think the only issue is `use_auth_token` It'd probably help the user if the hub were to report 403 (access denied), rather than: ``` requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/bigscience/gpt2-1b3-en/resolve/main/config.json ``` then the user is likely to review their config and discover they are either missing `use_auth_token` or it is misspelled. Moreover, if the client side got 403 - it could have suggested to the user to pass `use_auth_token=True`.<|||||>on the server side, 403 rather than 404 leaks info because you can "discover" repo names for instance. That's why 404 is commonly used (on GitHub, etc)<|||||>Understood. Thank you for this insight, Julien. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
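For completeness, the correctly spelled call that the thread converges on looks like this (the repo and branch names are taken from the thread and may require access rights):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "bigscience/gpt2-1b3-en",
    revision="main",        # branch name, tag, or commit hash
    use_auth_token=True,    # required for private/gated repositories
)
```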
transformers
13,911
closed
[Trainer] Fix nan-loss condition
Fixes a change in the nan-loss handling logic introduced by #13896. With `is_torch_tpu_available()==False` and `(torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))==False`, the `else:` condition was never reached, so the training loss was always `0.0`.
10-06-2021 16:30:46
10-06-2021 16:30:46
transformers
13,910
closed
tensorflow LED global_attention_mask shape mismatch error
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: Linux-5.4.0-48-generic-x86_64-with-Ubuntu-20.04-focal - Python version: 3.6.13 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten @Rocketknight1 ## Information Model I am using (Bert, XLNet ...): 'allenai/led-base-16384' via AutoModelForSeq2SeqLM The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the TensorFlow version of a simple test script. 
```python from transformers import TFAutoModelForSeq2SeqLM import tensorflow as tf import numpy as np @tf.function def test_gradient(inputs): with tf.GradientTape() as tape: led_output = led.call( input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], labels=inputs['labels'], global_attention_mask=inputs['global_attention_mask'] if 'global_attention_mask' in inputs else None, training=True, use_cache=False, return_dict=True, output_hidden_states=True) grads = tape.gradient(led_output['loss'], led.trainable_variables) return led_output @tf.function def test_model(inputs): led_output = led.call( input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], labels=inputs['labels'], global_attention_mask=inputs['global_attention_mask'] if 'global_attention_mask' in inputs else None, training=True, use_cache=False, return_dict=True, output_hidden_states=True) return led_output preloaded_name = 'allenai/led-base-16384' led = TFAutoModelForSeq2SeqLM.from_pretrained(preloaded_name, from_pt=True) """ In this example, we have the following shapes: input_length --> 1800 output_length --> 70 """ inputs = np.load('inputs_with_mask.npy', allow_pickle=True).item() print('Inputs...') for key, value in inputs.items(): print('Key: {0} - Value: {1}'.format(key, value.shape)) """ Prints: input_ids - Value: (1, 1800) attention_mask - Value: (1, 1800) global_attention_mask - Value: (1, 1800) labels - Value: (1, 70) """ # Test with gradient tape # led_output = test_gradient(inputs=inputs) # Test without gradient tape led_output = test_model(inputs=inputs) ``` Running above scripts throws the following error: ```python Traceback (most recent call last): File "/home/fruggeri/Repositories/arg-chatbot/runnables/tests/test_led_tf.py", line 47, in <module> led_output = test_model(inputs=inputs) File "/home/fruggeri/tf2.3/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__ result = self._call(*args, **kwds) File "/home/fruggeri/tf2.3/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 846, in _call return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access File "/home/fruggeri/tf2.3/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call cancellation_manager=cancellation_manager) File "/home/fruggeri/tf2.3/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "/home/fruggeri/tf2.3/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 550, in call ctx=ctx) File "/home/fruggeri/tf2.3/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute inputs, attrs, num_outputs) tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [1,2048,12,1045] vs. [1,2048,12,1025] [[node led/encoder/layers.0/self_attn/longformer_self_attn/dropout_1/dropout/Mul_1 (defined at /tf2.3/lib/python3.6/site-packages/transformers/models/led/modeling_tf_led.py:303) ]] [Op:__inference_test_model_27392] Function call stack: test_model ``` 2. Run the torch version of the same script. ```python from transformers import AutoModelForSeq2SeqLM import numpy as np import torch preloaded_name = 'allenai/led-base-16384' led = AutoModelForSeq2SeqLM.from_pretrained(preloaded_name) """ #NOTE: same inputs as in the TensorFlow example! 
In this example, we have the following shapes: input_length --> 1800 output_length --> 70 """ inputs = np.load('inputs_with_mask.npy', allow_pickle=True).item() inputs = {key: torch.Tensor(value.numpy()).long() for key, value in inputs.items()} print('Inputs...') for key, value in inputs.items(): print('Key: {0} - Value: {1}'.format(key, value.shape)) """ Prints: input_ids - Value: (1, 1800) attention_mask - Value: (1, 1800) global_attention_mask - Value: (1, 1800) labels - Value: (1, 70) """ # Uncomment to set model training mode # led.train() led_output = led( input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], labels=inputs['labels'], global_attention_mask=inputs['global_attention_mask'] if 'global_attention_mask' in inputs else None, use_cache=False, return_dict=True, output_hidden_states=True) # Uncomment to test with gradient # led_output['loss'].backward() # optim = torch.optim.SGD(led.parameters(), lr=1e-2, momentum=0.9) # optim.step() ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Running the TensorFlow script (step 1) throws the mentioned error regarding the self attention block. Interestingly, this error does not occur if we remove ```@tf.function``` from ```def test_model(input):``` function. On the other hand, the PyTorch version does not present any kind of error, even when considering backward pass and gradient computation. For clarity: both ```attention_mask``` and ```global_attention_mask``` are binary masks (integers). Am I doing something wrong in defining the ```global_attention_mask```?. Testing the TensorFlow script (step 1) with ```global_attention_mask=None``` works smoothly without any error, even when doing gradient backpropagation. I've tried changing the ```attention_window_size``` parameter of the model without any particular success (e.g. matching it with my input size), the error is still thrown.
10-06-2021 15:53:35
10-06-2021 15:53:35
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @federicoruggeri, If I remember correctly, TF LED cannot be compiled with `output_attention_mask=True`. This is a difficult bug and I don't think we'll be able to allocate time soon to solve this, I'm afraid :-/ If you'd be willing to open a PR and dig deeper into this, I'm happy to help however!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,909
closed
#12789 Replace assert statements with exceptions
# What does this PR do? Replaces the assertions in utils_qa.py file at examples/tensorflow/question-answering, examples/flax/question-answering and examples/pytorch/question-answering (bound by strict copy check) with ValueError and EnvironmentError exceptions as necessary. Also fixed a typo changing predicitions -> predictions in postprocess_qa_predictions_with_beam_search function. Contributes towards fixing issue [#12789 ](https://github.com/huggingface/transformers/issues/12789) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-06-2021 15:46:43
10-06-2021 15:46:43
Can someone please take a look at my PR? The error log mentions that a config.json file is missing in a tmp directory and then goes on to throw 2 assertion errors. But the files where I have made my changes are not mentioned in the logs anywhere, so I'm having trouble understanding why my changes may have caused a missing config file. Sorry about the force-push, it was required for a little cleanup.<|||||>Thanks again!
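An illustrative before/after for the kind of change this PR makes; the exact assertion text in utils_qa.py may differ from this hypothetical one:

```python
def check_predictions(predictions):
    # before:
    # assert len(predictions) == 2, "`predictions` should be a tuple with two elements."
    # after:
    if len(predictions) != 2:
        raise ValueError(
            "`predictions` should be a tuple with two elements (start_logits, end_logits)."
        )
```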
transformers
13,908
closed
How to export to ONNX distilbert with question-answering head
I am having a little bit of trouble exporting `distilbert` with a `question-answering` head to ONNX.

### Code

I convert to ONNX according to the [docs](https://huggingface.co/transformers/serialization.html#converting-an-onnx-model-using-the-transformers-onnx-package):
```shell
# Convert to ONNX
!python -m transformers.onnx --model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es' --feature question-answering 'onnx_exported'
```
But `--feature question-answering` does not seem to be an accepted feature. Is this expected?

### Versions
```
transformers 4.11.2
Python 3.7.12
```

### Inquiry

The reason I am doing this is to run the pipeline with `onnx` and get extra juice out of it (performance). Is there a better way than this manual export approach?
10-06-2021 15:28:10
10-06-2021 15:28:10
I don't think the `question-answering` head is implemented, cc @michaelbenayoun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
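Once a model has been exported with a supported feature, a rough sketch of running it with onnxruntime looks like the following. The paths, the question/context pair, and the available outputs are assumptions; with the `default` feature you get hidden states rather than QA start/end logits:

```python
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es"
)
session = InferenceSession("onnx_exported/model.onnx")

inputs = tokenizer(
    "¿Quién escribió el Quijote?",
    "Miguel de Cervantes escribió el Quijote.",
    return_tensors="np",
)
# run all outputs of the exported graph on the tokenized inputs
outputs = session.run(None, dict(inputs))
print([o.shape for o in outputs])
```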
transformers
13,907
closed
Raise exceptions instead of asserts in utils/download_glue_data
# What does this PR do? The PR addresses issue #12789. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-06-2021 15:27:44
10-06-2021 15:27:44
transformers
13,906
closed
Wrong isinstance for AutoTokenizers with unspecified tokenizer_class
## Environment info

<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version: 4.5.0
- Platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

### Who can help

@LysandreJik

## Information

When using a model like https://huggingface.co/bigscience/tr3e-1B3-c4-checkpoints/tree/global_step117000/global_step117000 (since subfolder doesn't work this needs to be cloned), which is a GPT2 model using the T5 tokenizer, and with tokenizer_class=null in the config, instantiating an AutoTokenizer after copying T5 tokenizer files into the directory causes isinstance to show it as a GPT2Tokenizer instead.

## To reproduce

Steps to reproduce the behavior:

1. clone the model and checkout the branch
2. copy T5 tokenizer files into model directory
3. call `AutoTokenizer.from_pretrained` using that directory
4. look at `isinstance(tok, transformers.GPT2TokenizerFast)` (returns true) and `isinstance(tok, transformers.T5TokenizerFast)` (returns false)

I think what it's doing is using the tokenizer_class field to decide what class to pretend to be, and silently defaulting to the model type default if it's null. This has created some problems with some code I have that checks what tokenizer is being used.
10-06-2021 14:52:20
10-06-2021 14:52:20
Hi @leogao2, thanks for opening an issue! You're correct in your analysis. It first searches for `tokenizer_class`, and if not found, defaults to the tokenizer that comes with the model. The `tokenizer_class` parameter is here for exactly that purpose: to have the `AutoTokenizer` understand that it shouldn't load the default tokenizer, but that it should instead load the tokenizer overridden by the user. Would it be possible to put the `T5Tokenizer` value in the `tokenizer_class` of the model's configuration file?<|||||>> Hi @leogao2, thanks for opening an issue! > > You're correct in your analysis. It first searches for `tokenizer_class`, and if not found, defaults to the tokenizer that comes with the model. Should the search pattern be instead: 1. `config.json`'s `tokenizer_class` 2. check tokenizer files in the folder of the model 3. fallback to model's default tokenizer > The `tokenizer_class` parameter is here for exactly that purpose: to have the `AutoTokenizer` understand that it shouldn't load the default tokenizer, but that it should instead load the tokenizer overridden by the user. > > Would it be possible to put the `T5Tokenizer` value in the `tokenizer_class` of the model's configuration file? We can. We just need to make the Megatron to HF conversion script to become aware of the tokenizer used in training. We are just trying to figure out whether having the correct tokenizer files in the model's folder should be enough or not. Either way works for us. <|||||>That's an interesting question! I think it isn't straightforward. Minus a few features that don't exist in the slow tokenizers, we'd like to have identical behavior between the slow and the fast tokenizers. For slow tokenizers, a lot of different tokenizers actually use the same files: the GPT-2 and RoBERTa tokenizers use the same underlying algorithm, and use the same files, so which one would you load given the files? The same can be said for BERT, DistilBERT, SqueezeBERT, etc. For the fast tokenizers, it isn't straightforward either. The fast tokenizer only requires a single file: the `tokenizer.json` file, which contains the configuration of all components that make up the tokenizer, as well as its vocabulary. However, it isn't aware of what class it should load itself in, as it comes from the `tokenizers` library which doesn't have the notion of a `GPT2Tokenizer` or a `T5Tokenizer`. When you load a `tokenizer.json` file in a tokenizer class, you're essentially creating the tokenizer from everything contained in the `tokenizer.json` file, and the class adds a few sensible defaults, usually linked to the special tokens (unknown, padding, etc). You can try this out yourself by loading the `tokenizer.json` file of the `t5-small` checkpoint in both the `GPT2TokenizerFast` and `T5TokenizerFast`: ```py >>> from transformers import T5TokenizerFast, GPT2TokenizerFast >>> t5 = T5TokenizerFast.from_pretrained("<path_to_folder_containing_file>") file <path_to_folder_containing_file>/config.json not found >>> gpt2 = GPT2TokenizerFast.from_pretrained("<path_to_folder_containing_file>") file <path_to_folder_containing_file>/config.json not found Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. 
``` If you encode with the two tokenizers, you should see the same result, as it's the same algorithm behind the scenes: ```py >>> t5("Hey") {'input_ids': [9459, 1], 'attention_mask': [1, 1]} >>> gpt2("Hey") {'input_ids': [9459, 1], 'attention_mask': [1, 1]} ``` However, if you check the defaults of the tokenizer, these should be different: ```py >>> gpt2.unk_token '<|endoftext|>' >>> t5.unk_token '<unk>' ``` --- For the issue you're facing, I would suggest putting the expected class of your tokenizer in the `config.json` or in the `tokenizer_config.json`, which is the configuration of the tokenizer. Both configurations can set this attribute, and the `AutoTokenizer` will start first by checking the `tokenizer_config.json` file for the `tokenizer_class` attribute, before continuing with the `config.json` file, to finally default to the tokenizer linked to the model specified in the `config.json`. Erratum: the ability to infer the tokenizer class from the `tokenizer_config.json` was introduced in version v4.8.0. I'm seeing in your issue description that you're using `v4.5.0`, so I would recommend sticking to `config.json` if you're planning on staying on that version.<|||||>Wow, perhaps this monumental study should be saved in the docs, as it's quite the unravelling. Thank you for that, Lysandre. So based on your recommendation it sounds like **the conversion script should populate `tokenizer_class` in the `config.json`.** I will add that. I think it's safe to close this now. @leogao2, if you feel there is something else to discuss here please don't hesitate to reopen.
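A minimal sketch of the workaround recommended in the thread above: writing `tokenizer_class` into the checkpoint's `config.json` so that `AutoTokenizer` resolves to the T5 classes. The local path is an assumption, and it assumes the T5 tokenizer files have already been copied into the folder as described in the issue.
```python
import json
from transformers import AutoTokenizer, T5TokenizerFast

path = "./tr3e-1B3-c4-checkpoints"  # assumed local clone of the checkpoint branch

# Add the tokenizer_class override to the model config
with open(f"{path}/config.json") as f:
    config = json.load(f)
config["tokenizer_class"] = "T5Tokenizer"
with open(f"{path}/config.json", "w") as f:
    json.dump(config, f, indent=2)

# AutoTokenizer now instantiates the T5 tokenizer instead of the GPT-2 default
tok = AutoTokenizer.from_pretrained(path)
print(isinstance(tok, T5TokenizerFast))  # True
```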
transformers
13,905
closed
`min_length` in `generate` method
Hi! I have a question about the `generate` method. The model's original eos_token is `</s>`, but I designated `<sep>` as the eos_token when calling `generate`. When min_length is specified, I want it to enforce the minimum length before `<sep>` is generated. What can I do?
10-06-2021 14:45:07
10-06-2021 14:45:07
Hi! Could you please post this question on the [forum](https://discuss.huggingface.co/)? We use issues for bug reports and feature requests, for such general questions use the forum. Thank you!<|||||>Ah, OK! Thank you!
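For reference, a hedged sketch of how this is usually handled: `generate` accepts an `eos_token_id` override, and `min_length` suppresses that id until the minimum length is reached. The checkpoint name and the custom `<sep>` token below are illustrative assumptions.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

tokenizer.add_tokens(["<sep>"])             # assumed custom separator token
model.resize_token_embeddings(len(tokenizer))
sep_id = tokenizer.convert_tokens_to_ids("<sep>")

inputs = tokenizer("translate English to German: Hello", return_tensors="pt")
out = model.generate(
    **inputs,
    eos_token_id=sep_id,  # generation stops at <sep> ...
    min_length=10,        # ... but <sep> is suppressed before 10 tokens
    max_length=40,
)
print(tokenizer.decode(out[0]))
```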
transformers
13,904
closed
Error: Attribute does not exist
https://github.com/huggingface/transformers/blob/77770ec79883343d32051cfb6a04f64523cd8df1/src/transformers/models/auto/auto_factory.py#L559 shouldn't this be _model_mapping?
10-06-2021 14:08:04
10-06-2021 14:08:04
Ah, very possibly! cc @sgugger <|||||>Yes, that's a typo. Do you want to suggest a PR to fix it, since you found it?<|||||>Closed by #13942 <|||||>I'm seeing that this has already been fixed. Thank you, this is helping me a lot! For when is the new release scheduled?
transformers
13,903
closed
mBART50 finetuned many-to-many model failed to generate translation
@patrickvonplaten Hello, I downloaded mBART50 finetuned many-to-many model. When generating translations, it just repeats the same word over and over again. This problem is also reported in https://github.com/huggingface/transformers/issues/7060#issuecomment-765648270. I used mBART50 finetuned many-to-many model to do the ar_AR-en_XX translation on the IWSLT17 ar-en test set. No bug occurred during the generation. Also, I have done the multilingual fine-tuning for mBART25 successfully, so I suppose I did the preprocessing correctly. The file that mBART50 fine-tuned N-N generated is put here: https://lotus.kuee.kyoto-u.ac.jp/~zhuoyuanmao/ar_AR-en_XX
10-06-2021 12:31:11
10-06-2021 12:31:11
sorry, I will go to fairseq to ask this question
transformers
13,902
closed
Best model not loaded in Trainer.hyperparameter_search
Hi, I am inspecting the `hyperparameter_search` method of the `Trainer` class. It does not ignore the `patience` argument; however, it does ignore `load_best_model_at_end`. This is a bug in my view, since `patience` should be ignored when `load_best_model_at_end` is unavailable. Attaching an example for `patience == 1`, `load_best_model_at_end == True` and `metric_for_best_model == "accuracy"`: <img width="579" alt="Screenshot 2021-10-06 at 15 04 04" src="https://user-images.githubusercontent.com/36672861/136198585-a32588b6-ec1c-465f-b47b-5a2249bdbae2.png"> After this the training part is over; however, the reported final value is not the best one: <img width="313" alt="Screenshot 2021-10-06 at 15 05 44" src="https://user-images.githubusercontent.com/36672861/136198804-49b8ba60-1876-48b6-9c2b-0dad5ae58b15.png"> I can make a merge request to solve the issue if necessary.
10-06-2021 12:06:10
10-06-2021 12:06:10
Hi @Aktsvigun, would you mind sharing your notebook or py file? I think it can be fixed without a PR.<|||||>@pratikchhapolika sure, please find the notebook attached. In this notebook, have a look at run 17 (penultimate cell, penultimate run). On the first epoch, the accuracy equals 0.45, yet the "best run" has an accuracy of 0.44. [hyp_trainer_example.txt](https://github.com/huggingface/transformers/files/7295195/hyp_trainer_example.txt) Github does not allow me to attach a .ipynb file, hence I changed the extension to .txt. To open it, change its extension from .txt back to .ipynb.<|||||>Maybe of interest to @sgugger <|||||>Good afternoon! @sgugger any updates on this?<|||||>Yes, the hyperparameter search framework ignores the `load_best_model_at_end` argument, and this is not something that can be changed as far as I can tell since they do not accept such an argument for the reporting. `load_best_model_at_end` should just not be used with HP search; it's meant for standard trainings only.<|||||>See it, thanks! Closing then.
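Following the maintainer's conclusion above, a minimal sketch of running the search without `load_best_model_at_end`; the `model_init`, datasets and `compute_metrics` pieces are assumed to exist in the user's script.
```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="hp_search",
    evaluation_strategy="epoch",
    metric_for_best_model="accuracy",
    # note: no load_best_model_at_end here, it is not meant for HP search
)
trainer = Trainer(
    model_init=model_init,            # assumed: returns a fresh model per trial
    args=args,
    train_dataset=train_dataset,      # assumed
    eval_dataset=eval_dataset,        # assumed
    compute_metrics=compute_metrics,  # assumed: returns {"accuracy": ...}
)
best_run = trainer.hyperparameter_search(direction="maximize", n_trials=10)
print(best_run.hyperparameters)
```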
transformers
13,901
closed
Replace assert statements with exceptions (#13871)
# What does this PR do? Replaces the assertions in generation_beam_search.py with ValueError exceptions. I added an error message for the first one that did not exist before, feel free to change it. Contributes towards fixing issue #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-06-2021 10:36:06
10-06-2021 10:36:06
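For readers following the linked issue, the pattern applied throughout these assert-replacement PRs looks roughly like the following; the condition and message are illustrative placeholders, not lines quoted from the diff.
```python
# Before: a failed check raises a bare AssertionError (and is skipped under python -O)
# assert num_beams > 1, "`num_beams` has to be an integer strictly greater than 1."

# After: the same check raises an informative, catchable exception
num_beams = 1  # example value that should be rejected
if num_beams <= 1:
    raise ValueError(
        f"`num_beams` has to be an integer strictly greater than 1, but is {num_beams}."
    )
```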
transformers
13,900
closed
Context managers
# What does this PR do? This PR adds a `ContextManagers` class that wraps one, multiple or no context managers and acts as a single context manager as discussed with @sgugger and @stas00. This should reduce code duplications where the context is conditional e.g. can only be applied if a framework is used or flag is set. ## Example The following example should illustrate how the `ContextManagers` class works: ```Python import contextlib @contextlib.contextmanager def context_en(): print('Welcome!') yield print('Bye!') @contextlib.contextmanager def context_fr(): print('Bonjour!') yield print('Au revoir!') ``` We can then either specify no, one, or more contexts: ```Python with ContextManagers([]): print('Transformers are awesome!') print() with ContextManagers([context_en()]): print('Transformers are awesome!') print() with ContextManagers([context_en(), context_fr()]): print('Transformers are awesome!') ``` The context managers will be nested in the `ContextManagers` ```Python >>>Transformers are awesome! >>> >>>Welcome! >>>Transformers are awesome! >>>Bye! >>> >>>Welcome! >>>Bonjour! >>>Transformers are awesome! >>>Au revoir! >>>Bye! ``` ## Use cases With this class the following code snippets can be rewritten: https://github.com/huggingface/transformers/blob/aea7c5b0c8b8d0e03dea2046599f09e16357070f/src/transformers/modeling_utils.py#L767-L773 ```Python contexts = [] if is_deepspeed_zero3_enabled(): import deepspeed contexts.append(deepspeed.zero.GatheredParameters(old_embeddings.weight, modifier_rank=None)) with ContextManagers(contexts): old_num_tokens, old_embedding_dim = old_embeddings.weight.size() ``` Contexts with additional conditional statements are probably best dealt with like that: https://github.com/huggingface/transformers/blob/aea7c5b0c8b8d0e03dea2046599f09e16357070f/src/transformers/modeling_utils.py#L796-L805 ```Python contexts = [] if is_deepspeed_zero3_enabled(): import deepspeed contexts.append(deepspeed.zero.GatheredParameters(old_embeddings.weight, modifier_rank=0)) with ContextManagers(contexts): if not is_deepspeed_zero3_enabled() or torch.distributed.get_rank() == 0: new_embeddings.weight.data[:n, :] = old_embeddings.weight.data[:n, :] ``` Another place where code duplication is even more severe is in upcasting added in #13573: https://github.com/huggingface/transformers/blob/aea7c5b0c8b8d0e03dea2046599f09e16357070f/src/transformers/models/gpt2/modeling_gpt2.py#L243-L251 What do you think about this implementation?
10-06-2021 10:26:17
10-06-2021 10:26:17
Hmm, the PR/implementation itself is awesome. thank you, @lvwerra Looking at before and after examples I'm concerned with whether this doesn't make readability/easy-of-understanding worse than before. Perhaps this appears to be that way since I have been staring at the old way for a long time and the new code looks unfamiliar? I.e. is it the case of habituation bias? Thoughts?<|||||>I think the better example is the code mentioned last, where there is a duplication of three lines. For a duplication of just one line, the benefit is not super visible indeed.<|||||>I'm not at all sure the number of duplicated lines is the issue. With the way things are now the reader can instantly choose the relevant branch using the in-head processor and flow through as they read the code. With the addition of the context manager it is far more difficult to figure out what's going on. And if additional conditions are added as in the 2nd example in OP that makes the code even more difficult to follow. IMHO, this current line of work is mainly to make the maintainability better - e.g. to prevent cases where we fixed one copy of a branch and forgot the other - especially since they aren't even aligned the same. Would the following approach improve maintainability while keeping the readability easy? ``` def run(): q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(-1, dk, k_seq_len) attn_weights = torch.baddbmm(attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor) attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len) if is_amp_available: with autocast(enabled=False): run() else: run() ``` before: ``` if is_amp_available: with autocast(enabled=False): q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(-1, dk, k_seq_len) attn_weights = torch.baddbmm(attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor) attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len) else: q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(-1, dk, k_seq_len) attn_weights = torch.baddbmm(attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor) attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len) ```<|||||>Not sure what's blocking this PR from getting merged @lvwerra ? It's just a tool that people are free to use or not and I personally would really like to use this in the Trainer to simplify all the code using autocast block in an if statement. The particular ```py if is_amp_available: with autocast(enabled=False): some_code else: same_code ``` is present so many that I will create a `autocast_if_amp_is_available` contextmanager using your tools to simply do: ```py with autocast_if_amp_is_available() some_code ```
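For readers skimming the thread, a self-contained sketch of how such a wrapper can be built and then used for the conditional-autocast helper mentioned in the last comment. The class name matches the PR, but this particular implementation (built on `contextlib.ExitStack`) and the helper function are illustrative assumptions, not the merged code.
```python
from contextlib import ExitStack

class ContextManagers:
    """Wrap zero or more context managers and enter/exit them as one."""

    def __init__(self, context_managers):
        self.context_managers = context_managers
        self.stack = ExitStack()

    def __enter__(self):
        for cm in self.context_managers:
            self.stack.enter_context(cm)

    def __exit__(self, *exc):
        self.stack.__exit__(*exc)

def autocast_if_amp_is_available():
    # Hypothetical helper: only adds the autocast context when AMP exists.
    contexts = []
    try:
        from torch.cuda.amp import autocast
        contexts.append(autocast(enabled=False))
    except ImportError:
        pass
    return ContextManagers(contexts)

with autocast_if_amp_is_available():
    print("runs with autocast disabled when AMP is available, plainly otherwise")
```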
transformers
13,899
closed
The Pegasus model generator function ignores the temperature parameter.
Loading the model using the from_pretrained method, using the 'tuner007/pegasus_summarizer' weights, and invoking the generate method with different temperatures results in always the same output text being generated regardless of the temperature. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.2 - Platform: Linux-5.4.0-1055-aws-x86_64-with-debian-buster-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.1+cu102 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Models: Pegasus model (using the torch implementation) loading the weights from 'tuner007/pegasus_summarizer' ## To reproduce Steps to reproduce the behavior: 1. Load the model using the from_pretrained method 2. Invoke the generate method with different temperatures. ## Expected behavior Get different output texts <!-- A clear and concise description of what you would expect to happen. -->
10-06-2021 04:29:14
10-06-2021 04:29:14
Could you post a code snippet so we could take a look? Also, the `temperature` parameter is for sampling, so make sure to enable sampling by passing the `do_sample=True` argument to the `generate` method, otherwise `temperature` won't have any effect.<|||||>This is the code (extracted from the URL of the author of the model):
```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'tuner007/pegasus_summarizer'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)

def get_response(input_text, temp):
    batch = tokenizer([input_text], truncation=True, padding='longest', max_length=1024, return_tensors="pt").to(torch_device)
    gen_out = model.generate(**batch, max_length=128, num_beams=5, num_return_sequences=1, temperature=temp)
    output_text = tokenizer.batch_decode(gen_out, skip_special_tokens=True)
    return output_text
```
Adding `do_sample=True` did modify the output, taking the `temperature` into account, and modifying the number of beams produced a different output without having to sample. Is this the intended behavior?<|||||>If sampling is `True` and `num_beams` is > 1 then it'll do beam sampling. If you just want to do temperature sampling, you could set `num_beams=1`.<|||||>What I mean is that in the absence of the `do_sample=True` parameter, modifying `num_beams` changes the output but changing the `temperature` does not.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
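To make the resolution concrete, a small hedged variation of the snippet above that actually exercises temperature: sampling enabled and a single beam, so `temperature` is the only thing changing between calls. It reuses the `tokenizer`, `model` and `torch_device` defined in the snippet above.
```python
def get_sampled_response(input_text, temp):
    batch = tokenizer([input_text], truncation=True, padding='longest',
                      max_length=1024, return_tensors="pt").to(torch_device)
    gen_out = model.generate(
        **batch,
        max_length=128,
        do_sample=True,   # temperature only matters when sampling is on
        num_beams=1,      # plain temperature sampling, no beam search
        temperature=temp,
    )
    return tokenizer.batch_decode(gen_out, skip_special_tokens=True)
```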
transformers
13,898
closed
Cannot run regression with DebertaV2ForSequenceClassification and DebertaForSequenceClassification
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.2
- Platform: Linux-4.15.0-154-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.9.1+cu102 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: tried both CPU and GPU
- Using distributed or parallel set-up in script?: tried both

## Information
Model I am using (Bert, XLNet ...): DebertaV2ForSequenceClassification, DebertaForSequenceClassification
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
Regression with DebertaForSequenceClassification on my own dataset

## To reproduce
Steps to reproduce the behavior: Basically use the example script but set num_labels=1
```
from transformers import DebertaTokenizer, DebertaForSequenceClassification
import torch

tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
model = DebertaForSequenceClassification.from_pretrained('microsoft/deberta-base', num_labels=1)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
```
or
```
from transformers import DebertaV2Tokenizer, DebertaV2ForSequenceClassification
import torch

tokenizer = DebertaV2Tokenizer.from_pretrained('microsoft/deberta-v2-xlarge')
model = DebertaV2ForSequenceClassification.from_pretrained('microsoft/deberta-v2-xlarge', num_labels=1)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
```

## Expected behavior
outputs the regression

## Actual behavior
RuntimeError: "mse_cpu" not implemented for 'Long' (CPU)
RuntimeError: "mse_cuda" not implemented for 'Long' (GPU)

## Who can help
@LysandreJik @sgugger
10-06-2021 03:16:10
10-06-2021 03:16:10
I think this is because the `labels` tensor is of type `long` but the mse loss function expects float tensor. You could try passing a float tensor instead ```python labels = torch.tensor([1], dtype=torch.float).unsqueeze(0) ```<|||||>> I think this is because the `labels` tensor is of type `long` but the mse loss function expects float tensor. You could try passing a float tensor instead > > ```python > labels = torch.tensor([1], dtype=torch.float).unsqueeze(0) > ``` Worked thanks
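Putting the fix together, a complete version of the first snippet with float labels (regression with `num_labels=1` uses an MSE loss, which needs floating-point targets):
```python
import torch
from transformers import DebertaTokenizer, DebertaForSequenceClassification

tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
model = DebertaForSequenceClassification.from_pretrained('microsoft/deberta-base', num_labels=1)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1], dtype=torch.float).unsqueeze(0)  # float, not long
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits)
```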
transformers
13,897
closed
Fix hp search for non sigopt backends
# What does this PR do?
This PR fixes the bug introduced in #13572 for the hyperparameter search using Ray Tune or Optuna. More specifically, the bug introduced is on [this line](https://github.com/huggingface/transformers/blob/36fc401621297d2f917a0b4df4a0718067966c12/src/transformers/trainer.py#L1241)
```py
self.state.trial_params = hp_params(trial.assignments) if trial is not None else None
```
which used to be
```py
self.state.trial_params = hp_params(trial) if trial is not None else None
```
For the HP backends different from SigOpt, the `trial` object has no `assignments`, hence the failure.
Fixes #13875
10-06-2021 01:30:49
10-06-2021 01:30:49
transformers
13,896
closed
Fix trainer logging_nan_inf_filter in torch_xla mode
# Fix trainer logging_nan_inf_filter in torch_xla mode <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> `(torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))` will trigger compilation when running torch_xla. This PR fixes this bug. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone who's familiar with torch_xla. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-06-2021 01:28:24
10-06-2021 01:28:24
transformers
13,895
closed
modeling utils don't respect `subfolder` setting when looking up model
Not sure yet if it's all 3 look ups but we have a repo where the model is in a sub-folder and this fails: ``` $ python -c "from transformers import AutoTokenizer, AutoModel; mname = 'bigscience/tr3d-1B3-oscar-checkpoints'; ckpt='global_step117000'; AutoModel.from_pretrained(mname, revision=ckpt, subfolder=ckpt)" 2021-10-05 17:02:26.222006: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 404 Client Error: Not Found for url: https://huggingface.co/bigscience/tr3d-1B3-oscar-checkpoints/resolve/global_step117000/config.json Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py", line 546, in get_config_dict resolved_config_file = cached_path( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/file_utils.py", line 1402, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/file_utils.py", line 1574, in get_from_cache r.raise_for_status() File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/models.py", line 943, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/bigscience/tr3d-1B3-oscar-checkpoints/resolve/global_step117000/config.json During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 396, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/configuration_auto.py", line 527, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py", line 570, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 'bigscience/tr3d-1B3-oscar-checkpoints'. Make sure that: - 'bigscience/tr3d-1B3-oscar-checkpoints' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bigscience/tr3d-1B3-oscar-checkpoints' is the correct path to a directory containing a config.json file - or 'global_step117000' is a valid git identifier (branch name, a tag name, or a commit id) that exists for this model name as listed on its model page on 'https://huggingface.co/models' ``` It's trying to check: https://huggingface.co/bigscience/tr3d-1B3-oscar-checkpoints/resolve/global_step117000/config.json but this is an incorrect path it misses the `subfolder` - the correct path is: https://huggingface.co/bigscience/tr3d-1B3-oscar-checkpoints/resolve/global_step117000/global_step117000/config.json @LysandreJik, @sgugger
10-06-2021 00:07:17
10-06-2021 00:07:17
This is not a bug, the `from_pretrained` method does not inspect subfolders of repos to create a model. In the branch "global_step100500", the model file and its config should be at the root of the repo, not in a `global_step100500` subfolder. Why use branch plus subfolders? It's as if we had a folder for each version of Transformers on top of using git branches.<|||||>Because the model generated checkpoints as: ``` checkpoints/global_step100500 checkpoints/global_step101500 [...] ``` and we remapped each sub-folder to a branch. All the checkpoints were auto-converted from Megatron to HF in one swoop - 200 checkpoints - and then uploaded to hub. ----- > the from_pretrained method does not inspect subfolders of repos to create a model we aren't creating a model, but trying to load it. what is the point of the `subfolder` arg then? any objection to make it support `subfolders` on the hub? Surely, I can easily see a use-case where one would use multiple sub-folders in a single branch and not necessarily use branches. <|||||>I am confused, which `subfolder` argument are you talking about? This is not an argument accepted by the `from_pretrained` method.<|||||>oh, do you mean that `AutoModel.from_pretrained(mname, revision=ckpt, subfolder=ckpt)` just ignores the `subfolder` kwarg? I had someone report to me this an Issue, I assumed there was a `subfolder` arg as they were trying to use it.<|||||>Yes there is no `subfolder` argument. I am not super enthusiastic at the idea of adding it since we already have the branches to deal with versions and one can always clone the repo then use a local path to access subfolders. In your use case, there is clearly an overkill in the creation of your repo with branches and subfolders doing the same thing.<|||||>(cc @julien-c @LysandreJik @patrickvonplaten if we want to open the discussion on that feature)<|||||>OK, so there is `subfolder` in tokenizer https://huggingface.co/transformers/model_doc/auto.html#autotokenizer I had no idea we had it. So it means that the feature is either inconsistent with the other 2 or confusing to the user, or both. As I said I'm just posting an Issue on behalf of someone reporting this problem. The relevant doc entry at that URL is: > subfolder (str, optional): > In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here. but I think it may means something else. I'm not sure how to decipher the example - `subfolder` where? is `rag-token-base` the implied subfolder of `facebook`? Probably not, since that's the `model_or_path` arg already. Should the example say: > In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base/special/), specify it here. If it is an actual sub-folder then config's and model's `from_pretrained` should mirror it, since all model files reside together. <|||||>And also didn't we change the code a while back to assert if a `kwargs` is passed but not fully consumed or all keys are accounted for? If not, then we have a problem, since if a user makes a typo in the arg name and there is no assert they could be under a false impression that they are doing something they want when in fact they don't. I run into this with deepspeed config file where they don't validate it and when a typo happens it's usually quite painful since a problem doesn't get noticed early enough, as Deepspeed moves on with defaults. 
update: indeed: ``` AutoModel.from_pretrained(mname, reviison=ckpt) # dyslexic typist ``` Throws no exception and it definitely should. Should I file a separate Issue about it? <|||||>I think the subfolder for `tokenizers` was added for multi-tokenizers models such as RAG where two tokenizers are required and it's impossible to have both tokenizers at the root since they would overwrite each other.<|||||>I think this `subfolder` argument made sense for complex models such as RAG (and it could also make sense for complex encoder-decoders which don't use the same tokenizer for the encoder and decoder), but I don't see any need for that feature regarding models. @stas00 your point regarding the exception that isn't raised with a wrong argument is correct. This should ideally raise an error.<|||||>Aha! Thank you for the explanation, Patrick! So now a user starting with a tokenizer discovers `subfolder` and thinks the hub will support it for a model and config and they don't and the don't in a silent way, emitting a misleading error message. So I propose that we either: 1. support `subfolder` for config/model `from_pretrained` so that the 3 are in sync. 2. fix `from_pretrained` to validate its `kwargs` and assert on unexpected ones I think we have to do the later sooner or later regardless of the decision wrt `subfolder` It's totally fine if we don't support `subfolder` for BigScience - we will just have to re-work how we do things.<|||||>Most people aren't aware of it and it's really only for special cases - we maybe shouldn't even have added it. Overall I'm also not super exciting about a subfolder argument for models, wouldn't it be enough to use the branching system for that, like we do for GPT-J: https://huggingface.co/EleutherAI/gpt-j-6B Is the problem here that switching between branches takes a ridiculous amount of time? Overall, I think it's quite important that we make the hub as user-friendly as possible not just for after the model has been trained, but also for during training - so happy to discuss this a bit more!<|||||>> Aha! Thank you for the explanation, Patrick! > > So now a user starting with a tokenizer discovers `subfolder` and thinks the hub will support it for a model and config and they don't and the don't in a silent way, emitting a misleading error message. > > So I propose that we either: > > 1. support `subfolder` for config/model `from_pretrained` so that the 3 are in sync. > 2. fix `from_pretrained` to validate its `kwargs` and assert on unexpected ones > > I think we have to do the later sooner or later regardless of the decision wrt `subfolder` > > It's totally fine if we don't support `subfolder` for BigScience - we will just have to re-work how we do things. 2) We should definitely do this! Regarding 1.) - maybe we can also deprecate `subfolders` or make it private...<|||||>OK, I split off a dedicated issue to validate kwargs https://github.com/huggingface/transformers/issues/13912 We can close this one - unless you want to discuss removing support to `subfolder` for the tokenizer here. Thank you for the commentary.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
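A sketch of the clone-then-local-path workaround suggested earlier in the thread, using `huggingface_hub` to pull the branch and then pointing `from_pretrained` at the subfolder directly (repo and branch names follow the issue; treat the exact on-disk layout as an assumption):
```python
from huggingface_hub import snapshot_download
from transformers import AutoModel

local_dir = snapshot_download(
    "bigscience/tr3d-1B3-oscar-checkpoints",
    revision="global_step117000",
)
# the checkpoint files live in a subfolder of the same name inside the branch
model = AutoModel.from_pretrained(f"{local_dir}/global_step117000")
```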
transformers
13,894
closed
fix: replace asserts by value error
# What does this PR do? The PR addresses issue #12789. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - @willfrey - @LysandreJik - Anyone in the community is free to review the PR once the tests have passed.
10-05-2021 21:55:19
10-05-2021 21:55:19
transformers
13,893
closed
Inference of ONNX exported BART
If I understand correctly, one should use the generate function for Seq2Seq model inference. I have exported BART to ONNX, but ONNX InferenceSession().run() returns the result of BART.forward(), which is different from BART.generate(). Is there any way to get the BART.generate() output from ONNX? Thanks.
10-05-2021 21:48:19
10-05-2021 21:48:19
The `generate` method in ONNX isn't available yet. It is on the roadmap, but it is a big item that will require quite a bit of time to solve. You can follow the progress here: https://github.com/huggingface/transformers/pull/13578<|||||>Thanks for the response. Just a small clarification, in the pull request you mentioned is handling of beam search, if I want to use greedy decoding, I still cant have it for ONNX BART right?<|||||>Nope, for now you need to implement your own greedy search or beam search for generating.
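A rough greedy-decoding sketch along the lines of the comment above, driving an exported seq2seq graph with `onnxruntime`. The input and output names (`input_ids`, `attention_mask`, `decoder_input_ids`, `logits`) and the use of the EOS token as the decoder start token are assumptions that must be matched to the actual export.
```python
import numpy as np
import onnxruntime as ort
from transformers import BartTokenizer

def greedy_decode(onnx_path, text, max_new_tokens=64):
    session = ort.InferenceSession(onnx_path)
    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
    enc = tokenizer(text, return_tensors="np")
    decoder_ids = np.array([[tokenizer.eos_token_id]], dtype=np.int64)
    for _ in range(max_new_tokens):
        logits = session.run(
            ["logits"],
            {
                "input_ids": enc["input_ids"].astype(np.int64),
                "attention_mask": enc["attention_mask"].astype(np.int64),
                "decoder_input_ids": decoder_ids,
            },
        )[0]
        next_id = int(logits[0, -1].argmax())  # greedy pick of the next token
        decoder_ids = np.concatenate([decoder_ids, [[next_id]]], axis=1)
        if next_id == tokenizer.eos_token_id:
            break
    return tokenizer.decode(decoder_ids[0], skip_special_tokens=True)
```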
transformers
13,892
closed
Update parallelism.md
Modifying flexflow part. ### reviewers @stas00
10-05-2021 21:46:24
10-05-2021 21:46:24
all is good!
transformers
13,891
closed
minimal fixes to run DataCollatorForWholeWordMask with return_tensors="np" and return_tensors="tf"
# What does this PR do? This PR addresses https://github.com/huggingface/transformers/issues/13890 with the minimal fixes to run `DataCollatorForWholeWordMask` with `return_tensors="np"` and `return_tensors="tf"` Specific problems addressed: * Renamed `np_call` -> `numpy_call` * Call `_numpy_collate_batch` instead of` _tf_collate_batch` when returning numpy tensors * Fix size of `random_words` in `numpy_mask_tokens` * Clone tensorflow tensors with `tf.identity(tensor)` * Use tensorflow tensors’ built-in iteration instead of attempting to convert to list * Change calls to `tf.convert_to_tensor` with `dtype=tf.bool` to `tf.cast` with `dtype=tf.bool` I've only added a simple test to check for regressions vs all of the padded/unpadded cases as is done for `DataCollatorForLanguageModeling` Fixes https://github.com/huggingface/transformers/issues/13890 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. CC @Rocketknight1 (looks like you did the initial Numpy/TF implementation of these data collators)
10-05-2021 21:00:23
10-05-2021 21:00:23
Bump @Rocketknight1 @LysandreJik @sgugger since `transformers` releases frequently and it would be nice to have this functionality. This gives a vetted source for masked language modeling that is not https://github.com/google-research/bert/blob/master/create_pretraining_data.py or https://github.com/tensorflow/models/blob/master/official/nlp/data/create_pretraining_data.py in a package many people are going to be using for modeling (e.g., I could delete my code that is a copy/paste of the former and integrate against this instead for creating a large offline dataset).<|||||>Sorry for the delay, and thank you for the ping! This looks good to me. I've got one question about the `convert_to_tensor` versus `cast` call, but other than that this is a solid and necessary PR.
transformers
13,890
closed
DataCollatorForWholeWordMask does not work with return_tensors = "np" and return_tensors = "tf"
## Environment info
- `transformers` version: 4.11.2
- Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.9.1+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
@Rocketknight1

## Information
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

I'd like to use `DataCollatorForWholeWordMask` for masked language modeling in TensorFlow.

## To reproduce
##### `return_tensors = "np"`
```
from transformers.data import DataCollatorForWholeWordMask
from transformers import BertTokenizerFast

return_tensors = "np"
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
collate_fn = DataCollatorForWholeWordMask(tokenizer, return_tensors=return_tensors, mlm=True)
collate_fn([{"input_ids": list(range(10))}, {"input_ids": list(range(10))}])
```
```TypeError: numpy_mask_tokens() got an unexpected keyword argument 'special_tokens_mask'```
This is at least partly due to not implementing `DataCollatorMixin.numpy_call` in `DataCollatorForWholeWordMask` (`np_call` is implemented instead). There are also some issues with using TensorFlow tensors in the `np_call` function.

##### `return_tensors = "tf"`
```
from transformers.data import DataCollatorForWholeWordMask
from transformers import BertTokenizerFast

return_tensors = "tf"
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
collate_fn = DataCollatorForWholeWordMask(tokenizer, return_tensors=return_tensors, mlm=True)
collate_fn([{"input_ids": list(range(10))}, {"input_ids": list(range(10))}])
```
```AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'clone'```
`inputs.clone()` appears to have been copied from `torch_call`, which is not valid in TensorFlow.
10-05-2021 20:03:08
10-05-2021 20:03:08
transformers
13,889
closed
Update Kwargs for EncoderDecoderModel
# What does this PR do? EncoderDecoderModel can't get additional parameters for customized models. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) https://github.com/huggingface/transformers/issues/13887 Below, that function gets Kwargs for dynamic parameters, ``` def prepare_inputs_for_generation( self, input_ids, past=None, attention_mask=None, use_cache=None, encoder_outputs=None, **kwargs ): ``` but it doesn't reflect actually. I modified this function to ``` decoder_inputs = self.decoder.prepare_inputs_for_generation(input_ids, past=past) decoder_attention_mask = decoder_inputs["attention_mask"] if "attention_mask" in decoder_inputs else None input_dict = { "attention_mask": attention_mask, "decoder_attention_mask": decoder_attention_mask, "decoder_input_ids": decoder_inputs["input_ids"], "encoder_outputs": encoder_outputs, "past_key_values": decoder_inputs["past_key_values"], "use_cache": use_cache, } return input_dict ``` ## Before submitting - [] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik, @patrickvonplaten, @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-05-2021 19:51:10
10-05-2021 19:51:10
Update use_default_keys=True.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,888
closed
fix(integrations): consider test metrics
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Consider test metrics when rewriting metric names prior to logging. <!-- Remove if not applicable --> Fixes #13881 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
10-05-2021 19:45:34
10-05-2021 19:45:34
Thanks for reporting this issue @bablf (and for creating the fix). You can check this PR on your own example to see if it fixes it.
transformers
13,887
closed
Encoder Decoder Model Generation Issue
Thank you for your great work! I'm customizing an Enc-Dec model, but it can't receive additional parameters. How about modifying the line below, https://github.com/huggingface/transformers/blob/0ddadbf0a82c3d9f3df864e1302e3f04e25c2967/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L471 to
```
decoder_inputs = self.decoder.prepare_inputs_for_generation(input_ids, past=past, **kwargs)
```
10-05-2021 19:29:57
10-05-2021 19:29:57
Hi! It should be possible to do that, but could please explain the issue in more detail so we could better understand it? Thanks!<|||||>@patil-suraj @patrickvonplaten I did pull request https://github.com/huggingface/transformers/pull/13889, it's described below. When implementing [Pointer Network], it needs to get tokens of the source text. But that returns only default keys(e. g. decoder_input_ids). And it's bound "prepare_inputs_for_generation" by "generation_utils", so it can't add other parameters. So i add parameter "use_default_keys". Then variable "input_ids" can be extended from each language model head's "prepare_inputs_for_generation" modefied by users. Let's say, if using Bert2Bert model implementation of below, it can be getting "decoder_src_input_ids" on decoding when use **kwargs in parent function of "prepare_inputs_for_generation". ``` def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **kwargs): input_shape = input_ids.shape # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly if attention_mask is None: attention_mask = input_ids.new_ones(input_shape) # cut decoder_input_ids if past is used if past is not None: input_ids = input_ids[:, -1:] src_input_ids = kwargs.get('decoder_src_input_ids', None) return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past, "decoder_src_input_ids": src_input_ids} ``` [Pointer Network]: https://arxiv.org/abs/1704.04368 Thank you for reply!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing due to inactivity on the pull request
transformers
13,886
closed
Fixed horizon_length for PPLM
# What does this PR do? The current code can only run at horizon_length=1. However, It's not applicable if we want to run at horizon_length > 1 since the curr_probs is not updated in the for_loop. This PR is to update the curr_probs in the horizon_length in the for_loop. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
10-05-2021 18:59:59
10-05-2021 18:59:59
@sgugger sorry for bothering you, could you help me review my PR? Hope to see your review, thanks so much!<|||||>Hi @jacksukk, thanks for your PR but as mentioned in the [README](https://github.com/huggingface/transformers/tree/master/examples/research_projects) of the research projects, you need to tag the author of the project. We don't maintain that code ourselves.<|||||>Hi @sgugger, Thanks a lot for your reply! QQ Let me tag @dathath, could you help to review my code? Sorry for bothering and hope to see your comment. Thanks!<|||||>Hi @jacksukk, Thanks for the ping! I've seen this, and will review shortly.<|||||>Thanks a lot both!<|||||>Thanks a lot! I'm a big fan of both huggingface and PPLM. I'm delighted to have some commit here!
transformers
13,885
closed
pre-training roberta from scratch
Hi, I am currently pre-training a RoBERTa model from scratch based on the tutorial here: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb and I have 3 questions.
1) I see that the data used in this tutorial is separated line by line, and the train and test data are passed through the LineByLineTextDataset function before training. What is the purpose of this function? I couldn't find an explanation of it in the Huggingface documents. Even though I have guesses, I would like to know better. As I see it, the data in the tutorial is already split line by line, so what would have been lost if these data were not passed through the LineByLineTextDataset function before starting the training process?
2) The data I will use in the pre-training process is not split line by line. In my data, each example has multiple sentences, so I assume that I shouldn't use the LineByLineTextDataset function. Is there any other function I can use instead? Or would it be wrong to join the whole dataset into one very long text, then split it into sentences and use the LineByLineTextDataset function?
3) This is a slightly more technical question. In the RoBERTa paper, it was written that the size of the data used for the pre-training process is 160 GB. If I pre-train the model with 10-20 GB of data, would this decrease the performance of the pre-trained model drastically, or is it okay as long as the training arguments are set accordingly?
10-05-2021 18:05:26
10-05-2021 18:05:26
Hi @ozyurtf ! Thank you for opening the issue. It would be nice if you could ask such general questions on the [forum](https://discuss.huggingface.co/). We use issues for bug reports and feature requests.<|||||>Okay, I will. Thank you Suraj!
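As a hedged illustration relevant to question 2 above, the usual alternative to `LineByLineTextDataset` for multi-sentence examples is to tokenize everything, then concatenate and chunk into fixed-size blocks; the names and block size below are illustrative, not taken from the linked notebook.
```python
def group_texts(examples, block_size=512):
    # examples["input_ids"] is a list of tokenized documents of varying length
    concatenated = sum(examples["input_ids"], [])
    total_length = (len(concatenated) // block_size) * block_size
    return {
        "input_ids": [
            concatenated[i : i + block_size]
            for i in range(0, total_length, block_size)
        ]
    }
```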
transformers
13,884
closed
Autodocument the list of ONNX-supported models
# What does this PR do? This PR automatically updates the list in the serialization doc page when running `make fix-copies` and ensures it is up to date in the quality checks. cc @NielsRogge since you asked for it :-)
10-05-2021 18:05:12
10-05-2021 18:05:12
Awesome, thanks Sylvain!
transformers
13,883
closed
Fix typo in README.md
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fix typo in README.md ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-05-2021 17:36:23
10-05-2021 17:36:23
transformers
13,882
closed
Fix LED
The hidden states and last states returned by the model were incorrect in both TF and PT, so this hopefully resolves everything. In addition, this PR disables LayerDrop in the TF model, as its implementation would not work in any model compiled with Keras or `tf.function` Resolves #13776
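To illustrate why LayerDrop and compiled TF code do not mix, here is a minimal, hypothetical sketch (my own simplification, not the actual TF-LED code): the layer-skipping decision is made with Python-level randomness at call time, which changes the traced graph between calls.

```python
import random
import tensorflow as tf

# Minimal sketch only: a wrapper that randomly skips its inner layer during training.
class LayerDropWrapper(tf.keras.layers.Layer):
    def __init__(self, inner_layer, layerdrop=0.1):
        super().__init__()
        self.inner_layer = inner_layer
        self.layerdrop = layerdrop

    def call(self, hidden_states, training=False):
        # A Python-level `if` on a fresh random draw means the set of executed ops
        # differs from call to call, which is incompatible with a graph traced once
        # by `tf.function` or a compiled Keras model.
        if training and random.uniform(0, 1) < self.layerdrop:
            return hidden_states
        return self.inner_layer(hidden_states)
```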
10-05-2021 16:31:21
10-05-2021 16:31:21
Hmm, it seems like some Torch tests are now failing because they expect the longer padded output length. Not sure whether this is a problem with my fix or with the tests.<|||||>@Rocketknight1 - Thanks a lot for the fix! Some generation tests are failing... if you want, I can dive into the PR and fix them - lemme know :-)
transformers
13,881
closed
"test" prefix for rewrite_logs() function of WandbCallback
# 🚀 Feature request I would like the function `rewrite_logs()` of WandbCallback() to be compatible with the trainer's `.predict()` function. ## Motivation Currently the `rewrite_logs()` function looks like this: ```python def rewrite_logs(d): new_d = {} eval_prefix = "eval_" eval_prefix_len = len(eval_prefix) for k, v in d.items(): if k.startswith(eval_prefix): new_d["eval/" + k[eval_prefix_len:]] = v else: new_d["train/" + k] = v return new_d ``` but `.predict()` has an argument called `metric_key_prefix: str = "test"` which sets the prefix to `test`. This leads to the rather ugly metric name `train/test_f1` in wandb; `test/f1` would be better. ## Your contribution This should do the trick: ```python def rewrite_logs(d): new_d = {} eval_prefix = "eval_" eval_prefix_len = len(eval_prefix) test_prefix = "test_" test_prefix_len = len(test_prefix) for k, v in d.items(): if k.startswith(eval_prefix): new_d["eval/" + k[eval_prefix_len:]] = v elif k.startswith(test_prefix): new_d["test/" + k[test_prefix_len:]] = v else: new_d["train/" + k] = v return new_d ```
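A quick usage check of the proposed variant above (the metric values are made up; it calls the `rewrite_logs` defined just above):

```python
logs = {"test_f1": 0.87, "eval_loss": 0.42, "loss": 0.55}
print(rewrite_logs(logs))
# expected: {'test/f1': 0.87, 'eval/loss': 0.42, 'train/loss': 0.55}
```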
10-05-2021 16:14:54
10-05-2021 16:14:54
transformers
13,880
closed
GPT-J float16 model output stopping after first word
## Environment info - `transformers` version: 4.11.2 - Platform: Linux-5.4.0-1045-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help Possibly @StellaAthena? ## Information Model I am using (Bert, XLNet ...): [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) @ float16 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce The task I am working on is contextual question answering. The model seems to respond correctly to questions without a context, however the output will stop after the first word when a context is present. Snippet to reproduce the behaviour: ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_fp16 = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to('cuda') tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") prompt = """Please answer the question according to the above context. === Context: The United Kingdom of Great Britain and Northern Ireland, commonly known as the United Kingdom (UK) or Britain, is a sovereign country in north-western Europe, off the north-western coast of the European mainland. The United Kingdom includes the island of Great Britain, the north-eastern part of the island of Ireland, and many smaller islands within the British Isles. Northern Ireland shares a land border with the Republic of Ireland. Otherwise, the United Kingdom is surrounded by the Atlantic Ocean, with the North Sea to the east, the English Channel to the south and the Celtic Sea to the south-west, giving it the 12th-longest coastline in the world. The Irish Sea separates Great Britain and Ireland. The total area of the United Kingdom is 93,628 square miles. === Q: What surrounds the UK? A: Atlantic Ocean; North Sea; English Channel; Celtic Sea Q: What does the UK include? A:""" input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to('cuda') gen_tokens = model_fp16.generate(input_ids, do_sample=True, top_p=1.0, temperature=0.00001, max_length=100) result = tokenizer.batch_decode(gen_tokens)[0] completion = result[len(prompt):] if '\n' in completion: # output first row only completion = completion[:completion.index('\n')] print(completion.strip()) ``` ## Expected behaviour The above snippet will output only the first word: `Great` instead of the expected `Great Britain and Northern Ireland` (as it happens with the float32 model, which can be also seen live at https://6b.eleuther.ai/). Removing the context by replacing `prompt` with the following value makes the model output a full phrase. ```python prompt = """Q: What surrounds the UK? A: Atlantic Ocean; North Sea; English Channel; Celtic Sea Q: What does the UK include? A:""" ``` Output: `England, Scotland, Wales, Northern Ireland, Isle of Man, Channel Islands` I have considered the chance that this might be a limitation of the float16 model, however the fact that first words are guessed correctly makes me think the output is being stopped prematurely somewhere in the code.
10-05-2021 15:43:10
10-05-2021 15:43:10
Hi! This is because the`max_length` argument specifies the total length including the length of prompt tokens and here the length of prompt tokens is 209, which is more than `max_length` hence only one token is generated. If you instead want to specify how many new tokens to generate then use the `max_new_tokens` argument instead of `max_length`. It specifies the maximum numbers of tokens to generate, ignore the current number of tokens.<|||||>Hi @patil-suraj and thank you, I managed to solve it by specifying both parameters. Using only `max_new_tokens` did not work. ```python gen_tokens = model_fp16.generate(input_ids, do_sample=True, top_p=1.0, temperature=0.00001, max_new_tokens=100, max_length=len(input_ids[0])+100) ``` I think the feedback can be further improved: - If with my old parameters I was already beyond the maximum, it should have returned 0 tokens rather than 1. - The first time both parameters are used together, a warning is shown: `/home/ubuntu/.local/lib/python3.8/site-packages/transformers/generation_utils.py:874: UserWarning: Both max_length and max_new_tokens have been set but they serve the same purpose.`, which sounds like discouraging the practice. But as I said, both had to be used in order to retrieve more than 1 token in my example.<|||||>Thank you for reporting this, this is confusing indeed. What is happening is, when we don't pass `max_length` it is retrieved from `model.config.max_length` and both `max_length` and `max_new_tokens` are used for stopping criteria. https://github.com/huggingface/transformers/blob/aea7c5b0c8b8d0e03dea2046599f09e16357070f/src/transformers/generation_utils.py#L978-L980 And here since `max_length` is already reached, the generation stops before `max_new_tokens`. Only one of these arguments should be used by stopping criteria. cc @patrickvonplaten @Narsil IMO `max_new_tokens` if passed should take preference over `max_length`, so maybe we could set `max_length=None` when `max_new_tokens` is passed.<|||||>Is there a reason for defining `max_length` within the config ? Or for setting it that low ? Currently there's a warning being displayed when both are defined: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L872 Making `max_new_tokens` override `max_length` is doable, but IMO it will lead to confusion later on (as clearly `max_length` has been here longer and is more known even though a bit less practical). And if some script is already defining `max_length` in the wild and we start cutting it, it might lead to bad things ? We could attempt to use the longest, but again I am uncertain that it's the correct call (just like the shortest is undesirable in this case because it's too short, taking the longest might just lead to super long generations) Currently I am unsure why the config sets a hard limit on `max_length` that is smaller than `model_max_length` anyway tbh. `GPT-J` is a newcomer so maybe changing its config is the minimal change for this to happen ?<|||||>>Is there a reason for defining max_length within the config ? Or for setting it that low ? It's defined for some seq2seq models like bart-cnn which uses values from the original implementation. It's not defined in the config for auto-regressive models. 
But the issue is that `max_length` is set to a default value of 20 in `PretrainedConfig` https://github.com/huggingface/transformers/blob/5be59a364961a8e2fc986f1276cba977db87512a/src/transformers/configuration_utils.py#L256 So `max_length` is always defined even if it's not in the `config`, which is the case here. And this way `max_new_tokens` is never taken into account if it's more than 20. >Making max_new_tokens override max_length is doable, but IMO it will lead to confusion later on (as clearly max_length has been here longer and is more known even though a bit less practical). And if some script is already defining max_length in the wild and we start cutting it, it might lead to bad things? I agree. But `max_new_tokens` is a newly added argument and is not used much, and my guess is that most existing scripts still use `max_length`, so overriding it might not cause an issue, but I could be wrong; curious to hear what you think. Also, if it's not overridden, `max_new_tokens` has no effect because the default value of `max_length` is very small, which also leads to confusion.<|||||>If `max_new_tokens` is passed with a `non-None` value, it should overwrite `max_length` if `max_length` is not passed as an argument to `generate()`. This only affects people that actually pass `max_new_tokens` - so this is indeed a bug! Will make a PR to fix it!<|||||>I'm having the same issue, and installing from the linked PR's branch allows `max_new_tokens` to work for me without any warnings.
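For illustration, a minimal sketch (my own simplification, not the actual `generate()` patch) of the resolution order discussed above, where `max_new_tokens`, when given, takes precedence:

```python
def resolve_max_length(prompt_length, max_length=None, max_new_tokens=None, config_max_length=20):
    """Hedged sketch: decide the effective stopping length for generation."""
    if max_new_tokens is not None:
        return prompt_length + max_new_tokens
    return max_length if max_length is not None else config_max_length

# With the 209-token prompt from this issue:
print(resolve_max_length(209, max_new_tokens=100))  # 309 tokens total
print(resolve_max_length(209))                      # 20 -> generation stops almost immediately
```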
transformers
13,879
closed
[Feature] Add an option to use custom pad token_id in `tokenizer.pad()`
# 🚀 Feature request Adds an option to use custom pad token_id in `tokenizer.pad()`. If this option is not specified, using tokenizer-level `pad_token`. Here is an example below:: ``` input = {"input_ids": [[5, 2], [3]]} padded = tokenizer.pad(input, pad_id=-100) #Output: {'input_ids': [[5, 2], [3, -100]], 'attention_mask': [[1, 1], [1, 0]]} ``` ## Motivation Since `tokenizer.pad()` is a public function that can be used not only for regular tokenization but padding existing token ids, adding this option can enable additional use cases. For example, the `labels` of `BertForMaskedLM` accepts `-100` as the masked loss position, and we can thereby use the custom token_id to pad positions we don't want to calculate. ## Your contribution I can open a PR for it. cc @LysandreJik
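Until such a `pad_id` argument exists, a minimal workaround sketch (a hypothetical helper, not part of the tokenizer API) is to pad the label lists manually:

```python
def pad_labels(batch_labels, pad_value=-100):
    """Pad nested label lists to the batch max length with a custom value."""
    max_len = max(len(labels) for labels in batch_labels)
    return [labels + [pad_value] * (max_len - len(labels)) for labels in batch_labels]

print(pad_labels([[5, 2], [3]]))  # [[5, 2], [3, -100]]
```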
10-05-2021 15:07:48
10-05-2021 15:07:48
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,878
closed
Model isn't saved on "save_steps"
I am trying to train a T5 model on Colab. Using the examples, I run the following code: ``` !python run_t5_mlm_flax.py \ --output_dir="/content/drive/MyDrive/Pouramini/" \ --model_type="t5" \ --config_name="/content/drive/MyDrive/Pouramini/" \ --tokenizer_name="/content/drive/MyDrive/Pouramini/" \ --dataset_name="oscar" \ --dataset_config_name="unshuffled_deduplicated_fa" \ --max_seq_length="512" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="64" \ --adafactor \ --learning_rate="0.005" \ --weight_decay="0.001" \ --warmup_steps="2000" \ --overwrite_output_dir \ --logging_steps="500" \ --save_steps="100" \ --eval_steps="100" ``` Since I want to be able to resume training, I set `save_steps` to 100. However, while the progress bar shows step ``` 7% 510/7794 [1:11:36<24:59:20, 12.35s/ba] ``` there is nothing in my `output_dir` except for the config and tokenizer JSON files, which existed before training.
10-05-2021 14:40:57
10-05-2021 14:40:57
I also set `--max_steps="150"` to see if it saves the model after 150 steps, but nothing happened and it's still running at step 300!<|||||>Thank you for reporting this! I will take a look.<|||||>@patil-suraj Okay, it's now training. In fact, it was still in the dataset pre-processing stage and I thought it was already training. Now it saves the model on `save_steps`.
transformers
13,877
closed
[Speech Examples] Add pytorch speech pretraining
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds a Wav2Vec2 pretraining example for PyTorch. Some experiments are still running, but it would be great if the PR could already be reviewed. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-05-2021 14:14:00
10-05-2021 14:14:00
Merge after: https://github.com/huggingface/transformers/pull/13961<|||||>Question: why were the other examples not ported over? The deepspeed tests use `wav2vec2/run_asr.py`, so I'm not quite sure how to do the transition. This is one of the most non-standard models, so it'd be great to be able to continue running the deepspeed tests.
transformers
13,876
closed
Fix list index out of range when padding nested empty lists
# What does this PR do? This PR fixes `list index out of range` error when nested empty lists are supplied. ``` from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") padded = tokenizer.pad({"input_ids": [[], [], []] }) print(padded) ``` ``` Traceback (most recent call last): File "test.py", line 6, in <module> padded = tokenizer.pad({"input_ids": [[], [], []] }) File "/src/transformers/tokenization_utils_base.py", line 2730, in pad while len(required_input[index]) == 0: IndexError: list index out of range ``` @LysandreJik
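For context, a minimal sketch of the kind of guard this fix needs (illustrative only, not the exact patch): the scan for a non-empty example has to stop at the end of the batch instead of indexing past it.

```python
required_input = [[], [], []]

index = 0
# Bound the scan by len(required_input) so an all-empty batch cannot index out of range.
while index < len(required_input) and len(required_input[index]) == 0:
    index += 1

all_empty = index == len(required_input)
print(all_empty)  # True -> padding can fall back to returning the inputs unchanged
```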
10-05-2021 13:44:39
10-05-2021 13:44:39
What is the use-case of passing nested empty lists like this?<|||||>> What is the use-case of passing nested empty lists like this? I'm using `tokenizer.pad` to pad question answering labels like the below snippet, ```python # labels may look like this, where it should be padded to (batch_size, max_answers), # and sometimes the entire batch might have no answer being like this: `[[], [], []]`, so I proposed this PR. start_position = [ [3, 5, 10], # 3 answers [4], # 1 answer [], # no answer ] end_position = [ [4, 7, 14], [8], [], ] ``` Also, could you checkout this issue #13879 and see if that makes sense to you? (similar use-case as this PR.) Thanks a lot!<|||||>I would appreciate it greatly if you could take a look at this when you're back @SaulLu!
transformers
13,875
closed
Hyperparameter tuning example code is not working either with Ray or Optuna backend.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.2 - Platform: Google Colab - Python version: 3.7.12 - PyTorch version (GPU?): 1.9.0+cu102 - Tensorflow version (GPU?): 2.6.0 - Using GPU in script?: NO - Using distributed or parallel set-up in script?: YES ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Library: - ray/raytune: @richardliaw, @amogkam - trainer: @sgugger Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. --> @richardliaw, @amogkam @sgugger ## Information The code for the hyperparameter search shared on https://github.com/huggingface/blog/blob/master/ray-tune.md does not work anymore. I tried the code on Colab Notebook: https://colab.research.google.com/drive/1KsoCV6z6eD3FqE5HbVwlOqejFyppapxS?usp=sharing The problem arises when using: * [x] the official example scripts: https://github.com/huggingface/blog/blob/master/ray-tune.md * [ ] my own modified scripts: The tasks I am working on is: * [x] an official GLUE/SQUaD task: GLUE * [ ] my own task or dataset: ## To reproduce Steps to reproduce the behavior: Follow the steps for the ray tune tutorial: https://github.com/huggingface/blog/blob/master/ray-tune.md Error with Ray backend: ``` <Crop some error code that is identical to the part below> == Status == Memory usage on this node: 2.7/12.7 GiB Using FIFO scheduling algorithm. Resources requested: 1.0/2 CPUs, 0/0 GPUs, 0.0/7.36 GiB heap, 0.0/3.68 GiB objects Result logdir: /root/ray_results/_objective_2021-10-05_11-51-07 Number of trials: 10/10 (9 ERROR, 1 RUNNING) +------------------------+----------+-------+-----------------+--------------------+-------------------------------+----------+ | Trial name | status | loc | learning_rate | num_train_epochs | per_device_train_batch_size | seed | |------------------------+----------+-------+-----------------+--------------------+-------------------------------+----------| | _objective_86e23_00009 | RUNNING | | 7.96157e-06 | 2 | 32 | 38.0065 | | _objective_86e23_00000 | ERROR | | 5.61152e-06 | 5 | 64 | 8.15396 | | _objective_86e23_00001 | ERROR | | 1.56207e-05 | 2 | 16 | 7.08379 | | _objective_86e23_00002 | ERROR | | 8.28892e-06 | 5 | 16 | 24.4435 | | _objective_86e23_00003 | ERROR | | 1.09943e-06 | 2 | 8 | 29.158 | | _objective_86e23_00004 | ERROR | | 2.3102e-06 | 5 | 8 | 25.0818 | | _objective_86e23_00005 | ERROR | | 1.12076e-05 | 4 | 16 | 1.89943 | | _objective_86e23_00006 | ERROR | | 1.67381e-05 | 2 | 32 | 2.81996 | | _objective_86e23_00007 | ERROR | | 5.4041e-06 | 3 | 32 | 15.916 | | _objective_86e23_00008 | ERROR | | 1.53049e-05 | 3 | 64 | 34.5377 | +------------------------+----------+-------+-----------------+--------------------+-------------------------------+----------+ Number of errored trials: 9 +------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Trial name | # failures | error file | 
|------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | _objective_86e23_00000 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00000_0_learning_rate=5.6115e-06,num_train_epochs=5,per_device_train_batch_size=64,seed=8.154_2021-10-05_11-51-07/error.txt | | _objective_86e23_00001 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00001_1_learning_rate=1.5621e-05,num_train_epochs=2,per_device_train_batch_size=16,seed=7.0838_2021-10-05_11-51-08/error.txt | | _objective_86e23_00002 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00002_2_learning_rate=8.2889e-06,num_train_epochs=5,per_device_train_batch_size=16,seed=24.443_2021-10-05_11-51-09/error.txt | | _objective_86e23_00003 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00003_3_learning_rate=1.0994e-06,num_train_epochs=2,per_device_train_batch_size=8,seed=29.158_2021-10-05_11-51-17/error.txt | | _objective_86e23_00004 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00004_4_learning_rate=2.3102e-06,num_train_epochs=5,per_device_train_batch_size=8,seed=25.082_2021-10-05_11-51-17/error.txt | | _objective_86e23_00005 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00005_5_learning_rate=1.1208e-05,num_train_epochs=4,per_device_train_batch_size=16,seed=1.8994_2021-10-05_11-51-25/error.txt | | _objective_86e23_00006 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00006_6_learning_rate=1.6738e-05,num_train_epochs=2,per_device_train_batch_size=32,seed=2.82_2021-10-05_11-51-26/error.txt | | _objective_86e23_00007 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00007_7_learning_rate=5.4041e-06,num_train_epochs=3,per_device_train_batch_size=32,seed=15.916_2021-10-05_11-51-34/error.txt | | _objective_86e23_00008 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00008_8_learning_rate=1.5305e-05,num_train_epochs=3,per_device_train_batch_size=64,seed=34.538_2021-10-05_11-51-35/error.txt | +------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2021-10-05 11:51:52,275 ERROR trial_runner.py:773 -- Trial _objective_86e23_00009: Error processing event. 
Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/ray/tune/trial_runner.py", line 739, in _process_trial results = self.trial_executor.fetch_result(trial) File "/usr/local/lib/python3.7/dist-packages/ray/tune/ray_trial_executor.py", line 746, in fetch_result result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT) File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 82, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1621, in get raise value.as_instanceof_cause() ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=823, ip=172.28.0.2, repr=<ray.tune.function_runner.ImplicitFunc object at 0x7f6b715d4990>) File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 178, in train_buffered result = self.train() File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 237, in train result = self.step() File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 379, in step self._report_thread_runner_error(block=True) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 527, in _report_thread_runner_error ("Trial raised an exception. Traceback:\n{}".format(err_tb_str) ray.tune.error.TuneError: Trial raised an exception. Traceback: ray::ImplicitFunc.train_buffered() (pid=823, ip=172.28.0.2, repr=<ray.tune.function_runner.ImplicitFunc object at 0x7f6b715d4990>) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run self._entrypoint() File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint self._status_reporter.get_checkpoint()) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func output = fn() File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 282, in dynamic_modules_import_trainable return trainable(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner trainable(config, **fn_kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 183, in _objective local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1241, in train self.state.trial_params = hp_params(trial.assignments) if trial is not None else None AttributeError: 'dict' object has no attribute 'assignments' Result for _objective_86e23_00009: {} == Status == Memory usage on this node: 2.2/12.7 GiB Using FIFO scheduling algorithm. 
Resources requested: 0/2 CPUs, 0/0 GPUs, 0.0/7.36 GiB heap, 0.0/3.68 GiB objects Result logdir: /root/ray_results/_objective_2021-10-05_11-51-07 Number of trials: 10/10 (10 ERROR) +------------------------+----------+-------+-----------------+--------------------+-------------------------------+----------+ | Trial name | status | loc | learning_rate | num_train_epochs | per_device_train_batch_size | seed | |------------------------+----------+-------+-----------------+--------------------+-------------------------------+----------| | _objective_86e23_00000 | ERROR | | 5.61152e-06 | 5 | 64 | 8.15396 | | _objective_86e23_00001 | ERROR | | 1.56207e-05 | 2 | 16 | 7.08379 | | _objective_86e23_00002 | ERROR | | 8.28892e-06 | 5 | 16 | 24.4435 | | _objective_86e23_00003 | ERROR | | 1.09943e-06 | 2 | 8 | 29.158 | | _objective_86e23_00004 | ERROR | | 2.3102e-06 | 5 | 8 | 25.0818 | | _objective_86e23_00005 | ERROR | | 1.12076e-05 | 4 | 16 | 1.89943 | | _objective_86e23_00006 | ERROR | | 1.67381e-05 | 2 | 32 | 2.81996 | | _objective_86e23_00007 | ERROR | | 5.4041e-06 | 3 | 32 | 15.916 | | _objective_86e23_00008 | ERROR | | 1.53049e-05 | 3 | 64 | 34.5377 | | _objective_86e23_00009 | ERROR | | 7.96157e-06 | 2 | 32 | 38.0065 | +------------------------+----------+-------+-----------------+--------------------+-------------------------------+----------+ Number of errored trials: 10 +------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Trial name | # failures | error file | |------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | _objective_86e23_00000 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00000_0_learning_rate=5.6115e-06,num_train_epochs=5,per_device_train_batch_size=64,seed=8.154_2021-10-05_11-51-07/error.txt | | _objective_86e23_00001 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00001_1_learning_rate=1.5621e-05,num_train_epochs=2,per_device_train_batch_size=16,seed=7.0838_2021-10-05_11-51-08/error.txt | | _objective_86e23_00002 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00002_2_learning_rate=8.2889e-06,num_train_epochs=5,per_device_train_batch_size=16,seed=24.443_2021-10-05_11-51-09/error.txt | | _objective_86e23_00003 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00003_3_learning_rate=1.0994e-06,num_train_epochs=2,per_device_train_batch_size=8,seed=29.158_2021-10-05_11-51-17/error.txt | | _objective_86e23_00004 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00004_4_learning_rate=2.3102e-06,num_train_epochs=5,per_device_train_batch_size=8,seed=25.082_2021-10-05_11-51-17/error.txt | | _objective_86e23_00005 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00005_5_learning_rate=1.1208e-05,num_train_epochs=4,per_device_train_batch_size=16,seed=1.8994_2021-10-05_11-51-25/error.txt | | _objective_86e23_00006 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00006_6_learning_rate=1.6738e-05,num_train_epochs=2,per_device_train_batch_size=32,seed=2.82_2021-10-05_11-51-26/error.txt | | _objective_86e23_00007 | 1 | 
/root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00007_7_learning_rate=5.4041e-06,num_train_epochs=3,per_device_train_batch_size=32,seed=15.916_2021-10-05_11-51-34/error.txt | | _objective_86e23_00008 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00008_8_learning_rate=1.5305e-05,num_train_epochs=3,per_device_train_batch_size=64,seed=34.538_2021-10-05_11-51-35/error.txt | | _objective_86e23_00009 | 1 | /root/ray_results/_objective_2021-10-05_11-51-07/_objective_86e23_00009_9_learning_rate=7.9616e-06,num_train_epochs=2,per_device_train_batch_size=32,seed=38.007_2021-10-05_11-51-43/error.txt | +------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ (pid=823) Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_layer_norm.weight', 'vocab_transform.bias', 'vocab_projector.weight', 'vocab_layer_norm.bias', 'vocab_projector.bias'] (pid=823) - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). (pid=823) - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). (pid=823) Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['classifier.bias', 'pre_classifier.weight', 'classifier.weight', 'pre_classifier.bias'] (pid=823) You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. (pid=823) 2021-10-05 11:51:52,224 ERROR function_runner.py:266 -- Runner Thread raised error. 
(pid=823) Traceback (most recent call last): (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run (pid=823) self._entrypoint() (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint (pid=823) self._status_reporter.get_checkpoint()) (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func (pid=823) output = fn() (pid=823) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 282, in dynamic_modules_import_trainable (pid=823) return trainable(*args, **kwargs) (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner (pid=823) trainable(config, **fn_kwargs) (pid=823) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 183, in _objective (pid=823) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) (pid=823) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1241, in train (pid=823) self.state.trial_params = hp_params(trial.assignments) if trial is not None else None (pid=823) AttributeError: 'dict' object has no attribute 'assignments' (pid=823) Exception in thread Thread-2: (pid=823) Traceback (most recent call last): (pid=823) File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner (pid=823) self.run() (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 279, in run (pid=823) raise e (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run (pid=823) self._entrypoint() (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint (pid=823) self._status_reporter.get_checkpoint()) (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func (pid=823) output = fn() (pid=823) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 282, in dynamic_modules_import_trainable (pid=823) return trainable(*args, **kwargs) (pid=823) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner (pid=823) trainable(config, **fn_kwargs) (pid=823) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 183, in _objective (pid=823) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) (pid=823) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1241, in train (pid=823) self.state.trial_params = hp_params(trial.assignments) if trial is not None else None (pid=823) AttributeError: 'dict' object has no attribute 'assignments' (pid=823) --------------------------------------------------------------------------- TuneError Traceback (most recent call last) <ipython-input-9-1f1a81b84f40> in <module>() 42 direction="maximize", 43 backend="ray", ---> 44 n_trials=10 # number of trials 45 ) 2 frames /usr/local/lib/python3.7/dist-packages/ray/tune/tune.py in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, queue_trials, reuse_actors, trial_executor, raise_on_failed_trial, 
callbacks, loggers, ray_auto_init, run_errored_only, global_checkpoint_period, with_server, upload_dir, sync_to_cloud, sync_to_driver, sync_on_checkpoint, _remote) 553 if incomplete_trials: 554 if raise_on_failed_trial and not state[signal.SIGINT]: --> 555 raise TuneError("Trials did not complete", incomplete_trials) 556 else: 557 logger.error("Trials did not complete: %s", incomplete_trials) TuneError: ('Trials did not complete', [_objective_86e23_00000, _objective_86e23_00001, _objective_86e23_00002, _objective_86e23_00003, _objective_86e23_00004, _objective_86e23_00005, _objective_86e23_00006, _objective_86e23_00007, _objective_86e23_00008, _objective_86e23_00009]) ``` And this is the error I get when I switched Ray backend to Optuna: ``` loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333 Model config DistilBertConfig { "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.11.2", "vocab_size": 30522 } loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99 loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4 loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79 loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333 Model config DistilBertConfig { "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.11.2", "vocab_size": 30522 } Reusing dataset glue (/root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100% 3/3 [00:00<00:00, 47.70it/s] Loading cached processed dataset at 
/root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6be500ff95cfa94a.arrow Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0208e5893d9737cc.arrow 100% 2/2 [00:00<00:00, 6.15ba/s] PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-). loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333 Model config DistilBertConfig { "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.11.2", "vocab_size": 30522 } loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.bias', 'vocab_transform.weight', 'vocab_projector.weight', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.bias'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.bias', 'pre_classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
[I 2021-10-05 11:57:53,131] A new study created in memory with name: no-name-b8bec492-da86-496a-8dc9-889c57c2949a Trial: loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333 Model config DistilBertConfig { "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.11.2", "vocab_size": 30522 } loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.bias', 'vocab_transform.weight', 'vocab_projector.weight', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.bias'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.bias', 'pre_classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ***** Running training ***** Num examples = 3668 Num Epochs = 2 Instantaneous batch size per device = 64 Total train batch size (w. 
parallel, distributed & accumulation) = 64 Gradient Accumulation steps = 1 Total optimization steps = 116 [W 2021-10-05 11:57:54,441] Trial 0 failed because of the following error: AttributeError("'Trial' object has no attribute 'assignments'") Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 213, in _run_trial value_or_values = func(trial) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 150, in _objective trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1241, in train self.state.trial_params = hp_params(trial.assignments) if trial is not None else None AttributeError: 'Trial' object has no attribute 'assignments' --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-10-ec85f0236770> in <module>() 42 direction="maximize", 43 backend="optuna", ---> 44 n_trials=10 # number of trials 45 ) 8 frames /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1239 self.callback_handler.train_dataloader = train_dataloader 1240 self.state.trial_name = self.hp_name(trial) if self.hp_name is not None else None -> 1241 self.state.trial_params = hp_params(trial.assignments) if trial is not None else None 1242 # This should be the same if the state has been saved but in case the training arguments changed, it's safer 1243 # to set this after the load. AttributeError: 'Trial' object has no attribute 'assignments' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> <!-- A clear and concise description of what you would expect to happen. -->
10-05-2021 12:40:30
10-05-2021 12:40:30
This hit me too. It seems related to #13572. I can't find a workaround to get the "ray" backend functioning in v4.11.2.<|||||>Thanks for raising this issue, @tlby @tolgayan. Let me find someone on my team to help address this!<|||||>Yes, this was indeed introduced on our side by merging #13572 without taking enough care. We will fix it tomorrow and make a patch release. Sorry about this!
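Until the patch release lands, pinning the previous release (`pip install "transformers==4.10.3"`) avoids the crash. For illustration only, here is a hedged sketch (not the actual fix) of normalising the different `trial` objects the backends pass in; the attribute names below are assumptions based on the tracebacks above:

```python
def extract_hparams(trial):
    """Hedged sketch: turn whatever the HP-search backend passes in into a plain dict."""
    if hasattr(trial, "assignments"):   # SigOpt-style object exposing .assignments
        return dict(trial.assignments)
    if hasattr(trial, "params"):        # optuna.trial.Trial exposes .params
        return dict(trial.params)
    return dict(trial)                  # Ray Tune passes a plain config dict

print(extract_hparams({"learning_rate": 3e-5, "num_train_epochs": 2}))
```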
transformers
13,874
closed
Add TrOCR + VisionEncoderDecoderModel
# What does this PR do? This PR adds the [TrOCR models](https://github.com/microsoft/unilm/tree/6f60612e7cc86a2a1ae85c47231507a587ab4e01/trocr) by Microsoft, together with a new `VisionEncoderDecoderModel` class (which should be used in order to use TrOCR, as it consists of an image encoder and an autoregressive text decoder). This PR is very similar to #13186, it's just the vision counterpart. Here's how to use this model: ``` from transformers import TrOCRProcessor, VisionEncoderDecoderModel import requests from PIL import Image processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") pixel_values = processor(image, return_tensors="pt").pixel_values generated_ids = model.generate(input_ids=pixel_values) print(processor.batch_decode(generated_ids)[0]) ``` There's also this Colab notebook for quick inference: https://colab.research.google.com/drive/1qCuqlqc4V9LZhPkxIi_XqCCTQrDhkmHi?usp=sharing ! A big disclaimer: the TrOCR models do not directly work on entire images of PDFs etc. They are trained on single-text-line images. One needs a text detector first, before applying TrOCR. TrOCR is a text recognition model, not a text detection model. One typically needs both in a sequence in order to extract all text from a given image. ## Important note: The current design of the existing `EncoderDecoderModel`/`FlaxEncoderDecoderModel` is that, if the `hidden_size` of the encoder/decoder don't match, one creates a single projection layer to project the `encoder_hidden_states` to the same number of channels as the decoder. However, for TrOCR, this is not how it's done. Instead, one projects the `encoder_hidden_states` to the same dimension as the decoder when projecting to keys and values, in each decoder layer. Therefore, my proposal is to add an attribute to the config of the decoder called `encoder_hidden_size`, which, if specified, will be used in the `VisionEncoderDecoderModel` class to not project the encoder hidden states. Instead, it will be used when instantiating the key and value projection layers. For consistency, we could also add this to the existing EncoderDecoderModel/FlaxEncoderDecoderModel. Also relevant for the FlaxVisionEncoderDecoderModel PR, see #13359. ## To do: - [ ] currently you have to pass `input_ids=pixel_values` to the `generate` method, which is not ideal. I'm in favor of letting the generate method accept an argument called `inputs`, which can work with text, images, speech. Related to #13825. - [ ] make sure all vision models accept `attention_mask` as input. Related to #13764.
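For clarity, a minimal sketch of the cross-attention projection layout described in the "Important note" above (the dimension values are illustrative assumptions, not the released checkpoints' exact sizes):

```python
import torch.nn as nn

class CrossAttentionProjections(nn.Module):
    """Hedged sketch: keys/values project directly from the encoder width to the decoder width."""
    def __init__(self, decoder_hidden_size=1024, encoder_hidden_size=768):
        super().__init__()
        self.q_proj = nn.Linear(decoder_hidden_size, decoder_hidden_size)
        # No separate projection of encoder_hidden_states is needed: the k/v projections
        # absorb the dimension change inside every decoder layer.
        self.k_proj = nn.Linear(encoder_hidden_size, decoder_hidden_size)
        self.v_proj = nn.Linear(encoder_hidden_size, decoder_hidden_size)
```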
10-05-2021 12:01:01
10-05-2021 12:01:01
transformers
13,873
closed
Fixing question-answering with long contexts
Fixes #13811 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-05-2021 10:26:12
10-05-2021 10:26:12
Having a similar issue where I used a QA model from transformers, tweaked the model in Label Studio by making some annotations, and then tried to load the model back again. The model is pulling out the correct answers, but seemingly `handle_impossible_answer` isn't working, because it gives an answer for every question even when the question is irrelevant. What's even weirder is that it doesn't do this in Label Studio's interface, so seemingly `handle_impossible_answer` is working on that side. The contexts I pass through the pipeline are also quite long. I made sure I have the most up-to-date transformers model, and the models were saved out with save_pretrained ``` model_path = "PATH" model = AutoModelForQuestionAnswering.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) QA = pipeline('question-answering', model=model, tokenizer=tokenizer) # Other code... answers = QA(question=questions, context=context, top_k=1, max_answer_len=32, handle_impossible_answer=True) ``` The answers that I get out of here have an answer for every question, even when it doesn't need to be answered. I've tried loading the models multiple ways - the only way I got `handle_impossible_answer` to work as intended is if the tokenizer wasn't the AutoTokenizer that I had used here. But then the answers it gave me were complete garbage. Anybody else run into this issue with AutoTokenizer?<|||||>The only way I can think the tokenizer could be involved would be if it doesn't include EOS/BOS. For the null answer to come out, the score has to be the highest when the start and end logits are the first one (which should be a BOS/CLS token). Did you finetune your model on such null answers? Maybe the finetuning just made that output impossible for the model to produce?
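To make the null-answer logic above concrete, here is a simplified sketch (my own approximation, not the pipeline's exact post-processing): the "no answer" score is the CLS position's start+end logit, compared against the best real span.

```python
import numpy as np

def has_answer(start_logits, end_logits, null_threshold=0.0):
    """Hedged sketch of SQuAD v2-style null-answer scoring."""
    null_score = start_logits[0] + end_logits[0]  # CLS position
    best_span = max(
        start_logits[i] + end_logits[j]
        for i in range(1, len(start_logits))
        for j in range(i, len(end_logits))
    )
    return best_span - null_score > null_threshold

# A model that never puts mass on the CLS position will always "find" an answer.
print(has_answer(np.array([5.0, 0.1, 0.2]), np.array([4.0, 0.3, 0.1])))  # False -> impossible
```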
transformers
13,872
closed
Fix flax summarization example: save checkpoint after each epoch and push checkpoint to the hub
# What does this PR do? Current `run_summarization_flax.py` saves the checkpoint & pushes to the hub UNDER the block `if training_args.do_predict:` https://github.com/huggingface/transformers/blob/7051b892671982717388dd6dfca368a90da60252/examples/flax/summarization/run_summarization_flax.py#L811 This block is also outside the epoch loop, so saving/pushing won't happen after each epoch as expected. This PR moves the saving/pushing block to the right place. ## Who can review? @patil-suraj @patrickvonplaten
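A simplified, hypothetical sketch of the intended control flow (the stub functions stand in for the real helpers in `run_summarization_flax.py`):

```python
def train_one_epoch(): ...
def evaluate(): ...
def save_and_push(epoch): print(f"saving + pushing checkpoint after epoch {epoch}")

num_train_epochs, do_predict = 3, True
for epoch in range(num_train_epochs):
    train_one_epoch()
    evaluate()
    save_and_push(epoch)   # now runs once per epoch
if do_predict:
    pass                   # prediction no longer gates checkpointing
```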
10-05-2021 10:10:57
10-05-2021 10:10:57
transformers
13,871
closed
Replace assert statements with exceptions
# What does this PR do? Replaces the assertions in file_utils.py used in class ModelOutput for safety and consistency checks with ValueError exceptions. Contributes towards fixing issue #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
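An illustrative before/after of the pattern being changed (a made-up example, not the exact diff in `file_utils.py`): `assert` statements are stripped under `python -O`, whereas a `ValueError` always fires and is easier to catch.

```python
def check_fields(fields):
    # before: assert len(fields) >= 1, "ModelOutput has no fields."
    if len(fields) < 1:
        raise ValueError("ModelOutput has no fields.")

check_fields(["logits"])  # passes silently
```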
10-05-2021 09:16:41
10-05-2021 09:16:41
transformers
13,870
closed
Computer shuts down when multiple gpus are used
## Environment info - transformers version: 4.10.3 - Platform: Linux-4.15.0-159-generic-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyTorch version (GPU?): 1.9.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes (through default settings) ### Who can help Library: - trainer: @sgugger ## Information Model I am using: AutoModelForSequenceClassification The problem arises when using: * [x] the official example scripts: https://huggingface.co/transformers/training.html#fine-tuning-in-pytorch-with-the-trainer-api The task I am working on is: * [x] an official GLUE/SQUaD task: IMDB task ## To reproduce Steps to reproduce the behavior: here is the implementation that was run ``` import torch from transformers import TrainingArguments from transformers import Trainer from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer from datasets import load_from_disk device = torch.device('cuda:0') raw_datasets = load_from_disk('imdb') tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') model = AutoModelForSequenceClassification.from_pretrained('bert-base-cased', num_labels=2) def tokenize_function(examples): tokenized = tokenizer( examples["text"], padding="max_length", truncation=True) return tokenized tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) tokenized_datasets.set_format('torch', columns=[ 'attention_mask', 'input_ids', 'label', 'token_type_ids']) small_train_dataset = tokenized_datasets["train"].shuffle( seed=42).select(range(10)) small_eval_dataset = tokenized_datasets["test"].shuffle( seed=42).select(range(10)) training_args = TrainingArguments("test_trainer") model.to(device=device) trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset ) trainer.train() ``` As the trainer starts to train, the computer shuts off. The issue was observed only when there were two GPUs (both Titan RTX). It doesn't seem to be a load or power issue, because the problem occurs even with very small data sizes. Through some testing, I suspect that the issue occurs when the trainer tries to use both GPUs. Since the trainer's default behavior is to use both GPUs if possible, I was able to avoid the issue by setting CUDA_VISIBLE_DEVICES=0, so that the trainer does not attempt to send data to both GPUs. This prevents the issue, but also prevents the system from using both GPUs. The computer shut off before producing any meaningful log message. It's very odd that the system is struggling with such a small load from a simple test script. Any help in diagnosing this problem would be very much appreciated.
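For reference, a minimal sketch of the single-GPU workaround mentioned above, done from inside the script rather than the shell (the environment variable has to be set before CUDA is initialized):

```python
import os

# Hide the second GPU before torch/transformers touch CUDA
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

print(torch.cuda.device_count())  # should report 1, so the Trainer won't wrap the model in DataParallel
```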
10-05-2021 02:15:20
10-05-2021 02:15:20
It is incredibly unlikely that this is caused directly by `transformers` code, or any specific code for that matter. It is much more likely that this is a hardware issue. First thing that comes to mind is that your PSU cannot handle the spike when initializing, or that you have not used separate V lanes for each GPU.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,869
closed
'list' object has no attribute 'tolist error while using 'deepset/xlm-roberta-large-squad2'
Hi there, I am using 'deepset/xlm-roberta-large-squad2' for Question Answering prediction on my own dataset using transformers QuestionAnsweringPipeline. I get the following error and I can't handle it as it has something to do with 'p_mask' in preprocess(self, example, padding, doc_stride, max_question_len, max_seq_len) in transformers/pipelines/question_answering.py. The error that was thrown is: AttributeError Traceback (most recent call last) in () 7 for i in range(len(qa_df)): ----> 9 prediction = QA_obj.predict_answers(qa_df.loc[i,'Question'], qa_df.loc[i,'ID'], [qa_df.loc[i,'Passage']]) 10 all_predictions.append(prediction) 11 4 frames /usr/local/lib/python3.7/dist-packages/transformers/pipelines/question_answering.py in preprocess(self, example, padding, doc_stride, max_question_len, max_seq_len) 314 attention_mask=attention_mask_span_idx, 315 token_type_ids=token_type_ids_span_idx, --> 316 p_mask=p_mask[span_idx].tolist(), 317 encoding=encoded_inputs[span_idx], 318 # We don't use the rest of the values - and actually AttributeError: 'list' object has no attribute 'tolist' Any help would be much appreciated. Thanks.
10-05-2021 00:05:20
10-05-2021 00:05:20
Probably the same error as #13811, cc'ing @Narsil <|||||>Should be fixed by https://github.com/huggingface/transformers/pull/13873, can you confirm?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,868
closed
Can't use add_prefix_space in RobertaTokenizerFast
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.2 - Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.31 - Python version: 3.9.1 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> - tokenizers: @LysandreJik - Documentation: @sgugger ## Information Model I am using: RobertaTokenizerFast The problem arises when using RobertaTokenizerFast with add_prefix_space: ``` tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base') inputs = tokenizer("This is a sentence.", add_prefix_space=True) ``` it gives me error: `TypeError: _batch_encode_plus() got an unexpected keyword argument 'add_prefix_space'` However, the doc says > You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text I wonder how I can specify this when I call the tokenizer on some text. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Be able to pass in `add_prefix_space=True` after initializing the tokenizer, just like RobertaTokenizer. <!-- A clear and concise description of what you would expect to happen. -->
10-04-2021 20:56:28
10-04-2021 20:56:28
Hi there! Instantiating is different from calling; it means creating the object. So you create the tokenizer: ``` tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', add_prefix_space=True) ```<|||||>Thanks! I see the difference. I'm trying to ask whether we can instantiate the tokenizer without `add_prefix_space=True` and then call it with `add_prefix_space=True`, because the doc seems to say we can do that. And we can also do that for `RobertaTokenizer`.<|||||>No, the latter is not possible. I see what you were saying at first, the doc should indeed be adapted to remove the part "when you call it"
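For completeness, a small sketch contrasting the two instantiations (the exact token strings are indicative):

```python
from transformers import RobertaTokenizerFast

tok_default = RobertaTokenizerFast.from_pretrained("roberta-base")
tok_prefix = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)

# Without the prefix space, the first word is tokenized as if glued to preceding text
print(tok_default.tokenize("This is a sentence."))
# With add_prefix_space=True, the first word gets the leading-space marker (Ġ)
print(tok_prefix.tokenize("This is a sentence."))
```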
transformers
13,867
closed
Add TF notebooks
This PR adds the TF notebooks to the notebooks section of the site
10-04-2021 20:05:39
10-04-2021 20:05:39
transformers
13,866
closed
How to export BertForMaskedLM to ONNX using the ONNX export script?
Where would I add a parameter specifying that I want to convert BertForMaskedLM to ONNX? I found the code below, but it probably exports just the base BERT model and not the one with the MaskedLM head. from pathlib import Path from transformers.convert_graph_to_onnx import convert convert(framework="pt", model="bert-base-cased", output=Path("onnx/bert-base-cased.onnx"), opset=11)
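One possible sketch, assuming the installed version's `convert` accepts a `pipeline_name` argument and that the "fill-mask" pipeline is supported for export (check the signature of `convert_graph_to_onnx.convert` if not):

```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

# pipeline_name selects the task head: "fill-mask" should load BertForMaskedLM
# instead of the bare encoder used by the default "feature-extraction" pipeline.
convert(
    framework="pt",
    model="bert-base-cased",
    output=Path("onnx/bert-base-cased-mlm.onnx"),
    opset=11,
    pipeline_name="fill-mask",
)
```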
10-04-2021 19:46:57
10-04-2021 19:46:57
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,865
closed
Update no_* argument (HfArgumentParser)
Changes the order so that the no_* argument is created after the original argument AND sets the default for this no_* argument to False. Functionality-wise this should not change anything and there should not be any breaking changes. What does change is usability, as it is now less ambiguous to users what an argument's real default is. E.g.: **Before** ``` --no_dataloader_pin_memory Whether or not to pin memory for DataLoader. (default: True) --dataloader_pin_memory [DATALOADER_PIN_MEMORY] Whether or not to pin memory for DataLoader. (default: True) ``` **After** ``` --dataloader_pin_memory [DATALOADER_PIN_MEMORY] Whether or not to pin memory for DataLoader. (default: True) --no_dataloader_pin_memory Whether or not to pin memory for DataLoader. (default: False) ``` Fixes #13847 @sgugger
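A small sketch of how the generated flag pair behaves after this change (the arguments shown are illustrative):

```python
from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)

# The positive flag keeps its True default...
(args,) = parser.parse_args_into_dataclasses(["--output_dir", "out"])
print(args.dataloader_pin_memory)  # True

# ...and the no_* flag flips it off.
(args,) = parser.parse_args_into_dataclasses(["--output_dir", "out", "--no_dataloader_pin_memory"])
print(args.dataloader_pin_memory)  # False
```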
10-04-2021 15:07:51
10-04-2021 15:07:51
transformers
13,864
closed
Update FSNER code in examples->research_projects->fsner
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> - Adds support to pass in variable number of support examples to FSNER model - Removes using hard-coded start_token and end_token ids, instead retrieves from tokenizer ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-04-2021 15:03:37
10-04-2021 15:03:37
transformers
13,863
closed
overflow error when initialising transformerXL model (transfo-xl-wt103)
## Information I don't know if I'm allowed to post this here as I use pytorch-pretrained-bert, but maybe someone knows this error. I want to run the code below: ``` import torch from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLModel, TransfoXLLMHeadModel import logging logging.basicConfig(level=logging.INFO) model = TransfoXLModel.from_pretrained('transfo-xl-wt103') ``` But I get the following error: ``` Traceback (most recent call last): File "load_transfo.py", line 7, in <module> model = TransfoXLModel.from_pretrained('transfo-xl-wt103') File "/home/user123/anaconda3/envs/test_transfo372/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling_transfo_xl.py", line 939, in from_pretrained model = cls(config, *inputs, **kwargs) File "/home/user123/anaconda3/envs/test_transfo372/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling_transfo_xl.py", line 1033, in __init__ div_val=config.div_val) File "/home/user123/anaconda3/envs/test_transfo372/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling_transfo_xl.py", line 780, in __init__ self.emb_layers.append(nn.Embedding(r_idx-l_idx, d_emb_i)) File "/home/user123/anaconda3/envs/test_transfo372/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 100, in __init__ self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim)) RuntimeError: $ Torch: invalid memory size -- maybe an overflow? at /pytorch/aten/src/TH/THGeneral.cpp:188 ``` Does somebody have an idea how to get this to work? When I use newer versions of torch and pytorch-pretrained-bert I just get a different RuntimeError (RuntimeError: Trying to create tensor with negative dimension -200001: [-200001, 16]) ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: I use pytorch-pretrained-bert=0.6.1 - Platform: Debian GNU/Linux 9.13 (stretch) - Python version: 3.7 - PyTorch version (GPU?): 1.0.1 - Tensorflow version (GPU?): not installed Models: @patrickvonplaten
10-04-2021 14:39:13
10-04-2021 14:39:13
Hey @blrtvs, Could you maybe try to use a `transformers` version > 2.0 instead?<|||||>@patrickvonplaten Yes, I tried it and it works so far. But I'm trying to get the LAMA code from Facebook Research working, and it uses pytorch-pretrained-bert. When I use transformers instead, I get many other exceptions due to slightly different syntax and things like that. But I already posted my issue on the LAMA repo, maybe someone will answer. The invalid memory/overflow exception from above is solved, thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,862
closed
Fixing 1-length special tokens cut.
# What does this PR do? Fixes issue where special tokens of length 1 would not be cut. The core of the issue, is that we would check for trie match AFTER moving 2 characters (1 character for first match, and ANOTHER one in the regular branch). The fix does: - Check for termination before moving ahead in the regular branch (cleaner) - Adds another termination check at the end of the string because we might still have dangling states at that point. - Adds a few tests for those test cases directly for Trie tests. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
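To illustrate the failure mode, a minimal check against the `Trie` helper this PR touches (expected output shown for the fixed behavior):

```python
from transformers.tokenization_utils import Trie

trie = Trie()
trie.add("A")  # a special token of length 1

# Before the fix, the single-character token could be skipped over; after it, it is split out.
print(trie.split("ABC"))                  # expected: ["A", "BC"]
print(trie.split("some text A the end"))  # expected: ["some text ", "A", " the end"]
```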
10-04-2021 13:22:37
10-04-2021 13:22:37
I am running slow tokenizer tests on a box just to be sure before merging <|||||>``` ================================================================================================================================ short test summary info ================================================================================================================================= FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_attention_outputs - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_beam_sample_generate - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_beam_sample_generate_dict_output - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_beam_search_generate - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_beam_search_generate_dict_output - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_beam_search_generate_dict_outputs_use_cache - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_correct_missing_keys - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_determinism - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_feed_forward_chunking - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_forward_signature - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_generate_with_head_masking - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_generate_without_input_ids - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_greedy_generate - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_greedy_generate_dict_outputs - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_greedy_generate_dict_outputs_use_cache - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_group_beam_search_generate - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_group_beam_search_generate_dict_output - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_headmasking - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_hidden_states_output - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_initialization - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_inputs_embeds - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_load_with_mismatched_shapes - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_model_common_attributes - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_model_outputs_equivalence - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_problem_types - RuntimeError: CUDA error: out of memory FAILED 
tests/test_tokenization_cpm.py::XLNetModelTest::test_resize_embeddings_untied - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_resize_tokens_embeddings - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_sample_generate - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_sample_generate_dict_output - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_save_load - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_save_load_fast_init_from_base - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_save_load_fast_init_to_base - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_save_load_keys_to_ignore_on_save - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_seq_classification_use_mems_train - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_tie_model_weights - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_torch_fx - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_torch_fx_output_loss - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_torchscript - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_torchscript_output_attentions - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_torchscript_output_hidden_state - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_training - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_training_gradient_checkpointing - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_xlnet_base_model - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_xlnet_base_model_use_mems - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_xlnet_base_model_with_att_output - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_xlnet_lm_head - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_xlnet_qa - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_xlnet_sequence_classif - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_cpm.py::XLNetModelTest::test_xlnet_token_classif - RuntimeError: CUDA error: out of memory FAILED tests/test_tokenization_deberta_v2.py::DebertaV2TokenizationTest::test_tf_encode_plus_sent_to_model - tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory [Op:AddV2] FAILED tests/test_tokenization_fnet.py::FNetTokenizationTest::test_tokenizer_integration - requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/google/fnet-base/revision/58e0d1f96af163dc8d0a84a2fddf4bd403e4e802 FAILED 
tests/test_tokenization_layoutlmv2.py::LayoutLMv2TokenizationTest::test_torch_encode_plus_sent_to_model - TypeError: 'NoneType' object is not iterable FAILED tests/test_tokenization_layoutlmv2.py::LayoutLMv2TokenizationTest::test_torch_encode_plus_sent_to_model - ValueError: too many values to unpack (expected 2) FAILED tests/test_tokenization_roformer.py::RoFormerTokenizationTest::test_saving_tokenizer_trainer - TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType Results (6202.16s): 4427 passed 54 failed - tests/test_modeling_common.py:389 XLNetModelTest.test_attention_outputs - tests/test_generation_utils.py:842 XLNetModelTest.test_beam_sample_generate - tests/test_generation_utils.py:879 XLNetModelTest.test_beam_sample_generate_dict_output - tests/test_generation_utils.py:676 XLNetModelTest.test_beam_search_generate - tests/test_generation_utils.py:732 XLNetModelTest.test_beam_search_generate_dict_output - tests/test_generation_utils.py:788 XLNetModelTest.test_beam_search_generate_dict_outputs_use_cache - tests/test_modeling_common.py:1263 XLNetModelTest.test_correct_missing_keys - tests/test_modeling_common.py:310 XLNetModelTest.test_determinism - tests/test_modeling_common.py:1048 XLNetModelTest.test_feed_forward_chunking - tests/test_modeling_common.py:328 XLNetModelTest.test_forward_signature - tests/test_generation_utils.py:1076 XLNetModelTest.test_generate_with_head_masking - tests/test_generation_utils.py:938 XLNetModelTest.test_generate_without_input_ids - tests/test_generation_utils.py:517 XLNetModelTest.test_greedy_generate - tests/test_generation_utils.py:528 XLNetModelTest.test_greedy_generate_dict_outputs - tests/test_generation_utils.py:557 XLNetModelTest.test_greedy_generate_dict_outputs_use_cache - tests/test_generation_utils.py:957 XLNetModelTest.test_group_beam_search_generate - tests/test_generation_utils.py:1013 XLNetModelTest.test_group_beam_search_generate_dict_output - tests/test_modeling_common.py:712 XLNetModelTest.test_headmasking - tests/test_modeling_common.py:944 XLNetModelTest.test_hidden_states_output - tests/test_modeling_common.py:296 XLNetModelTest.test_initialization - tests/test_modeling_common.py:1395 XLNetModelTest.test_inputs_embeds - tests/test_modeling_common.py:1616 XLNetModelTest.test_load_with_mismatched_shapes - tests/test_modeling_common.py:1253 XLNetModelTest.test_model_common_attributes - tests/test_modeling_common.py:1327 XLNetModelTest.test_model_outputs_equivalence - tests/test_modeling_common.py:1573 XLNetModelTest.test_problem_types - tests/test_modeling_common.py:1202 XLNetModelTest.test_resize_embeddings_untied - tests/test_modeling_common.py:1150 XLNetModelTest.test_resize_tokens_embeddings - tests/test_generation_utils.py:585 XLNetModelTest.test_sample_generate - tests/test_generation_utils.py:630 XLNetModelTest.test_sample_generate_dict_output - tests/test_modeling_common.py:144 XLNetModelTest.test_save_load - tests/test_modeling_common.py:205 XLNetModelTest.test_save_load_fast_init_from_base - tests/test_modeling_common.py:250 XLNetModelTest.test_save_load_fast_init_to_base - tests/test_modeling_common.py:170 XLNetModelTest.test_save_load_keys_to_ignore_on_save - tests/test_modeling_xlnet.py:565 XLNetModelTest.test_seq_classification_use_mems_train - tests/test_modeling_common.py:1279 XLNetModelTest.test_tie_model_weights - tests/test_modeling_common.py:602 XLNetModelTest.test_torch_fx - tests/test_modeling_common.py:606 XLNetModelTest.test_torch_fx_output_loss - tests/test_modeling_common.py:504 
XLNetModelTest.test_torchscript - tests/test_modeling_common.py:509 XLNetModelTest.test_torchscript_output_attentions - tests/test_modeling_common.py:515 XLNetModelTest.test_torchscript_output_hidden_state - tests/test_modeling_common.py:354 XLNetModelTest.test_training - tests/test_modeling_common.py:371 XLNetModelTest.test_training_gradient_checkpointing - tests/test_modeling_xlnet.py:554 XLNetModelTest.test_xlnet_base_model - tests/test_modeling_xlnet.py:559 XLNetModelTest.test_xlnet_base_model_use_mems - tests/test_modeling_xlnet.py:569 XLNetModelTest.test_xlnet_base_model_with_att_output - tests/test_modeling_xlnet.py:574 XLNetModelTest.test_xlnet_lm_head - tests/test_modeling_xlnet.py:589 XLNetModelTest.test_xlnet_qa - tests/test_modeling_xlnet.py:579 XLNetModelTest.test_xlnet_sequence_classif - tests/test_modeling_xlnet.py:584 XLNetModelTest.test_xlnet_token_classif - tests/test_tokenization_common.py:2067 DebertaV2TokenizationTest.test_tf_encode_plus_sent_to_model - tests/test_tokenization_fnet.py:433 FNetTokenizationTest.test_tokenizer_integration - tests/test_tokenization_layoutlmv2.py:1215 LayoutLMv2TokenizationTest.test_torch_encode_plus_sent_to_model - tests/test_tokenization_layoutlmv2.py:1215 LayoutLMv2TokenizationTest.test_torch_encode_plus_sent_to_model - tests/test_tokenization_common.py:3476 RoFormerTokenizationTest.test_saving_tokenizer_trainer ``` None of these seem to be linked to any of this, @sgugger are you ok with merging this ?<|||||>Let's wait for @LysandreJik review if you don't mind.
transformers
13,861
closed
Can I train an Rnd2GPT model through HuggingFace "Encoder-Decoder Model"?
Hi, I would like to train an Rnd2GPT model, whose encoder is a randomly initialized transformer encoder and whose decoder utilizes the pre-trained GPT2 model. I found that HuggingFace's "Encoder-Decoder Model" could implement the architecture of "Bert2Bert" and "Bert2GPT" models. However, my source input is not a sentence that could be represented directly through the Bert model, so it may be better to encode it through a randomly initialized transformer encoder. So, I would like to know how I can achieve the Rnd2GPT model through the Hugging Face "Encoder-Decoder Model"? Very grateful for your help! Thanks!
10-04-2021 12:11:49
10-04-2021 12:11:49
You can initialize an `EncoderDecoderModel` with any autoencoding text encoder and any autoregressive text decoder. These can be randomly initialized, or you can start from pre-trained checkpoints. So yes, it's totally possible to instantiate an `EncoderDecoderModel` with a randomly initialized BERT and a pre-trained GPT-2 model, like so: ``` from transformers import EncoderDecoderModel, BertConfig, BertModel, GPT2LMHeadModel # randomly initialized BERT encoder_config = BertConfig() encoder = BertModel(encoder_config) # pre-trained GPT-2 decoder = GPT2LMHeadModel.from_pretrained("gpt2") model = EncoderDecoderModel(encoder=encoder, decoder=decoder) ```<|||||>> You can initialize an `EncoderDecoderModel` with any autoencoding text encoder and any autoregressive text decoder. These can be randomly initialized, or you can start from pre-trained checkpoints. > > So yes, it's totally possible to instantiate an `EncoderDecoderModel` with a randomly initialized BERT and a pre-trained GPT-2 model, like so: > > ``` > from transformers import EncoderDecoderModel, BertConfig, BertModel, GPT2LMHeadModel > > # randomly initialized BERT > encoder_config = BertConfig() > encoder = BertModel(encoder_config) > > # pre-trained GPT-2 > decoder = GPT2LMHeadModel.from_pretrained("gpt2") > > model = EncoderDecoderModel(encoder=encoder, decoder=decoder) > ``` I see! Thank you so much for your kind reply! It means a lot to me!
transformers
13,860
closed
Speech2Text2 training support
# 🚀 Feature request Training support for Speech2Text2 https://github.com/huggingface/transformers/pull/13186 ## Motivation The Transformers release 4.11 has included the SpeechEncoderDecoder and Speech2Text2 decoder transformer @patrickvonplaten. However, Speech2Text2 does not currently support training. There are some available models at [HuggingFace](https://huggingface.co/models?other=speech2text2), but it would be very interesting for the community to be able to train new languages. I am not sure if it is possible to train other models in fairseq and to export them to huggingface. ## Your contribution I can contribute by testing the code and providing training examples for the documentation
10-04-2021 11:02:24
10-04-2021 11:02:24
Maybe of interest to @patil-suraj too<|||||>Hey @ivangtorre, I've now ported the tokenizer files so that the model can be trained. I'll try to publish a notebook this week that showcases how to fine-tune the `SpeechEncoderDecoder` model class.<|||||>Hi @patrickvonplaten any update on this ? <|||||>My deadline predictions are again very much off - sorry! It is though definitely on my short-term ToDo List! A conservative estimate is by the end of next week now ;-) <|||||>Thanks for the response @patrickvonplaten , I tried writing one script if possible can you please let me know if the overall flow is right? I can compile it into a notebook and share here if that helps https://gist.github.com/arijitx/b853c8bc29c3935e26f69343b32ddb4e#file-s2t_finetune-py<|||||>Hi @patrickvonplaten any update on this ? Sorry for bugging :( <|||||>Script works on a dummy model: https://github.com/huggingface/transformers/pull/14792 - now testing on a real example<|||||>Hope to have it ready by tomorrow<|||||>Hi @patrickvonplaten I tried with your latest commit I'm getting this error ``` Traceback (most recent call last): File "run_seq2seq.py", line 561, in <module> main() File "run_seq2seq.py", line 488, in main processor = AutoProcessor.from_pretrained(training_args.output_dir) File "C:\Users\admin\anaconda3\lib\site-packages\transformers\models\auto\processing_auto.py", line 171, in from_pretrained return PROCESSOR_MAPPING[type(config)].from_pretrained(pretrained_model_name_or_path, **kwargs) File "C:\Users\admin\anaconda3\lib\site-packages\transformers\models\auto\auto_factory.py", line 559, in __getitem__ raise KeyError(key) KeyError: <class 'transformers.models.speech_encoder_decoder.configuration_speech_encoder_decoder.SpeechEncoderDecoderConfig'> ```<|||||>Hey @arijitx, Good observation! With this PR: https://github.com/huggingface/transformers/pull/14881 being merged it should work now. Could you try again? 
:-)<|||||>You will have to re-run this part: ```python from transformers import SpeechEncoderDecoderModel, AutoFeatureExtractor, AutoTokenizer, Wav2Vec2Processor # checkpoints to leverage encoder_id = "facebook/wav2vec2-base" decoder_id = "facebook/bart-base" # load and save speech-encoder-decoder model # set some hyper-parameters for training and evaluation model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True, encoder_feat_proj_dropout=0.0, encoder_layerdrop=0.0, max_length=200, num_beams=5) model.config.decoder_start_token_id = model.decoder.config.bos_token_id model.config.pad_token_id = model.decoder.config.pad_token_id model.config.eos_token_id = model.decoder.config.eos_token_id model.save_pretrained("./") # load and save processor feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id) tokenizer = AutoTokenizer.from_pretrained(decoder_id) processor = Wav2Vec2Processor(feature_extractor, tokenizer) processor.save_pretrained("./") ```<|||||>Hi @patrickvonplaten , I'm running again from a build freshly installed from master getting this error, ``` Traceback (most recent call last): File "run_seq2seq.py", line 594, in <module> main() File "run_seq2seq.py", line 286, in main model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True, encoder_feat_proj_dropout=0.0, encoder_layerdrop=0.0, max_length=200, num_beams=5) File "/XX/lib/python3.6/site-packages/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py", line 394, in from_encoder_decoder_pretrained encoder = AutoModel.from_pretrained(encoder_pretrained_model_name_or_path, *model_args) File "/XX/lib/python3.6/site-packages/transformers/models/auto/auto_factory.py", line 449, in from_pretrained f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" ValueError: Unrecognized configuration class <class 'transformers.models.speech_encoder_decoder.configuration_speech_encoder_decoder.SpeechEncoderDecoderConfig'> for this kind of AutoModel: AutoModel. Model type should be one of ImageGPTConfig, QDQBertConfig, FNetConfig, SegformerConfig, VisionTextDualEncoderConfig, PerceiverConfig, GPTJConfig, LayoutLMv2Config, BeitConfig, RemBertConfig, VisualBertConfig, CanineConfig, RoFormerConfig, CLIPConfig, BigBirdPegasusConfig, DeiTConfig, LukeConfig, DetrConfig, GPTNeoConfig, BigBirdConfig, Speech2TextConfig, ViTConfig, Wav2Vec2Config, M2M100Config, ConvBertConfig, LEDConfig, BlenderbotSmallConfig, RetriBertConfig, IBertConfig, MT5Config, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, BertGenerationConfig, CamembertConfig, XLMRobertaConfig, PegasusConfig, MarianConfig, MBartConfig, MegatronBertConfig, MPNetConfig, BartConfig, BlenderbotConfig, ReformerConfig, LongformerConfig, RobertaConfig, DebertaV2Config, DebertaConfig, FlaubertConfig, FSMTConfig, SqueezeBertConfig, HubertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMProphetNetConfig, ProphetNetConfig, XLMConfig, CTRLConfig, ElectraConfig, FunnelConfig, LxmertConfig, DPRConfig, LayoutLMConfig, TapasConfig, SplinterConfig, SEWDConfig, SEWConfig, UniSpeechSatConfig, UniSpeechConfig, WavLMConfig. ```<|||||>Hey @arijitx, Could you give the full command that you ran (including the code to create the model?). On my side, the example seems to work. Happy to dive a bit deeper into your code if you could make a reproducible error. 
Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,859
closed
Fixing empty prompts for text-generation when BOS exists.
# What does this PR do? Fixes #13774 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-04-2021 10:31:31
10-04-2021 10:31:31
It seems there is an issue with the TensorFlow pipeline tests
transformers
13,858
closed
There is no unified toolkit or model architecture for tabular text in Hugging Face; tabular text in NLP = language + layout
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> In the last few years, a lot of industry deep learning work on tabular text has emerged, such as "CascadeTabNet" and "TableNet". Much of the older work on tabular text was developed around web HTML tables, but there is no unified toolkit or model architecture to accelerate tabular-text research in NLP. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
10-04-2021 10:14:13
10-04-2021 10:14:13
We have support for the TAPAS model in `transformers` - would it help your use-case?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
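For reference, a minimal sketch of querying a table with TAPAS (the checkpoint name is the commonly used WTQ fine-tune; note that TAPAS also requires the torch-scatter package):

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")

# Table cells must be strings
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2.1M", "3.6M"]})
inputs = tokenizer(table=table, queries=["Which city has 3.6M inhabitants?"], return_tensors="pt")
outputs = model(**inputs)  # cell-selection logits plus aggregation logits
```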
transformers
13,857
closed
Update run_qa.py - CorrectTypo
# What does this PR do? Correct Typo - Resolve #13382 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-04-2021 08:56:57
10-04-2021 08:56:57
transformers
13,856
closed
Fixing GPU for token-classification in a better way.
# What does this PR do? We probably need a way to ensure this test is ran ! Superseeds https://github.com/huggingface/transformers/pull/13819 (different fix for same issue) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-04-2021 07:59:58
10-04-2021 07:59:58
transformers
13,855
closed
Fixing Backward compatiblity for zero-shot
Fixes #13846 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-04-2021 07:34:30
10-04-2021 07:34:30
transformers
13,854
closed
[WIP] Add MarianMT to models exportable with ONNX
Resolves #13823 @patil-suraj @michaelbenayoun
10-04-2021 07:08:12
10-04-2021 07:08:12
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,853
closed
Issues with FNet
This is a tracking issue for problems/enhancements needed with FNet. So far, @stefan-it @ontocord have mentioned that: * [ ] `--fp16` precision throws an error with the model. * [ ] It is not possible to keep variable sequence length during training because of the way DFT matrices are initialized (via a config variable). - For this @ontocord suggested that we keep different buffers ready. A major drawback to this is that we can't possibly keep dft matrices for all sequence lengths. However, the values of the DFT matrices vary according to the total sequence length expected. - So, one efficient way of handling this is that we can try to create a generic matrix (not DFT) and then modify it on the fly accordingly based on the sequence length. - For example, the DFT matrix is defined as: ![image](https://user-images.githubusercontent.com/29076344/135800441-2d5f69e8-9231-4ccc-a971-52f60c4059e5.png) - We can create a matrix without the multiplier, and then on the fly take the portion of the matrix that is needed for the batch, and then multiply with the correct multiplier. Wdyt @patrickvonplaten @sgugger? - Or multiply the matrix with `sqrt(N)/sqrt(seq_length)` and take `mat[:seq_length, :seq_length]` while multiplying. * [ ] Need to verify if pushing to GPU pushes everything to the device (including buffers). I will be adding more issues/problems here as and when they arise.
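To make the on-the-fly idea concrete, here is a rough sketch of building the N x N DFT matrix for a given sequence length (a hypothetical helper for illustration, not the FNet implementation itself):

```python
import math
import torch

def dft_matrix(seq_len: int) -> torch.Tensor:
    # W[j, k] = exp(-2*pi*i*j*k / N) / sqrt(N), built for the current batch's length
    n = torch.arange(seq_len, dtype=torch.float32)
    angles = -2.0 * math.pi * torch.outer(n, n) / seq_len
    return torch.complex(torch.cos(angles), torch.sin(angles)) / math.sqrt(seq_len)

# A handful of these could be cached for bucketed lengths (e.g. 64, 128, ..., max_len)
# instead of fixing a single length in the config.
```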
10-04-2021 06:01:35
10-04-2021 06:01:35
I think if PyTorch doesn't support --fp16 for the Fourier transform, we sadly can't do much here. Regarding the second point, I think keeping a buffer would indeed be a good solution here!<|||||>I think you can keep N (maybe 5) fixed buffers, like 64, 128, 256, 512, 1024. You would need to pad inputs up to the chosen buffer size when the buffer is longer than the sequence. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>Should we implement a buffer solution here? @gchhablani - do you think this makes sense? How big would the buffers be in terms of size? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
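To make the buffer idea discussed above concrete, here is a minimal sketch (illustrative only, not the actual FNet implementation; the class name is made up and the buffer sizes are just the ones floated in the comments): precompute a handful of DFT matrices as non-persistent buffers, pick the smallest one that covers the batch, and pad inputs up to it. Registered buffers also follow `model.to(device)`, which touches on the last checklist item.

```python
import math

import torch
from torch import nn


def dft_matrix(n: int) -> torch.Tensor:
    # Dense DFT matrix: W[j, k] = exp(-2*pi*i*j*k / n) / sqrt(n)
    k = torch.arange(n)
    angles = -2 * math.pi * torch.outer(k, k) / n
    return torch.polar(torch.full((n, n), n ** -0.5), angles)


class DftBuffers(nn.Module):
    def __init__(self, sizes=(64, 128, 256, 512, 1024)):
        super().__init__()
        self.sizes = sorted(sizes)
        for n in self.sizes:
            # Non-persistent buffers move with .to(device) but stay out of the state dict.
            self.register_buffer(f"dft_{n}", dft_matrix(n), persistent=False)

    def pick(self, seq_length: int) -> torch.Tensor:
        # Return the smallest precomputed matrix that covers the current sequence
        # length; shorter inputs would be padded up to that size by the caller.
        for n in self.sizes:
            if n >= seq_length:
                return getattr(self, f"dft_{n}")
        raise ValueError(f"Sequence length {seq_length} exceeds the largest buffer ({self.sizes[-1]})")
```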
transformers
13,852
closed
Delete MultiBERTs conversion script
# What does this PR do? This PR deletes the MultiBERTs conversion script, as it is not needed. See the discussion on #13077 for more details. @patrickvonplaten
10-04-2021 05:12:39
10-04-2021 05:12:39
transformers
13,851
closed
Remove a duplicated bullet point in the GPT-J doc
# What does this PR do? Remove a duplicated bullet point in the GPT-J doc. ## Before submitting - [x] This PR fixes a typo or improves the docs. ## Who can review? @sgugger, @patrickvonplaten, @LysandreJik
10-04-2021 04:15:11
10-04-2021 04:15:11
Thanks!
transformers
13,850
closed
Update examples/documentation for torch.distributed.run
Most examples now read the local rank from a CLI argument, `--local_rank`. This ensures they work when using the torch launch utility `torch.distributed.launch`. However, as of recent torch versions, `launch` is deprecated. The newly suggested way to run distributed code from the CLI is `torch.distributed.run`. The difference from before is that instead of automatically passing CLI arguments, it sets the rank **as an environment variable**. The examples therefore need to be updated to read the env variable instead of the CLI argument. See https://pytorch.org/docs/stable/elastic/run.html#launcher-api for more.
10-03-2021 19:02:25
10-03-2021 19:02:25
I missed that the training arguments already have this covered: https://github.com/huggingface/transformers/blob/bcc3f7b6560c1ed427f051107c7755956a27a9f2/src/transformers/training_args.py#L681-L685 Documentation may need an update though (see below).<|||||>Hi @BramVanroy! Let us know if this isn't sufficiently documented so that we may patch the issue<|||||>@LysandreJik I think the code in the TrainingArgs is fine. Documentation may need an update as it refers to `launch` in a couple of places. A quick search gives me mentions of `torch.distributed.launch` in these documents that may need to be replaced by `torch.distributed.run`. You'd also need to check whether `--nproc_per_node` is still an accepted argument for that command because I am not sure. So it is not only a find-and-replace update! - https://huggingface.co/transformers/performance.html - https://huggingface.co/transformers/main_classes/deepspeed.html - https://huggingface.co/transformers/debugging.html<|||||>Maybe of interest to @sgugger <|||||>I wouldn't update the documentation as it won't work for older versions of PyTorch and I would prefer to only put one command vs If your PyTorch version is this then xxx else yyy. Let's wait a bit more :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>My previous comment still stands :-)<|||||>Definitely. I think it's a good idea to keep bumping this to open, so that others can also find this issue easily, and it can serve as a reminder that as soon as `launch` is obsolete, the docs need an update. It's a tiny thing and easy to forget, so an open issue may help to remember as time goes on.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
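For illustration, the compatibility pattern discussed here could look like the sketch below. The argument and variable names are only examples; `LOCAL_RANK` is the environment variable set by `torch.distributed.run` according to the linked docs, and `TrainingArguments` already implements an equivalent fallback.

```python
import argparse
import os

parser = argparse.ArgumentParser()
# torch.distributed.launch passes the rank as a CLI argument ...
parser.add_argument("--local_rank", type=int, default=-1)
args = parser.parse_args()

local_rank = args.local_rank
if local_rank == -1:
    # ... while torch.distributed.run exports it as the LOCAL_RANK environment variable.
    local_rank = int(os.environ.get("LOCAL_RANK", -1))

print(f"local rank: {local_rank}")
```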
transformers
13,849
closed
Unexpected behaviour with past_key_values in GPT2
## Environment info

- `transformers` version: 4.11.2
- Platform: Ubuntu 20.04
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9 GPU
- Tensorflow version (GPU?):
- Using GPU in script?: rtx 3070
- Using distributed or parallel set-up in script?: no

### Who can help

@patrickvonplaten, @LysandreJik

## Information

Model I am using (Bert, XLNet ...): GPT2LMHeadModel, gpt2-medium

When using GPT2LMHeadModel with `past_key_values` as an input, the output logits are different than expected. I expected that when I run inference with the context "Image of a", and when I run inference with the context "a" plus the `past_key_values` of a previous run with the context "Image of", the output logits would be the same, but they are slightly different. I added code below. Is this a bug, or am I missing something?

## To reproduce

Run the code:

```
from transformers.models.gpt2 import GPT2LMHeadModel, GPT2Tokenizer
import numpy as np
import torch

seed = 0
device = "cuda" if torch.cuda.is_available() else "cpu"

# set random seed
torch.manual_seed(seed)
np.random.seed(seed)

lm_tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
lm_model = GPT2LMHeadModel.from_pretrained('gpt2-medium', output_hidden_states=True)
lm_model.to(device)
lm_model.eval()

raw_text = 'Image of a'
tokenized_cond_text = lm_tokenizer.encode(lm_tokenizer.bos_token + raw_text)
context_t = torch.tensor(tokenized_cond_text, device=device, dtype=torch.long)
while len(context_t.shape) < 2:
    context_t = context_t.unsqueeze(0)
output_so_far = context_t

past = lm_model(output_so_far[:, :-1])["past_key_values"]
last = output_so_far[:, -1:]
logits_past = lm_model(last, past_key_values=past)['logits']
logits_normal = lm_model(output_so_far)['logits'][:, -1:, :]
print(logits_normal.mean().cpu().item(), logits_past.mean().cpu().item())
```

My output: `-59.50511169433594 -59.50926971435547`

## Expected behavior

I expected the logits to be the same (and therefore to have the same mean). Am I missing something?
10-03-2021 18:55:50
10-03-2021 18:55:50
Hey @YoadTew - good observation! We consider a tolerance of 1e-3 to be the same, so they are the same in this case :-) To give you a bit more background on why they are slightly different: if we make use of `past_key_values`, then only the last query vector is forwarded through the model. If we don't make use of `past_key_values`, then all query vectors are forwarded, whereas all but the last query vector are masked with -10,000. A softmax of -10,000 is, compared to other values, almost always 0, but not exactly 0, which leads to some small differences between the padded and non-padded forward passes.<|||||>@patrickvonplaten Oh really, I didn't know that. So the masked self-attention is not perfect, and that's the reason for the difference. The reason I noticed it is that the slight difference resulted in slightly different text generation. Thanks for the response.<|||||>I just ran into a similar issue which seems to be related to this. I ran the above setup myself and discovered that the error compounds over multiple iterations and the difference becomes quite significant:

```python
from transformers.models.gpt2 import GPT2LMHeadModel, GPT2Tokenizer
import numpy as np
import torch

# set random seed
seed = 0
torch.manual_seed(seed)
np.random.seed(seed)

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium', output_hidden_states=True)
model.eval()

tokens = tokenizer.encode(tokenizer.bos_token + 'My name is')
x = torch.tensor(tokens).unsqueeze(0)

with torch.no_grad():
    out = model(x)
    x_ = out.logits[:, -1].argmax(-1).long().unsqueeze(1)
    print(tokenizer.decode(x_.item()))

    for i in range(16):
        x = torch.cat([x[:, :-1], x_], dim=1)
        out = model(x_, past_key_values=out.past_key_values)
        logits1 = out.logits[:, -1]
        logits2 = model(x).logits[:, -1]
        print(logits1.mean().item(), logits2.mean().item())
        x1, x2 = logits1.argmax(-1), logits2.argmax(-1)
        t1, t2 = tokenizer.decode(x1.item()), tokenizer.decode(x2.item())
        print(t1, t2)
        assert x1 == x2, f"predictions weren't equal: '{t1}' and '{t2}'"
        x_ = x1.long().unsqueeze(1)
```

Running this snippet gives the following output before failing the assertion:

```
 John
-43.53556823730469 -48.440670013427734
 , ,
-38.2783088684082 -59.84267807006836
 and and
-39.929866790771484 -71.5892562866211
 I the
```

So not only does the difference in logits become quite large, eventually the output will be different as well, which is causing issues in my personal project. Is there a reason for masking the query vectors with -10'000 instead of -infinity? Or am I doing something wrong that's causing this non-deterministic behavior?<|||||>The reason we picked -10,000 is that it works fine with `fp16`. Does changing `-10,000` to `-inf` solve your problem?<|||||>Thank you for the followup! Is there a quick way to modify the attention masking, or do I need to edit the Hugging Face source code?<|||||>I'm afraid you probably have to edit the Hugging Face source code at the moment... Out of curiosity - what is the use case that requires `use_cache=True` to give exactly the same outputs as `use_cache=False`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
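As a small addition to the tolerance point above, approximate equality can be asserted directly in the reproduction snippet (variable names taken from the original code; the `1e-3` tolerance mirrors the maintainer's comment):

```python
import torch

# The cached and non-cached forward passes only agree up to numerical noise,
# so compare with a tolerance rather than exact equality.
assert torch.allclose(logits_normal, logits_past, atol=1e-3)
print((logits_normal - logits_past).abs().max())
```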
transformers
13,848
closed
Fix broken link to distill models in docs
# What does this PR do?

Fixes a broken link to the distill models in the docs.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).

## Who can review?

@sgugger @VictorSanh
10-03-2021 12:45:26
10-03-2021 12:45:26
Done<|||||>Thanks a lot!
transformers
13,847
closed
Default arguments of clm example are confusing
I was having a look at the `run_clm.py` script to see which new arguments are available to push to the hub.

```sh
python transformers\examples\pytorch\language-modeling\run_clm.py -h
```

I see the following options (note the `True` defaults for all):

```
--no_keep_linebreaks                              Whether to keep line breaks when using TXT files or not. (default: True)
--keep_linebreaks [KEEP_LINEBREAKS]               Whether to keep line breaks when using TXT files or not. (default: True)
--no_dataloader_pin_memory                        Whether or not to pin memory for DataLoader. (default: True)
--dataloader_pin_memory [DATALOADER_PIN_MEMORY]   Whether or not to pin memory for DataLoader. (default: True)
--no_skip_memory_metrics                          Whether or not to skip adding of memory profiler reports to metrics. (default: True)
--skip_memory_metrics [SKIP_MEMORY_METRICS]       Whether or not to skip adding of memory profiler reports to metrics. (default: True)
```

From this, I cannot figure out what the default behaviour is or what I should change to get the expected behavior. I do not know what the use case is for this, but it seems much better to only keep one of each option. If one of the two for each option is deprecated, that could be added in the description too.

I'm on current master (4.12 dev).

### Who can help
@sgugger, @patil-suraj
10-02-2021 18:04:07
10-02-2021 18:04:07
Unfortunately, since the two arguments are accepted, there is no way for us to automate better documentation of them from the `HfArgumentParser` (if you have ideas, by all means!), so you should rely on the documentation of [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments).<|||||>I went looking for the `no_*` arguments. It seems that they are dynamically generated: https://github.com/huggingface/transformers/blob/3a8de58c5192b620228128430ea52e6eda81c40a/src/transformers/hf_argparser.py#L112-L113 But I do not quite understand the use case for this. If the documentation only shows the version without `no_`, then why do they exist? Having two arguments for a boolean option seems overkill. That being said, I am sure there are reasons for that.

My suggestion to make this more usable would be to negate the default value for the `no_` field. This does not change the default behaviour as far as I tested, and it makes it clear to the user what the default behavior is.

```
import argparse

cparser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
cparser.add_argument("--dataloader_pin_memory", default=True, action="store_true",
                     help="Enable memory pinning for DataLoader")
cparser.add_argument("--no_dataloader_pin_memory", default=False, action="store_false",
                     dest="dataloader_pin_memory",
                     help="Disable memory pinning for DataLoader")
cargs = cparser.parse_args()
print(vars(cargs))
```

Help will look like this (with `False` on the no_ option):

```
optional arguments:
  -h, --help                  show this help message and exit
  --dataloader_pin_memory     Enable memory pinning for DataLoader (default: True)
  --no_dataloader_pin_memory  Disable memory pinning for DataLoader (default: False)
```

Behaviour as before:
- default: {'dataloader_pin_memory': True}
- `--dataloader_pin_memory`: {'dataloader_pin_memory': True}
- `--no_dataloader_pin_memory`: {'dataloader_pin_memory': False}

The "whether or not" in the original help description may also be confusing. Because you generate the second field dynamically, you could go so far as to be consistent with your description and simply do `field_help.replace("Enable", "Disable")`.<|||||>Like I said, the `no-` arguments are automagically generated by the `HfArgumentParser`. We can't remove them without creating a breaking change. At the same time, there is no point in adding the `no-` argument to the `TrainingArguments` class (or other dataclasses), which can also be used as is in a notebook.<|||||>I think you misunderstood my reply. I am suggesting to change this default `True`: https://github.com/huggingface/transformers/blob/3a8de58c5192b620228128430ea52e6eda81c40a/src/transformers/hf_argparser.py#L112-L113 into `False`:

```
if field.default is True:
    parser.add_argument(f"--no_{field.name}", default=False, action="store_false", dest=field.name, **kwargs)
```

which, as far as I tested, does not break anything, as the result should be identical. But it has the added bonus that the argparser --help is less ambiguous, as it would show the defaults dataloader_pin_memory: True, no_dataloader_pin_memory: False.<|||||>Let me double check, but that seems like a good change indeed. Thanks for explaining it to me!<|||||>Mmm, actually it looks like changing this `default` to `False` changes the default value in the argparser: I tried to launch the script with and without `--no_dataloader_pin_memory` and printed the value of `training_args.dataloader_pin_memory`. Currently we get False and True respectively (as it should). With the change of default you are suggesting, I always get False.<|||||>The reason that it is False is because of the order of the arguments. The `no_` variant is added to the argparser first (before the actual argument), therefore its default gets precedence down the line. I can make a suggestion in a PR to move things around? That would involve moving this line https://github.com/huggingface/transformers/blob/3a8de58c5192b620228128430ea52e6eda81c40a/src/transformers/hf_argparser.py#L112-L113 to after this line https://github.com/huggingface/transformers/blob/3a8de58c5192b620228128430ea52e6eda81c40a/src/transformers/hf_argparser.py#L146 It is not visually as pleasing to repeat the if-clause, but I'd argue that it could be worth it when documented well enough.<|||||>Oh, the code of HfArgumentParser is not visually pleasing, so that's not a problem ;-) If you can suggest a PR, I'll test on the branch that everything is good with it.
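To illustrate the ordering fix discussed above (a sketch of the idea only, not the final `HfArgumentParser` patch): registering the real flag before its `no_` negation lets the `True` default win for the shared `dest`, while both flags keep working.

```python
import argparse

parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
# The "positive" flag is registered first, so argparse keeps its default (True)
# for the shared dest ...
parser.add_argument("--dataloader_pin_memory", default=True, action="store_true",
                    help="Enable memory pinning for DataLoader")
# ... and the generated negation only flips the value when explicitly passed.
parser.add_argument("--no_dataloader_pin_memory", default=False, action="store_false",
                    dest="dataloader_pin_memory", help="Disable memory pinning for DataLoader")

print(vars(parser.parse_args([])))                              # {'dataloader_pin_memory': True}
print(vars(parser.parse_args(["--no_dataloader_pin_memory"])))  # {'dataloader_pin_memory': False}
```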
transformers
13,846
closed
Zero-shot classification pipeline change in 4.11.2
The latest version of the ZSL classification pipeline appears to give slightly different output for the following test code:

```python
from transformers import pipeline

nlp = pipeline("zero-shot-classification")
print(nlp(["I am happy", "I am sad"], ["negative", "positive"]))
```

Version 4.10.2 output
```
[{'sequence': 'I am happy', 'labels': ['positive', 'negative'], 'scores': [0.9984542727470398, 0.0015457083936780691]}, {'sequence': 'I am sad', 'labels': ['negative', 'positive'], 'scores': [0.9985381364822388, 0.0014618091518059373]}]
```

Version 4.11.2 output
```
[[{'sequence': 'I am happy', 'labels': ['positive', 'negative'], 'scores': [0.9984542727470398, 0.0015457083936780691]}], [{'sequence': 'I am sad', 'labels': ['negative', 'positive'], 'scores': [0.9985381364822388, 0.00146180868614465]}]]
```

It appears that 4.11.2 is adding an extra list around each element. This can be manually processed out, but I also wanted to pass this along in case this wasn't expected new behavior.
10-02-2021 17:59:46
10-02-2021 17:59:46
@Narsil @LysandreJik fyi. Facing a similar issue as well.<|||||>Thanks for letting us know. This is an unintended break in backward compatibility; fortunately it's an easy fix!<|||||>@Narsil Thank you for the quick fix. Any idea when this fix will be released?<|||||>I will let the core maintainers answer that, but first the PR needs to be reviewed.
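Until a fixed release is out, one possible stop-gap (a sketch based only on the outputs shown in the issue) is to unwrap the extra single-element lists manually:

```python
from transformers import pipeline

nlp = pipeline("zero-shot-classification")
results = nlp(["I am happy", "I am sad"], ["negative", "positive"])

# 4.11.x wraps each result in its own single-element list; flatten it back
# to the 4.10.x shape.
results = [r[0] if isinstance(r, list) and len(r) == 1 else r for r in results]
print(results)
```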
transformers
13,845
open
[RFC] Add `modeling_xxx_fusion.py` to support kernel fusion
## Introduction

I am an engineer currently working on 3D model parallelism for transformers. When the tensor model parallelism (https://github.com/huggingface/transformers/pull/13726) is done, I am going to introduce the [kernel fusion](https://stackoverflow.com/questions/53305830/cuda-how-does-kernel-fusion-improve-performance-on-memory-bound-applications-on) feature to transformers.

![image](https://user-images.githubusercontent.com/38183241/135726581-ee305818-c78a-439f-90b4-30cd1edbc1fe.png)

For this, I want to create a new modeling file called `modeling_xxx_fusion.py`. This work is currently being discussed with @stas00 and @RezaYazdaniAminabadi (DeepSpeed team).

## Kernel fusion API

```python
from transformers import BertForMaskedLM

# create model
model = BertForMaskedLM.from_pretrained("bert-base-cased")

# 1. fuse_modules
# `fuse_modules` is function level fusion, it supports a wide variety of models.
# all arguments are `True` as default
model.fuse_modules()

# fuse selective modules
model.fuse_modules(
    word_embedding=True,
    scale_mask_softmax=True,
    layer_norm=True,
    bias_act=True,
    bias_dropout_residual=False,
    cross_entropy=True,
)

# 2. fuse_layers
# `fuse_layers` is block level (attention & mlp) fusion, only a few models are supported.
# argument (`inference`) is `None` -> `not self.training` of `torch.nn.Module` as default.
model.fuse_layers(inference=None)

# fuse layers for inference
model.fuse_layers(inference=True)

# fuse layers for training
model.fuse_layers(inference=False)
```

## Implementation

The internal modules of each model will be re-implemented using the kernel fusion method, and the existing modules will be replaced with the fused modules. The following is an example for `BertOutput(nn.Module)`.

```python
# transformers/models/bert/modeling_bert.py

class BertOutput(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states
```

```python
# transformers/models/bert/modeling_bert_fusion.py

class FusedBertOutput(BertOutput):
    def forward(self, hidden_states, input_tensor):
        hidden_states = hidden_states @ self.dense.weight.t()
        hidden_states = FusedBiasDropoutResidual.apply(hidden_states, self.dense.bias, input_tensor)
        hidden_states = FusedLayerNorm.apply(hidden_states, self.LayerNorm.weight, self.LayerNorm.bias)
        return hidden_states
```

When the user calls the `fuse_modules()` method, the kernel fusion engine finds `BertOutput` and replaces it with `FusedBertOutput`; when the user calls the `fuse_layers()` method, the engine finds `BertLayer` and replaces it with `FusedBertLayer`. This is the method `parallelformers` used to parallelize transformers models flexibly, and `deepspeed` also supports kernel fusion in this way.

However, the current version of `deepspeed` fuses the entire transformer layer, so the supported models are very limited. For example, BigBird requires a random attention mechanism; in this case, random attention must be implemented in the custom CUDA kernel. However, because the number of models is so large, it is impossible to implement them all. So I propose a flexible way to fuse kernels on a per-function basis. This is a strategy of triage. The areas that can be fused perform fusion, and the areas that cannot be fused use torch's default modules.

```python
# kernel_fusion_utils.py

class KernelFusionMixin(object):
    def fuse_modules(...):
        assert self._is_able_to_fuse, "error message"
        ... implementation ...

    def fuse_layers(...):
        assert self._is_able_to_fuse, "error message"
        ... implementation ...
```

```python
# modeling_utils.py

class PreTrainedModel(..., KernelFusionMixin):
    _is_parallelizable = ...
    _is_able_to_fuse = False  # <--- Only models that can be fused have `True`.
```

This is a draft. The API can be changed at any time. I look forward to feedback. I'm going to show you this soon with a framework I'm making. (Like parallelformers, we will pre-open the repositories on our side and merge them into transformers and deepspeed later.)

cc. @Stas00 @RezaYazdaniAminabadi @Sylvain
10-02-2021 17:06:10
10-02-2021 17:06:10
Looks like an awesome plan, @hyunwoongko! So far your RFC looks excellent to me. I'd just suggest `s/_is_able_to_fuse/_is_fusable/` to use the same style as `_is_parallelizable` if that's what we use. But this is a minor details and we can rename easily later.<|||||>> I'd just suggest s/_is_able_to_fuse/_is_fusable/ to use the same style as _is_parallelizable if that's what we use. But this is a minor details and we can rename easily later. @stas00 You're right. It's because I'm not good at English (I didn't know there was a word `fusable`. lol)<|||||>In software we often create new words anyway, so as long as the composite of 2 words makes sense it works for our purposes. Latin-based languages all use a combination of a root with prefx/postfix, so if the word you want is not there already - create one ;)<|||||>## Review of Fused Kernels for transformer - written by Kevin Ko If you find other fused kernels, please let me know here. I'll test and record them. :) ## 1. Module-level Kernels Module-level Kernels are fused kernels for independent operation sets like `scale + mask + softmax` or `bias + dropout + residual`. This is in contrast to Layer-level kernels, which are kernels that fuse the entire transformer layer. Note all kernels must have both forward and backward when they are used for training. 01. **FusedScaleMaskSoftmax (from Megaton-LM)** https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/fused_softmax.py This is a kernel that fuses `scale + masking + softmax` for transformer attention. We tested this kernel, and it performs better than the original HuggingFace Transformers attention method. Note there are some constraints to turn this kernel on. but some of constraints defined in Megatron-LM aren't correct. So we modified some constraints. <p align="center"> <img src="https://user-images.githubusercontent.com/38183241/137086759-c793baa5-4cd7-450b-9176-0b278acbd492.png" width=360> <img src="https://user-images.githubusercontent.com/38183241/137052992-65e5dc57-2a86-4fc5-8c2c-4100852e9558.png" width=360> The left one is a picture when the constraints are satisfied, and the right one is a picture when the constraints are not satisfied. (in this case, the performance is the same with non-fused method) </p> 02. **FusedLayerNorm (from Megatron-LM and NVIDIA Apex)** https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/fused_layer_norm.py This is kernel that fuses all the operations of layer normalization. But when we tested this kernel, it was slower than the original `torch.nn.LayerNorm`. ``` dim 128 Batch Size 2048, Torch: 0.00031060 Apex: 0.00035981 dim 256 Batch Size 2048, Torch: 0.00030215 Apex: 0.00035082 dim 384 Batch Size 2048, Torch: 0.00033065 Apex: 0.00037036 dim 512 Batch Size 2048, Torch: 0.00029822 Apex: 0.00035301 dim 640 Batch Size 2048, Torch: 0.00031614 Apex: 0.00036779 dim 768 Batch Size 2048, Torch: 0.00030238 Apex: 0.00036041 dim 896 Batch Size 2048, Torch: 0.00029817 Apex: 0.00036967 dim 1024 Batch Size 2048, Torch: 0.00030955 Apex: 0.00036211 ``` Therefore, we have decided not to provide this kernel. See https://github.com/pytorch/pytorch/commit/8b87f9a5107e8b3c4f87d5297af698bb55838d81#diff-f12c726e3e8cd2b4768f8984fef27059. 03. **FusedBiasActivation (torch.jit.script)** https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/fused_bias_gelu.py This is kernel that fuses `bias addition + GeLU function`. 
All activation functions are supported because the user can use any activation function, but the speedup occurs only with the GeLU function. (The other activation functions work the same as before). We use [GeLU Fast](https://github.com/huggingface/transformers/blob/master/src/transformers/activations.py#L51) that is faster than the original GeLU implementation by providing all numerical values as they are already computed. 04. **FusedBiasDropout (torch.jit.script)** https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L395 This is kernel that fuses `bias addition + dropout`. 05. **FusedBiasDropoutResidual (torch.jit.script)** https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L395 This is kernel that fuses `bias addition + dropout + residual addition`. <p align="center"> <img src="https://user-images.githubusercontent.com/38183241/136689303-d4a6e9e6-6582-48c6-956b-479a4d969d57.png" width=360> <img src="https://user-images.githubusercontent.com/38183241/136689311-c3a3f22f-8a46-4f8b-95c2-b6505eda7384.png" width=360> The above images are the performance of `FusedGPT2MLP` made by combining `FusedBiasActivation` and `FusedBiasDropout`. These results show that two fused kernels can lead to a significant performance improvement over the original `GPT2Attention`. </p> 06. **FusedSplitHeads & FusedMergeHeads** (torch.jit.script) We tried just in time (JIT) compile the `view + permute + contiguous` performed to split or merge heads in the transformer attention layer, but there was no difference in speed. Probably because it is not the elementwise operations, the performance improvement is expected to be negligible. Therefore, we have decided not to provide this kernel. 07. **FusedCrossEntropy** (from lightseq) https://github.com/bytedance/lightseq/blob/master/lightseq/training/ops/pytorch/cross_entropy_layer.py This is the kernel that fuses `log_softmax + nll_loss`. However, when we tested this kernel, it was about 2 ~ 3 times slower than the original `torch.nn.CrossEntropyLoss`. Therefore, we have decided not to provide this kernel. See https://github.com/bytedance/lightseq/issues/204. ``` CrossEntropyLoss: 0.0004372596740722656 LSCrossEntropyLayer: 0.0010995864868164062 ``` 08. **FusedEmbedding** (from lightseq) https://github.com/bytedance/lightseq/blob/master/lightseq/training/ops/pytorch/transformer_embedding_layer.py This is the kernel that fuses `positional embedding + word embedding`. We were very interested in this kernel, but unfortunately positional embedding only supports sinusoidal method. Almost all models today don't use the sinusoidal method. Therefore, we have decided not to support this kernel. 09. **FusedNoRepeatNGramLogitsProcessor** (from fastseq) https://github.com/microsoft/fastseq/blob/main/fastseq/ops/ngram_repeat_block.py This is the kernel that performs `no repeat ngram blocking` on the GPU when generating text. As a result of the test, there is no significant impact when the text length is short, but it shows a very large performance improvement when the text length is long. So we modified `GenerationMixin` to include this kernel. you'll be able to use this kernel by `model.generate(..., no_repeat_ngram_size=n, fused_no_repeat_ngram_blocking=True)` later. ``` Generation Speed (sec / 500 tokens) non fusion: 10.293807029724121 module fusion: 8.77494215965271 module fusion + fused ngram: 8.045531034469604 layer fusion + fused ngram: 5.359241008758545 ``` I will also review layer-level kernels during this week. 
;)<|||||>@stas00 @siddk I'm going to take a look at functorch this weekend and see how we can combine them to improve performance.<|||||>the tricky part is that it's tied to torch's version, e.g. I had to use pytorch-nightly to get it to work. Otherwise you can only use pt-1.10.0 which is not the latest release (1.10.1 is). In other words this would be quite complex for users to set up. To keep up with details please subscribe to: https://github.com/huggingface/transformers/pull/15264<|||||>@stas00 if you're interested, we have other fused layers in https://github.com/facebookresearch/xformers/tree/main/xformers/triton. The only dependency is triton, which is one pip install away (but limited to Cuda and recent enough GPUs). Just FYI, feel free to discard<|||||>@stas00 We'll be cutting a branch that works with PyTorch 1.11.0, and to be honest, I don't think it'd be that hard to cut a release for 1.10.1 now either. So, I think the issues with user setup are not that difficult to resolve.<|||||>> @stas00 if you're interested, we have other fused layers in https://github.com/facebookresearch/xformers/triton. The only dependency is triton, which is one pip install away (but limited to Cuda and recent enough GPUs). Just FYI, feel free to discard Thank you very much, Benjamin! I will tag @hyunwoongko - who is currently researching various fused kernels for him to see if these fit! He has probably already looked there/adopted some. <|||||>sounds good, Horace - let's then work with pt-nightly for now and then by the time we have something to show to users we will make sure they will have an easy pass to follow. Most likely pt-1.11.0 will be out by that time as you're saying. Thank you! <|||||>This looks really useful. Is there a more recent update on this somewhere?
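As a rough illustration of the module-replacement idea from the RFC body above (a sketch only, not the actual parallelformers or DeepSpeed mechanism; `BertOutput` and `FusedBertOutput` refer to the snippets in the RFC), the fusion engine could swap classes in place, since each fused module is a subclass that only overrides `forward` and adds no new state:

```python
import torch.nn as nn

# Hypothetical registry mapping original classes to their fused subclasses,
# e.g. the BertOutput / FusedBertOutput pair shown in the RFC body.
FUSION_MAPPING = {BertOutput: FusedBertOutput}

def fuse_modules(model: nn.Module) -> nn.Module:
    for child in model.modules():
        fused_cls = FUSION_MAPPING.get(type(child))
        if fused_cls is not None:
            # The fused class adds no attributes of its own, so swapping the class
            # in place re-routes forward() while leaving the weights untouched.
            child.__class__ = fused_cls
    return model
```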
transformers
13,844
open
How to tokenize big dataset
Based on the examples, I am trying to train a tokenizer and a model for T5. I use Google Colab Pro. When I try to run the following code:

```
import datasets
from t5_tokenizer_model import SentencePieceUnigramTokenizer

vocab_size = 32_000
input_sentence_size = None  # change to 100_000 works

# Initialize a dataset
dataset = datasets.load_dataset("oscar", name="unshuffled_deduplicated_fa", split="train")
tokenizer = SentencePieceUnigramTokenizer(unk_token="<unk>", eos_token="</s>", pad_token="<pad>")
print("len dataset:", len(dataset))

# Build an iterator over this dataset
def batch_iterator(input_sentence_size=None):
    if input_sentence_size is None:
        input_sentence_size = len(dataset)
    batch_length = 100
    for i in range(0, input_sentence_size, batch_length):
        yield dataset[i: i + batch_length]["text"]

# Train tokenizer
tokenizer.train_from_iterator(
    iterator=batch_iterator(input_sentence_size=input_sentence_size),
    vocab_size=vocab_size,
    show_progress=True,
)

# Save files to disk
tokenizer.save("/content/drive/MyDrive/Pouramini/tokenizer.json")
```

it gets stuck in `train_from_iterator` because the dataset is large (`input_sentence_size` is around 8M sentences).

How can I divide the dataset, run the code on each block, and then merge the results into one tokenizer output?
10-02-2021 12:58:11
10-02-2021 12:58:11
There is no implementation of anything like that yet - but thanks for the request, that's an interesting feature request. cc @SaulLu @sgugger <|||||>@LysandreJik I believe there is. You can check out the guide here: https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling#train-tokenizer This guide does exactly what's being requested: it splits up the dataset in batches, such that the tokenizer can be trained properly.<|||||>This is exactly what is done in the code sample above as well. I'm not too sure I understand what the feature request here is: the training does not get stuck, it just takes a long time to finish. There are no progress bars in notebooks, which is a feature you can request on Tokenizers. At some point, training a tokenizer on such a large dataset in Colab is counter-productive; this environment is not appropriate for CPU-intensive work like this. You should spin up a CPU instance (those are very cheap) to train your tokenizer, then upload the result to the Hub to re-use it once you are ready to train.<|||||>Thanks. Anyway, I trained the tokenizer, and the batch iterator could be a solution; however, as @sgugger pointed out, with no progress bar I was left doubtful about what was happening! But what about the dataset? I meant this line:
```
dataset = load_dataset("oscar", "unshuffled_deduplicated_no", split="train")
```
Does it load the whole dataset in memory and then call the batch iterator on it? If yes, then a batch iterator that reads a dataset from a file could remove the need for large memory.<|||||>A related issue with the same example code is this new one: https://github.com/huggingface/transformers/issues/13878#issue-1016415036 I would be very grateful if someone answers me.<|||||>No, the Datasets library never loads the samples unless you request them, using Apache Arrow behind the scenes (you can read more in the [documentation](https://huggingface.co/docs/datasets/about_arrow.html)). Using the batch iterator as you did will never load the full dataset in memory.
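Since the missing progress bar was part of the confusion here, one small tweak (illustrative only; `dataset`, `tokenizer` and the batch size come from the snippet in the issue) is to wrap the batch iterator in `tqdm`:

```python
from tqdm.auto import tqdm

def batch_iterator(batch_length=100):
    # tqdm shows progress while train_from_iterator consumes the batches lazily.
    for i in tqdm(range(0, len(dataset), batch_length)):
        yield dataset[i : i + batch_length]["text"]

tokenizer.train_from_iterator(iterator=batch_iterator(), vocab_size=32_000, show_progress=True)
```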
transformers
13,843
closed
Question: Is that possible to embed a tokenizer into the model for tensorflow serving?
Hello! Sorry if this question is created in the wrong place; I didn't find a "question" template for issues, so if this is not the appropriate place, please point me to where I can ask it.

I am already using a TF BERT model (uncased version) with TensorFlow Serving. I found that I need to modify some inputs to get something like this:

```python
callable = tf.function(self.model.call)
concrete_function = callable.get_concrete_function([
    tf.TensorSpec([None, self.max_input_length], tf.int32, name="input_ids"),
    tf.TensorSpec([None, self.max_input_length], tf.int32, name="attention_mask")
])
self.model.save(save_directory, signatures=concrete_function)
```

I also found the following example (https://github.com/huggingface/blog/blob/master/tf-serving.md), which allows me to change the input signature of a model specifically for serving:

```python
from transformers import TFBertForSequenceClassification
import tensorflow as tf

# Creation of a subclass in order to define a new serving signature
class MyOwnModel(TFBertForSequenceClassification):
    # Decorate the serving method with the new input_signature
    # an input_signature represents the name, the data type and the shape of an expected input
    @tf.function(input_signature=[{
        "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
        "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
        "token_type_ids": tf.TensorSpec((None, None), tf.int32, name="token_type_ids"),
    }])
    def serving(self, inputs):
        # call the model to process the inputs
        output = self.call(inputs)
        test_out = self.serving_output(output)
        # return the formatted output
        return test_out

# Instantiate the model with the new serving method
model = MyOwnModel.from_pretrained("bert-base-cased")
# save it with saved_model=True in order to have a SavedModel version along with the h5 weights.
model.save_pretrained("/tmp/my_model6", saved_model=True)
```

In my current workflow I need Python because I have to prepare the model input with the tokenizer. That means that, for now, I need a REST service that receives the text request and sends it on to the serving instance. After switching to GCP AI Platform, I think it is reasonable and worth trying to embed the tokenizer inside the model and let GCP AI Platform serve it. I made some attempts, and it looks like this is more difficult than it seems.

The goal is to have the model with the tokenizer on GCP AI Platform and get rid of the Python REST API service, because all the other infrastructure is written in Erlang/Rust. I need to supply raw text to the model serving instance (not an object with input_ids, attention_mask, etc.) and get logits, or softmaxed logits, back.

So could someone please tell me whether that is possible and, if it is, provide some guidance on how to achieve it? Thanks. /Dmitriy
10-02-2021 12:15:21
10-02-2021 12:15:21
We generally favor the [forum](https://discuss.huggingface.co) for questions like these - pinging @Rocketknight1, our TF maintainer.<|||||>@LysandreJik thanks, I will move it there immediately.<|||||>Closing this issue and moving the discussion to https://discuss.huggingface.co/t/is-that-possible-to-embed-the-tokenizer-into-the-model-to-have-it-running-on-gcp-using-tensorflow-serving/10532 Thanks a lot!
transformers
13,842
closed
Multi-task learning for MLM and token classification
Hi,

I have a question regarding multi-task learning: I would like to do domain adaptation using the masked language modeling task, but I also want to use token classification as an auxiliary task. For token classification, I want to predict for each token whether it is part of any entity or not (binary prediction). I implemented a custom `BertForMultiTask` class and would like to ask if I am on the right track, as I couldn't find a similar example.

I created this custom class with two heads by following the example of `BertOnlyMLMHead`:

```
class BertMultiTaskHeads(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.predictions = BertLMPredictionHead(config)
        # I add an additional linear layer to do binary prediction
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.binary_predictions = nn.Linear(config.hidden_size, 2)

    def forward(self, sequence_output):
        mlm_scores = self.predictions(sequence_output)
        # additional entity score for binary prediction
        entity_scores = self.binary_predictions(self.dropout(sequence_output))
        return mlm_scores, entity_scores
```

And I created the `BertForMultiTask` class by following `BertForMaskedLM` (I previously created a new `BertMultiTaskOutput` class for returning also the entity prediction loss):

```
class BertForMultiTask(BertPreTrainedModel):
    _keys_to_ignore_on_load_unexpected = [r"pooler"]

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config, add_pooling_layer=False)
        self.cls = BertMultiTaskHeads(config)
        self.init_weights()

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        labels=None,
        tag_labels=None,  # For entity binary prediction, obtained as an additional feature
        output_attentions=None,
        output_hidden_states=None,
        return_dict=None,
        **kwargs
    ):
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]
        prediction_scores, label_scores = self.cls(sequence_output)

        total_loss = None
        if labels is not None and tag_labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
            # additional entity prediction loss
            entity_prediction_loss = loss_fct(label_scores.view(-1, 2), tag_labels.view(-1))
            total_loss = masked_lm_loss + entity_prediction_loss

        if not return_dict:
            output = (prediction_scores, label_scores) + outputs[2:]
            return ((total_loss,) + output) if total_loss is not None else output

        return BertMultiTaskOutput(
            loss=total_loss,
            prediction_logits=prediction_scores,
            entity_prediction_logits=label_scores,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
```

Thank you very much in advance!
10-02-2021 11:34:19
10-02-2021 11:34:19
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Regarding your question: I don't see anything wrong with your implementation! You might be interested in taking a look at the `BertForPreTraining` class, which does both the MLM and NSP tasks, for inspiration. Thanks!<|||||>Thank you for the answer and the insight! And sure, I will do that for those who might have a similar question.
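For readers adapting the snippet above, here is a toy usage sketch. It is hypothetical: `BertForMultiTask`, `BertMultiTaskOutput` and `tag_labels` come from the custom classes defined in the issue body, and the labels below are illustrative rather than a real training setup (MLM labels would normally be -100 on unmasked positions).

```python
import torch
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForMultiTask.from_pretrained("bert-base-cased")  # custom class from the issue body

enc = tokenizer("Paris is nice.", return_tensors="pt")
mlm_labels = enc["input_ids"].clone()            # toy MLM labels
tag_labels = torch.zeros_like(enc["input_ids"])  # 1 = token belongs to an entity
tag_labels[0, 1] = 1                             # mark "Paris"

outputs = model(**enc, labels=mlm_labels, tag_labels=tag_labels)
outputs.loss.backward()                          # summed MLM + entity prediction loss
```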
transformers
13,841
closed
How can one use newly added GPTJ6B model for finetuning?
I want to finetune GPTJ6B for summarization and question answering. Are there any guidelines on how to do that?
10-02-2021 11:16:04
10-02-2021 11:16:04
From the [docs](https://huggingface.co/transformers/master/model_doc/gptj.html):

> The model should fit on 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. Adam optimizer for example makes four copies of the model: model, gradients, average and squared average of the gradients. So it would need at least 4x model size GPU memory, even with mixed precision as gradient updates are in fp32. This is not including the activations and data batches, which would again require some more GPU RAM. So one should explore solutions such as DeepSpeed, to train/fine-tune the model. Another option is to use the original codebase to train/fine-tune the model on TPU and then convert the model to Transformers format for inference. Instructions for that could be found [here](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/howto_finetune.md).<|||||>@NielsRogge The problem is that the original repo uses TPU, so one needs Cloud Storage (not feasible for many people). That's why I was trying to do it with the huggingface repo. About the RAM requirement in the section you pointed out: isn't it possible to run it in Colab Pro? If yes, I just want to fine-tune on the QA and summarization tasks separately. I am looking for some concrete code example of how to get started with that. <|||||>Unfortunately, Colab Pro is not enough, as you need at least 108 GB of GPU RAM (this is when using DeepSpeed). You can estimate the memory required using the [following utility](https://deepspeed.readthedocs.io/en/stable/memory.html):

```
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("EleutherAI/gpt-j-6B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'

Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 5844M total params, 206M largest layer params.
  per CPU  |  per GPU |   Options
  146.96GB |   0.77GB | cpu_offload=1, cpu_offload_params=1, zero_init=1
  146.96GB |   0.77GB | cpu_offload=1, cpu_offload_params=1, zero_init=0
  130.63GB |  11.66GB | cpu_offload=1, cpu_offload_params=0, zero_init=1
  130.63GB |  11.66GB | cpu_offload=1, cpu_offload_params=0, zero_init=0
    1.15GB |  98.74GB | cpu_offload=0, cpu_offload_params=0, zero_init=1
   32.66GB |  98.74GB | cpu_offload=0, cpu_offload_params=0, zero_init=0
```<|||||>@NielsRogge Sorry to get back to this a bit later. So is there any resource one can use to fine-tune it for QA and summarization? (Will Colab Pro+ be enough?)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,840
closed
Multilingual T5 is not adding new tokens
## Environment info

- `transformers` version: 4.11.2
- Platform: Google Colab
- Python version: 3.7.12
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.6.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help

I am using the mT5 model. The problem arose when I tried to resize the token embeddings: I added new special tokens to my tokenizer's vocabulary, and when I resized the model's token embeddings according to the length of the tokenizer, I ran into the issue.

Link: https://colab.research.google.com/drive/1ooKa5aQ_FAEnicxBL8bJnNAO4H9vFuv-?usp=sharing

Code from the notebook:

    !pip install transformers
    !pip install sentencepiece

    from transformers import TFMT5ForConditionalGeneration, T5Tokenizer

    model = TFMT5ForConditionalGeneration.from_pretrained("google/mt5-base")
    tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
    tokenizer.add_special_tokens({'bos_token':'','eos_token':''})
    model._resize_token_embeddings(len(tokenizer))

The error message is: `ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.`

## To reproduce

Steps to reproduce the behavior:
1. You can directly run the code and check the error coming from the last line.
2. You can also go to the colab notebook and have a look at the error there.
10-02-2021 06:21:22
10-02-2021 06:21:22
related #13840<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,839
closed
TF mT5 model is not adding new tokens into it's vocabulary.
## Environment info

- `transformers` version: 4.11.2
- Platform: Google Colab
- Python version: 3.7.12
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.6.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help

@patrickvonplaten, @patil-suraj

## Information

Model I am using: mT5. The problem arose when I tried to resize the token embeddings: I added new special tokens to my tokenizer's vocabulary, and when I resized the model's token embeddings according to the length of the tokenizer, I ran into the issue.

Link for the notebook: https://colab.research.google.com/drive/1ooKa5aQ_FAEnicxBL8bJnNAO4H9vFuv-?usp=sharing

Code from the notebook:

    !pip install transformers
    !pip install sentencepiece

    from transformers import TFMT5ForConditionalGeneration, T5Tokenizer

    model = TFMT5ForConditionalGeneration.from_pretrained("google/mt5-base")
    tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
    tokenizer.add_special_tokens({'bos_token':'<bos>','eos_token':'<eos>'})
    model._resize_token_embeddings(len(tokenizer))

The error message is: `ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.`

## To reproduce

Steps to reproduce the behavior:
1. You can directly run the code and check the error coming from the last line.
2. You can also go to the colab notebook and have a look at the error there.
10-02-2021 05:52:49
10-02-2021 05:52:49
This seems to be an issue with the TF model, cc @Rocketknight1 @patrickvonplaten. It seems there are no extra tokens in the mT5 tokenizer, and there's a mismatch between `tokenizer.vocab_size` and `config.vocab_size`. Is this a known issue?<|||||>Actually, everything looks fine to me here... TFMT5Model and MT5Tokenizer have a vocabulary-size mismatch for the same reason as T5, see: https://github.com/huggingface/transformers/issues/4875 Apart from this, the following code works:
```python
from transformers import TFMT5ForConditionalGeneration, T5Tokenizer

model = TFMT5ForConditionalGeneration.from_pretrained("google/mt5-base")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
tokenizer.add_special_tokens({'bos_token':'','eos_token':''})
model.resize_token_embeddings(len(tokenizer))
```
@laddhakrishna - is there any reason you used `model._resize_token_embeddings(...)` instead of `model.resize_token_embeddings(...)`?<|||||>Sir, even if we use `model.resize_token_embeddings(...)`, we are still getting the same error. What can we do?<|||||>Hey @laddhakrishna, could you provide a Google Colab that reproduces the error with `model.resize_token_embeddings(...)`? I didn't manage to reproduce the error. Thanks!<|||||>Colab notebook link: https://colab.research.google.com/drive/1xSB7XlIgA7PrGTUqThZl-rkBknxLYc87?usp=sharing Please have a look at it sir. Thanks!<|||||>Hey @laddhakrishna, Thanks for the colab, I can reproduce!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe @Rocketknight1 or @gante can take a look?
transformers
13,838
closed
AttributeError: 'TFDistilBertForSequenceClassification' object has no attribute 'to'
## Environment info

- `transformers` version: 4.11.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: PARALLEL
- https://huggingface.co/transformers/custom_datasets.html

### Who can help

- trainer: @sgugger

## Information

Model I am using: TFDistilBertForSequenceClassification

The problem arises when using:
* [ ] the official example scripts: https://colab.research.google.com/github/huggingface/notebooks/blob/master/transformers_doc/custom_datasets.ipynb

## To reproduce

Steps to reproduce the behavior:

1. Run the TensorFlow cells.
2. See `AttributeError: 'TFDistilBertForSequenceClassification' object has no attribute 'to'` when training.

## Expected behavior

I expect `model.to()` to work in the cell doing the training. Excuse me for not providing a stack trace; I am trying to re-run the Colab, but Stanford is not serving the IMDB dataset at the moment.
10-02-2021 04:04:00
10-02-2021 04:04:00
That notebook has a PyTorch and a TensorFlow version. If you take the mixed one, you will need to execute only the relevant cells.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
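For readers landing here, a minimal sketch of the TensorFlow path, where training goes through Keras rather than moving the model with `.to()`; the checkpoint, optimizer settings, and the `train_dataset` name are illustrative assumptions, not taken from the notebook:

```python
# Sketch only: TF models have no `.to()`; device placement is handled by TensorFlow,
# and training uses the Keras API instead of a manual PyTorch-style loop.
import tensorflow as tf
from transformers import TFDistilBertForSequenceClassification

model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# model.fit(train_dataset, epochs=3)  # train_dataset: a hypothetical tf.data.Dataset of (features, labels)
```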
transformers
13,837
closed
Onnx support for Zero Shot Classification Pipeline
Hi,

Thanks for creating this library. I am using the zero-shot classification pipeline for use cases that run over millions of sentences, and I currently use a GPU for faster performance.

ONNX export currently supports the following pipelines:

```python
SUPPORTED_PIPELINES = [
    "feature-extraction",
    "ner",
    "sentiment-analysis",
    "fill-mask",
    "question-answering",
    "text-generation",
    "translation_en_to_fr",
    "translation_en_to_de",
    "translation_en_to_ro",
]
```

It would be helpful if I could use ONNX Runtime to deploy the zero-shot classification pipeline as well, hence this request for support.
10-02-2021 02:29:37
10-02-2021 02:29:37
Hello @subhamkhemka, we're moving away from pipeline support in ONNX to support a more easily extendable setup relying on the models themselves. Could you take a look at the new approach and let us know if it works for you? https://huggingface.co/transformers/serialization.html#configuration-based-approach<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik Zero-shot classification is a little bit tricky; is there an example of an ONNX configuration for zero-shot classification? Thanks
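For illustration, a rough sketch of how zero-shot classification can be run on top of an NLI checkpoint with ONNX Runtime; the `model.onnx` path, the input names, and the entailment index are assumptions to adapt to whatever you actually export, not an official recipe:

```python
# Zero-shot classification reduced to NLI scoring against an ONNX-exported checkpoint.
# Assumes the NLI model has already been exported to "model.onnx".
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
session = ort.InferenceSession("model.onnx")

premise = "The new phone ships with a two-day battery."
labels = ["technology", "sports", "politics"]

scores = {}
for label in labels:
    hypothesis = f"This example is {label}."
    inputs = tokenizer(premise, hypothesis, return_tensors="np")
    logits = session.run(None, dict(inputs))[0][0]
    probs = np.exp(logits) / np.exp(logits).sum()
    scores[label] = float(probs[-1])  # entailment is usually the last class for MNLI heads; verify via config.id2label

print(scores)
```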
transformers
13,836
closed
Improve error message when loading models from Hub
# What does this PR do?

Improves the error message when loading a model from the Hub that has the same name as a local directory.

I encountered an issue where I couldn't load `gpt2` from the Hub, and it turned out to be because there was a local directory named `gpt2` that was not the saved model. I do not seem to be the only person who has run into this, as can be seen in <https://github.com/huggingface/transformers/issues/8661>, <https://github.com/huggingface/transformers/issues/10918#issuecomment-877807753>, and <https://github.com/huggingface/transformers/issues/13073>. I therefore decided to improve the error message to make the problem easier to catch. I made the change everywhere in the codebase I could find.

I'm not entirely satisfied with the new wording, but completely describing the name-collision problem in a short message is difficult. I'd appreciate any suggestions. One thought I had was to swap the order of the first two bullets to make it clearer that local files and directories take priority, but I wasn't sure whether that would really help.

## Who can review?

- tokenizers: @n1t0, @LysandreJik
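For context, a minimal reproduction of the collision described above (the empty `gpt2` directory is the only assumption):

```python
# Reproduces the name collision: a local directory shadows the Hub model id.
import os
from transformers import AutoModel

os.makedirs("gpt2", exist_ok=True)         # unrelated local directory named like a Hub model
model = AutoModel.from_pretrained("gpt2")  # resolves to ./gpt2 first and fails because there is no config.json inside
```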
10-01-2021 22:07:17
10-01-2021 22:07:17
@sgugger, I wasn't fond of the message either, so I applied your new wording to all the files. Thanks for the suggestion!
transformers
13,835
closed
Caching strategy for tokenizers to help achieve 1.46 ms inference on BERT QA on CPU
# 🚀 Feature request

I managed to achieve 1.46 ms inference for BERT QA (bert-large-uncased-whole-word-masking-finetuned-squad) on a laptop with an Intel i7-10875H on RedisAI during the RedisLabs RedisConf 2021 hackathon (finished 15th May), with CPU-only inference. [Blog write-up](https://alexmikhalev.medium.com/bert-models-with-millisecond-inference-the-pattern-project-409b2880524d). The current deployment heavily relies on Redis, as you can imagine, since it was built during the RedisConf hackathon, but I believe the learnings can be incorporated into the core transformers library and speed up inference for NLP tasks.

## Motivation

For NLP tasks like BERT QA inference, it's not trivial to load a GPU to over 50%: tokenization is normally CPU-intensive, while inference can be GPU-intensive. It's very easy to cache summarization responses, but QA requires user input and more fiddling.

## Proposal

In summary, as a starting point: incorporate a cache as a first-class citizen for all tokenisers, probably via decorators (a minimal sketch of what this could look like is included at the end of this request). Redis (and Redis Cluster) is a good candidate for the cache, with built-in sharding; a Redis node in cluster configuration occupies 20 MB of RAM. I used the additional modules RedisGears (distributed compute) and RedisAI (to store tensors and run inference with PyTorch).

Implementation details: this is part of the QA search API of the "medium complexity project" [The Pattern](http://thepattern.digital/); full [code](https://github.com/applied-knowledge-systems/the-pattern).

1. Convert and pre-load BERT models on each shard of the Redis Cluster ([code](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/qasearch/export_load_bert.py)).
2. Pre-tokenise all potential answers using RedisGears and distribute them on each shard of the Redis Cluster (code for the [batch](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/qasearch/tokeniser_gears_redisai.py) and [event-based](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/qasearch/tokeniser_gears_redisai_register.py) RedisGears functions).
3. Amend the calling API to direct the question query to the shard with the most likely answers. [Code](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/app.py#L238). The call uses graph-based ranking and `zrangebyscore` to find the highest-ranked sentences in response to the question, then gets the relevant hashtag from the sentence key.
4. Tokenise the question. [Code](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/qasearch/qa_redisai_gear_map_keymiss_np.py#L51). Tokenisation happens on the shard and uses the RedisGears and RedisAI integration via `import redisAI`.
5. Concatenate the user question and the pre-tokenised potential answers. [Code](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/qasearch/qa_redisai_gear_map_keymiss_np.py#L62)
6. Run inference using RedisAI. [Code](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/qasearch/qa_redisai_gear_map_keymiss_np.py#L70). The model runs in async mode without blocking the main Redis thread, so the shard can still serve users.
7. Select the answer using the max score and convert tokens back to words. [Code](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/qasearch/qa_redisai_gear_map_keymiss_np.py#L75)
8. Cache the answer using Redis: the next hit on the API with the same question returns the answer in nanoseconds. [Code](https://github.com/applied-knowledge-systems/the-pattern-api/blob/9387a1a9c7c07af108f968cb42bc8a75b587479b/qasearch/qa_redisai_gear_map_keymiss_np.py#L21). This function uses the 'keymiss' event.

The code was working in May and uses transformers==4.4.2. Even with all the casting into NumPy arrays for RedisAI inference, it was running under 2 ms on a Clevo laptop with an Intel(R) Core(TM) i7-10875H CPU @ 2.30GHz, 64 GB RAM, and an SSD. When an API call hits the cache, the response time is under 600 ns.
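To make the decorator idea above concrete, here is a minimal sketch of a cached tokenizer call; the Redis connection, key scheme, and pickle serialization are illustrative choices, not the implementation linked above:

```python
# Sketch of a decorator-style cache around tokenizer calls.
# Assumes a local Redis server; adapt the key scheme and serialization to your setup.
import hashlib
import pickle

import redis
from transformers import AutoTokenizer

cache = redis.Redis(host="localhost", port=6379)

def cached_tokenize(tokenize_fn):
    def wrapper(text, **kwargs):
        key = "tok:" + hashlib.sha1(repr((text, sorted(kwargs.items()))).encode()).hexdigest()
        hit = cache.get(key)
        if hit is not None:
            return pickle.loads(hit)  # cache hit: skip tokenization entirely
        encoding = tokenize_fn(text, **kwargs)
        cache.set(key, pickle.dumps(dict(encoding)))  # store a plain dict of the encoding
        return encoding
    return wrapper

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
tokenize = cached_tokenize(tokenizer)  # wraps the tokenizer's __call__

encoded = tokenize("Who played Rosco P. Coltrane?", truncation=True, max_length=384)
```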
10-01-2021 20:46:58
10-01-2021 20:46:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Numbers for illustration: Pure transformers BERT QA i7 Laptop CPU: 10.871693504 seconds single run (inference) vs 398.41 requests per second. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.