| column | dtype | range / values |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
1,789
closed
BertForMultipleChoice QuickTour issue with weights?
## 🐛 Bug <!-- Important information --> Model I am using (BertForMultipleChoice): Language I am using the model on (English.): The problem arise when using: * [x] the official example scripts: Arises when running through the last piece of example code found here: https://github.com/huggingface/transformers#quick-tour * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. Run the sample code from the last loop of the Quick Tour 2. Need to supply directory names, see my code (below) for how I did that... 3. When BertForMultipleChoice runs the line `reshaped_logits = logits.view(-1, num_choices)` in modeling_bert.py we get a runtime error `RuntimeError: shape '[-1, 16]' is invalid for input of size 1` <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ``` # Each architecture is provided with several class for fine-tuning on down-stream tasks, e.g. BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction, BertForSequenceClassification, BertForMultipleChoice, BertForTokenClassification, BertForQuestionAnswering] ``` ``` # All the classes for an architecture can be initiated from pretrained weights for this architecture # Note that additional weights added for fine-tuning are only initialized # and need to be trained on the down-stream task pretrained_weights = 'bert-base-uncased' tokenizer = BertTokenizer.from_pretrained(pretrained_weights) for model_class in BERT_MODEL_CLASSES: print("Processing", model_class.__name__, "...") # Store class name as target directory model_dir_name = model_class.__name__+"/" # Load pretrained model/tokenizer model = model_class.from_pretrained(pretrained_weights) # Models can return full list of hidden-states & attentions weights at each layer model = model_class.from_pretrained(pretrained_weights, output_hidden_states=True, output_attentions=True) input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")]) all_hidden_states, all_attentions = model(input_ids)[-2:] # Models are compatible with Torchscript model = model_class.from_pretrained(pretrained_weights, torchscript=True) traced_model = torch.jit.trace(model, (input_ids,)) save_directory = 'BERT_test/'+ model_dir_name if not os.path.isdir(save_directory): os.mkdir(save_directory) # Simple serialization for models and tokenizers model.save_pretrained(save_directory) # save model = model_class.from_pretrained(save_directory) # re-load tokenizer.save_pretrained(save_directory) # save tokenizer = BertTokenizer.from_pretrained(save_directory) # re-load # SOTA examples for GLUE, SQUAD, text generation... 
``` The error: ``` I1111 20:47:30.383128 21676 modeling_utils.py:383] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin from cache at C:\Users\User\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157 I1111 20:47:33.363116 21676 modeling_utils.py:453] Weights of BertForMultipleChoice not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] I1111 20:47:33.365122 21676 modeling_utils.py:456] Weights from pretrained model not used in BertForMultipleChoice: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-10-da6597bd484d> in <module>() 19 output_attentions=True) 20 input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")]) ---> 21 all_hidden_states, all_attentions = model(input_ids)[-2:] 22 23 # Models are compatible with Torchscript G:\Anaconda3\envs\pytorch1\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) g:\deeplearning\huggingface\transformers\transformers\modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels) 1096 pooled_output = self.dropout(pooled_output) 1097 logits = self.classifier(pooled_output) -> 1098 reshaped_logits = logits.view(-1, num_choices) 1099 1100 outputs = (reshaped_logits,) + outputs[2:] # add hidden states and attention if they are here RuntimeError: shape '[-1, 16]' is invalid for input of size 1 ``` ## Expected behavior Should run without error! <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Windows 10 * Python version: 3.6.6 * PyTorch version: 1.3.0 * PyTorch Transformers version (or branch): 2.1.1 * Using GPU ? Yes * Distributed of parallel setup ? No * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
11-11-2019 10:34:28
11-11-2019 10:34:28
The same error occurs to me too. I write a small check code for BertForMultipleChoice and it works as expected (taken from the [documentation](https://github.com/huggingface/transformers/blob/albert/transformers/modeling_bert.py) - rows 945-951). Here the code I wrote. ``` tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForMultipleChoice.from_pretrained('bert-base-uncased', output_hidden_states=True, output_attentions=True) choices = ["Hello, my dog is cute", "Hello, my cat is amazing"] input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0) # Batch size 1, 2 choices labels = torch.tensor(1).unsqueeze(0) # Batch size 1 outputs = model(input_ids, labels=labels) loss, classification_scores, hidden_states, attentions = outputs ``` > ## Bug > Model I am using (BertForMultipleChoice): > > Language I am using the model on (English.): > > The problem arise when using: > > * [x] the official example scripts: > Arises when running through the last piece of example code found here: > https://github.com/huggingface/transformers#quick-tour > * [ ] my own modified scripts: (give details) > > The tasks I am working on is: > > * [ ] an official GLUE/SQUaD task: (give the name) > * [ ] my own task or dataset: (give details) > > ## To Reproduce > Steps to reproduce the behavior: > > 1. Run the sample code from the last loop of the Quick Tour > 2. Need to supply directory names, see my code (below) for how I did that... > 3. When BertForMultipleChoice runs the line `reshaped_logits = logits.view(-1, num_choices)` in modeling_bert.py we get a runtime error `RuntimeError: shape '[-1, 16]' is invalid for input of size 1` > > ``` > # Each architecture is provided with several class for fine-tuning on down-stream tasks, e.g. 
> BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction, > BertForSequenceClassification, BertForMultipleChoice, BertForTokenClassification, > BertForQuestionAnswering] > ``` > > ``` > # All the classes for an architecture can be initiated from pretrained weights for this architecture > # Note that additional weights added for fine-tuning are only initialized > # and need to be trained on the down-stream task > pretrained_weights = 'bert-base-uncased' > tokenizer = BertTokenizer.from_pretrained(pretrained_weights) > for model_class in BERT_MODEL_CLASSES: > > print("Processing", model_class.__name__, "...") > > # Store class name as target directory > model_dir_name = model_class.__name__+"/" > > # Load pretrained model/tokenizer > model = model_class.from_pretrained(pretrained_weights) > > # Models can return full list of hidden-states & attentions weights at each layer > model = model_class.from_pretrained(pretrained_weights, > output_hidden_states=True, > output_attentions=True) > input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")]) > all_hidden_states, all_attentions = model(input_ids)[-2:] > > # Models are compatible with Torchscript > model = model_class.from_pretrained(pretrained_weights, torchscript=True) > traced_model = torch.jit.trace(model, (input_ids,)) > > save_directory = 'BERT_test/'+ model_dir_name > if not os.path.isdir(save_directory): > os.mkdir(save_directory) > > # Simple serialization for models and tokenizers > model.save_pretrained(save_directory) # save > model = model_class.from_pretrained(save_directory) # re-load > tokenizer.save_pretrained(save_directory) # save > tokenizer = BertTokenizer.from_pretrained(save_directory) # re-load > > # SOTA examples for GLUE, SQUAD, text generation... 
> ``` > > The error: > > ``` > I1111 20:47:30.383128 21676 modeling_utils.py:383] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin from cache at C:\Users\User\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157 > I1111 20:47:33.363116 21676 modeling_utils.py:453] Weights of BertForMultipleChoice not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] > I1111 20:47:33.365122 21676 modeling_utils.py:456] Weights from pretrained model not used in BertForMultipleChoice: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] > --------------------------------------------------------------------------- > RuntimeError Traceback (most recent call last) > <ipython-input-10-da6597bd484d> in <module>() > 19 output_attentions=True) > 20 input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")]) > ---> 21 all_hidden_states, all_attentions = model(input_ids)[-2:] > 22 > 23 # Models are compatible with Torchscript > > G:\Anaconda3\envs\pytorch1\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) > 539 result = self._slow_forward(*input, **kwargs) > 540 else: > --> 541 result = self.forward(*input, **kwargs) > 542 for hook in self._forward_hooks.values(): > 543 hook_result = hook(self, input, result) > > g:\deeplearning\huggingface\transformers\transformers\modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels) > 1096 pooled_output = self.dropout(pooled_output) > 1097 logits = self.classifier(pooled_output) > -> 1098 reshaped_logits = logits.view(-1, num_choices) > 1099 > 1100 outputs = (reshaped_logits,) + outputs[2:] # add hidden states and attention if they are here > > RuntimeError: shape '[-1, 16]' is invalid for input of size 1 > ``` > > ## Expected behavior > Should run without error! > > ## Environment > * OS: Windows 10 > * Python version: 3.6.6 > * PyTorch version: 1.3.0 > * PyTorch Transformers version (or branch): 2.1.1 > * Using GPU ? Yes > * Distributed of parallel setup ? No > * Any other relevant information: > > ## Additional context<|||||>Thanks! This works for me too. But since the Quick Tour is an example on the github repo I believe it would be a good idea to fix it there... Is there anything you can see wrong with the code?<|||||>Re-opened as I believe we should have the sample code fixed - *and* like [this issue](https://github.com/huggingface/transformers/issues/1787) it should work with Pytorch >= 1.0.0 or at least Pytorch >= 1.2.0 not just with >= 1.3.0<|||||>Indeed thanks, it's fixed now on master<|||||>Thanks - I note that it's been fixed by removing the use of BertForMultipleChoice in the evaluation section!
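For reference, the shape issue behind the error above: `BertForMultipleChoice` reads the second dimension of `input_ids` as the number of choices, so it expects a 3D tensor of shape `(batch_size, num_choices, seq_len)`. The quick-tour loop feeds it the same 2D `(1, seq_len)` tensor as the other classes, which is why `num_choices` ends up as 16 (the token count) and the single logit cannot be reshaped to `(-1, 16)`. A rough sketch of an input the model does accept, assuming the 2.1.x API (the choice strings are arbitrary examples):

```python
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMultipleChoice.from_pretrained('bert-base-uncased')

# Encode each candidate choice, pad them to the same length, and stack them
# into a (batch_size, num_choices, seq_len) tensor.
choices = ["Let's see all hidden-states", "and attentions on this text"]
encoded = [tokenizer.encode(c, add_special_tokens=True) for c in choices]
max_len = max(len(ids) for ids in encoded)
encoded = [ids + [0] * (max_len - len(ids)) for ids in encoded]  # 0 is BERT's [PAD] id
input_ids = torch.tensor(encoded).unsqueeze(0)                   # shape: (1, 2, seq_len)

logits = model(input_ids)[0]
print(logits.shape)  # (1, 2): one logit per choice
```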
transformers
1,788
closed
BertForNextSentencePrediction is giving a high score for dissimilar sentences.
## ❓ Questions & Help <!-- A clear and concise description of the question. --> import torch from transformers import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction tokenizer=BertTokenizer.from_pretrained('bert-base-uncased') BertNSP=BertForNextSentencePrediction.from_pretrained('bert-base-uncased') text1 = "How old are you?" text2 = "The Eiffel Tower is in Paris" text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"] text2_toks = tokenizer.tokenize(text2) + ["[SEP]"] text=text1_toks+text2_toks print(text) indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks) segments_ids = [0]*len(text1_toks) + [1]*len(text2_toks) tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) print(indexed_tokens) print(segments_ids) BertNSP.eval() prediction = BertNSP(tokens_tensor, segments_tensors) prediction=prediction[0] # tuple to tensor print(predictions) softmax = torch.nn.Softmax(dim=1) prediction_sm = softmax(prediction) print (prediction_sm) o/p of predictions tensor([[ 2.1772, -0.8097]], grad_fn=<AddmmBackward>) o/p of prediction_sm tensor([[0.9923, 0.0077]], grad_fn=<SoftmaxBackward>) why is the score still high 0.9923 even after apply softmax ?
11-11-2019 10:32:53
11-11-2019 10:32:53
As explained in #1790, you're passing the `token_type_ids` as the attention mask. Change the model forward pass as such: ```py prediction = BertNSP(tokens_tensor, token_type_ids=segments_tensors) ``` Your results will be more accurate: ```py tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>) tensor([[4.1673e-04, 9.9958e-01]], grad_fn=<SoftmaxBackward>) ```<|||||>@LysandreJik thanks for the information .Dose this apply for all the bert models or only for the next sentence prediction alone ?<|||||>It would be better to use named arguments in all the models. They are bound to change with breaking changes as new versions come up. I recommend specifying the arguments' names when possible.<|||||>> As explained in #1790, you're passing the `token_type_ids` as the attention mask. Change the model forward pass as such: > > ```python > prediction = BertNSP(tokens_tensor, token_type_ids=segments_tensors) > ``` > > Your results will be more accurate: > > ```python > tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>) > tensor([[4.1673e-04, 9.9958e-01]], grad_fn=<SoftmaxBackward>) > ``` Hi, @LysandreJik Can you explain me what are the scores for? What is the 4.1673e-04 used for and what is the 9.9958e-01 used for? Which one of them says that: X% sure that A sentence is followed by B sentence ? And what is the other for? Thanks!<|||||>I'm having the same problem, and I think I've followed all the directions mentioned above. def line_continues(model, tokenizer, line1, line2): line1 = [tokenizer.cls_token] + tokenizer.tokenize(line1) + [tokenizer.sep_token] line2 = tokenizer.tokenize(line2) + [tokenizer.sep_token] input_idx = tokenizer.convert_tokens_to_ids(line1 + line2) segment_idx = [0]*len(line1) + [1]*len(line2) tokens_tensor = torch.tensor([input_idx]) segment_tensor = torch.tensor([segment_idx]) predictions = model(tokens_tensor, token_type_ids=segment_tensor) probs = F.softmax(predictions[0], dim=1) return probs[0][0] model = BertForNextSentencePrediction.from_pretrained('bert-base-cased') model.eval() random sentences: line1='these articles tell us about where leadership communication is going and where it' line2='issues gave us the chance to engage with many well-established and emerging experts' prob = line_continues(model, tokenizer, line1, line2) 0.9993 contiguous sentences: line1='these articles tell us about where leadership communication is going and where it' line2='needs to go in addition to using the model.' prob = line_continues(model, tokenizer, line1, line2) 0.9991 Thanks!<|||||>I am also experiencing this kind of issue. After experimenting with some sequence pairs, I think the relation between two sequence should be zero (and also non-sensical). For example: ``` Sent1: Paris is the capital of France. Sent2: Cow is a domestic animal. ``` This level of stupid sequence. if there is any slightest connection on word level, it outputs that it is 90% sure that two sequence is coherent. In your example, may be `leadership` and `experts` are two near words in semantic space. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This is still a problem and should be re-opened. From the documentation: https://huggingface.co/transformers/model_doc/bert.html#bertfornextsentenceprediction ``` # documentation example - good In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. 
The sky is blue due to the shorter wavelength of blue light. logits[0, 0]: -3.072946548461914, logits[0, 1]: 5.905644416809082, is random: True # my own example - ??? I took my money to the bank on 23rd street My monkey was cake and cockroaches have radiation logits[0, 0]: 3.0128183364868164, logits[0, 1]: -1.984398365020752, is random: False ``` I'm not sure how to interpret this.<|||||>I think that it would also depend on train data on which the BERT model was trained on (from the paper " For the pre-training corpus we use the BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words)."[1]. [1] - https://arxiv.org/pdf/1810.04805.pdf
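Regarding the unanswered question about what the two scores mean: in the documentation example quoted above, the second logit is the large one exactly when `is random` is `True`, so index 0 corresponds to "sentence B follows sentence A" and index 1 to "sentence B is random". A small sketch reusing the logits posted earlier in this thread:

```python
import torch
import torch.nn.functional as F

# Logits from the comment above for "How old are you?" / "The Eiffel Tower is in Paris".
# Index 0: probability that B follows A (IsNext); index 1: probability that B is random.
logits = torch.tensor([[-2.3808, 5.4018]])
probs = F.softmax(logits, dim=1)
print("P(B follows A):", round(probs[0, 0].item(), 4))  # ~0.0004
print("P(B is random):", round(probs[0, 1].item(), 4))  # ~0.9996
```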
transformers
1,787
closed
Invalid argument with CTRLModel
## 🐛 Bug <!-- Important information --> Model I am using (CTRLModel): Language I am using the model on (English): The problem arise when using: * [X ] the official example scripts: (give details) The tasks I am working on is: * [X ] the Quick Tour for transformers ## To Reproduce Steps to reproduce the behavior: 1. run the quick tour as found here https://github.com/huggingface/transformers#quick-tour 2. in the loop `for model_class, tokenizer_class, pretrained_weights in MODELS` it will crash out when processing the CTRLModel ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-5-25349833af91> in <module> 5 6 tokenizer = tokenizer_class.from_pretrained(pretrained_weights) ----> 7 model = model_class.from_pretrained(pretrained_weights) 8 9 # Encode text g:\deeplearning\huggingface\transformers\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 389 390 if state_dict is None and not from_tf: --> 391 state_dict = torch.load(resolved_archive_file, map_location='cpu') 392 393 missing_keys = [] G:\Anaconda3\envs\fastai\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args) 384 f = f.open('rb') 385 try: --> 386 return _load(f, map_location, pickle_module, **pickle_load_args) 387 finally: 388 if new_fd: G:\Anaconda3\envs\fastai\lib\site-packages\torch\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args) 578 for key in deserialized_storage_keys: 579 assert key in deserialized_objects --> 580 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly) 581 if offset is not None: 582 offset = f.tell() OSError: [Errno 22] Invalid argument ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## What should have happened Should have run to completion... <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: * Python version: 3.3.9 [EDIT] - type on the Python, its 3.6.9. * PyTorch version: 1.2.0+cu92 * PyTorch Transformers version (or branch): 2.1.1 * Using GPU ? Yes * Distributed of parallel setup ? no * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
11-11-2019 08:21:15
11-11-2019 08:21:15
> ## Bug > Model I am using (CTRLModel): > > Language I am using the model on (English): > > The problem arise when using: > > * [X ] the official example scripts: (give details) > > The tasks I am working on is: > > * [X ] the Quick Tour for transformers > > ## To Reproduce > Steps to reproduce the behavior: > > 1. run the quick tour as found here https://github.com/huggingface/transformers#quick-tour > 2. in the loop `for model_class, tokenizer_class, pretrained_weights in MODELS` it will crash out when processing the CTRLModel > > ``` > --------------------------------------------------------------------------- > OSError Traceback (most recent call last) > <ipython-input-5-25349833af91> in <module> > 5 > 6 tokenizer = tokenizer_class.from_pretrained(pretrained_weights) > ----> 7 model = model_class.from_pretrained(pretrained_weights) > 8 > 9 # Encode text > > g:\deeplearning\huggingface\transformers\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) > 389 > 390 if state_dict is None and not from_tf: > --> 391 state_dict = torch.load(resolved_archive_file, map_location='cpu') > 392 > 393 missing_keys = [] > > G:\Anaconda3\envs\fastai\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args) > 384 f = f.open('rb') > 385 try: > --> 386 return _load(f, map_location, pickle_module, **pickle_load_args) > 387 finally: > 388 if new_fd: > > G:\Anaconda3\envs\fastai\lib\site-packages\torch\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args) > 578 for key in deserialized_storage_keys: > 579 assert key in deserialized_objects > --> 580 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly) > 581 if offset is not None: > 582 offset = f.tell() > > OSError: [Errno 22] Invalid argument > ``` > > ## What should have happened > Should have run to completion... > > ## Environment > * OS: > * Python version: 3.3.9 > * PyTorch version: 1.2.0+cu92 > * PyTorch Transformers version (or branch): 2.1.1 > * Using GPU ? Yes > * Distributed of parallel setup ? no > * Any other relevant information: > > ## Additional context Can you test the same code with Python version >= 3.5 and with PyTorch version 1.3.0?<|||||>Sure, l'll get back to you... but I realize that I had a typo with my Python, its 3.6.9, which I've corrected. 
Additionally, I had compiled this with `pip install -e .` from a fresh clone of the repo yesterday<|||||>OK, under Pytorch 1.3.0 it seems good - I got the following output (which I wasn't getting from 1.2.0): ``` I1111 20:26:06.461704 21676 tokenization_utils.py:374] loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-vocab.json from cache at C:\Users\User\.cache\torch\transformers\a858ad854d3847b02da3aac63555142de6a05f2a26d928bb49e881970514e186.285c96a541cf6719677cfb634929022b56b76a0c9a540186ba3d8bbdf02bca42 I1111 20:26:06.464703 21676 tokenization_utils.py:374] loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-merges.txt from cache at C:\Users\User\.cache\torch\transformers\aa2c569e6648690484ade28535a8157aa415f15202e84a62e82cc36ea0c20fa9.26153bf569b71aaf15ae54be4c1b9254dbeff58ca6fc3e29468c4eed078ac142 I1111 20:26:07.624670 21676 configuration_utils.py:152] loading configuration file https://storage.googleapis.com/sf-ctrl/pytorch/ctrl-config.json from cache at C:\Users\User\.cache\torch\transformers\d6492ca334c2a4e079f43df30956acf935134081b2b3844dc97457be69b623d0.1ebc47eb44e70492e0c20494a084f108332d20fea7fe5ad408ef5e7a8f2baef4 I1111 20:26:07.628657 21676 configuration_utils.py:169] Model config { "attn_pdrop": 0.1, "dff": 8192, "embd_pdrop": 0.1, "finetuning_task": null, "from_tf": false, "initializer_range": 0.02, "is_decoder": false, "layer_norm_epsilon": 1e-06, "n_ctx": 512, "n_embd": 1280, "n_head": 16, "n_layer": 48, "n_positions": 50000, "num_labels": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pruned_heads": {}, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "torchscript": false, "use_bfloat16": false, "vocab_size": 246534 } I1111 20:26:08.047282 21676 modeling_utils.py:383] loading weights file https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin from cache at C:\Users\User\.cache\torch\transformers\c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0 W1111 20:31:14.310047 21676 tokenization_utils.py:936] This tokenizer does not make use of special tokens. Input is returned with no modification. W1111 20:31:14.650039 21676 tokenization_utils.py:936] This tokenizer does not make use of special tokens. Input is returned with no modification. W1111 20:31:14.653039 21676 tokenization_utils.py:923] This tokenizer does not make use of special tokens. ```<|||||>[Here](https://colab.research.google.com/drive/1d23JwkEkS_s6XUY2_Me_MHnOrTHmNrqR#scrollTo=K4-yYInLt6YP) you can view a Google Colab written by me and **it works as expected**. I think you can close this issue. 
> OK, under Pytorch 1.3.0 it seems good - I got the following output (which I wasn't getting from 1.2.0): > > ``` > I1111 20:26:06.461704 21676 tokenization_utils.py:374] loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-vocab.json from cache at C:\Users\User\.cache\torch\transformers\a858ad854d3847b02da3aac63555142de6a05f2a26d928bb49e881970514e186.285c96a541cf6719677cfb634929022b56b76a0c9a540186ba3d8bbdf02bca42 > I1111 20:26:06.464703 21676 tokenization_utils.py:374] loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-merges.txt from cache at C:\Users\User\.cache\torch\transformers\aa2c569e6648690484ade28535a8157aa415f15202e84a62e82cc36ea0c20fa9.26153bf569b71aaf15ae54be4c1b9254dbeff58ca6fc3e29468c4eed078ac142 > I1111 20:26:07.624670 21676 configuration_utils.py:152] loading configuration file https://storage.googleapis.com/sf-ctrl/pytorch/ctrl-config.json from cache at C:\Users\User\.cache\torch\transformers\d6492ca334c2a4e079f43df30956acf935134081b2b3844dc97457be69b623d0.1ebc47eb44e70492e0c20494a084f108332d20fea7fe5ad408ef5e7a8f2baef4 > I1111 20:26:07.628657 21676 configuration_utils.py:169] Model config { > "attn_pdrop": 0.1, > "dff": 8192, > "embd_pdrop": 0.1, > "finetuning_task": null, > "from_tf": false, > "initializer_range": 0.02, > "is_decoder": false, > "layer_norm_epsilon": 1e-06, > "n_ctx": 512, > "n_embd": 1280, > "n_head": 16, > "n_layer": 48, > "n_positions": 50000, > "num_labels": 1, > "output_attentions": false, > "output_hidden_states": false, > "output_past": true, > "pruned_heads": {}, > "resid_pdrop": 0.1, > "summary_activation": null, > "summary_first_dropout": 0.1, > "summary_proj_to_labels": true, > "summary_type": "cls_index", > "summary_use_proj": true, > "torchscript": false, > "use_bfloat16": false, > "vocab_size": 246534 > } > > I1111 20:26:08.047282 21676 modeling_utils.py:383] loading weights file https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin from cache at C:\Users\User\.cache\torch\transformers\c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0 > W1111 20:31:14.310047 21676 tokenization_utils.py:936] This tokenizer does not make use of special tokens. Input is returned with no modification. > W1111 20:31:14.650039 21676 tokenization_utils.py:936] This tokenizer does not make use of special tokens. Input is returned with no modification. > W1111 20:31:14.653039 21676 tokenization_utils.py:923] This tokenizer does not make use of special tokens. > ```<|||||>Is there any way to get this working correctly under Pytorch 1.2? I have some work I need to continue with in that version. It doesn't matter too much as I wasn't intending to use that model, but I guess it might matter to someone else...<|||||>Thanks very much for the Colab notebook - yes it works as expected there. I am now having an issue with running the last part of the Quick Tour, but I'll post that separately, and close this one!<|||||>I'm very honest with you: I don't have the exact answer to your notable question. I'm aware of the fact that PyTorch v1.3.0 has got many changes and improvements from PyTorch v1.2, as stated [here](url). Some changes have been made on the `torch.load()` method too. > Is there any way to get this working correctly under Pytorch 1.2? I have some work I need to continue with in that version. 
It doesn't matter too much as I wasn't intending to use that model, but I guess it might matter to someone else...<|||||>@ChrisPalmerNZ Would you care to re-open this issue? There really should not be a bug with the intro sample code.<|||||>OK, re-opened - interesting to see what fixes this for Pytorch 1.2 - but I felt that @TheEdoardo93 was indicating that this will only work under Pytorch >= 1.3.0 <|||||>Hi, you seem to be having problems when loading the CTRLModel in memory. Usually problems like this can occur if behind a firewall or if the file was corrupted. Could you try to load the model outside of the script, for example in a python console? The following lines should work: ```py from transformers import CTRLModel model = CTRLModel.from_pretrained("ctrl", force_download=True) ``` The `force_download` option will re-download the file, this will confirm that the file being corrupted is not an issue. <|||||>Here the results: ``` Python 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import platform >>> platform.platform() 'Linux-4.15.0-66-generic-x86_64-with-debian-buster-sid' >>> platform.python_version() '3.6.9' >>> import transformers >>> transformers.__version__ '2.1.1' >>> import torch >>> torch.__version__ '1.2.0' >>> from transformers import CTRLModel >>> model = CTRLModel.from_pretrained('ctrl', force_download=True) 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 611/611 [00:00<00:00, 259358.34B/s] 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6552025106/6552025106 [09:45<00:00, 11197418.01B/s] >>> CTRLModel <class 'transformers.modeling_ctrl.CTRLModel'> >>> ``` > Hi, you seem to be having problems when loading the CTRLModel in memory. Usually problems like this can occur if behind a firewall or if the file was corrupted. Could you try to load the model outside of the script, for example in a python console? The following lines should work: > > ```python > from transformers import CTRLModel > > model = CTRLModel.from_pretrained("ctrl", force_download=True) > ``` > > The `force_download` option will re-download the file, this will confirm that the file being corrupted is not an issue.
transformers
1,786
closed
a BertForMaskedLM.from_pretrained error
## 🐛 Bug <!-- Important information --> Model I am using (BertForMaskedLM): Language I am using the model on (Chinese): When I want to do a limited-vocabulary fine-tune of BertForMaskedLM, the new config, vocab, and the word-embedding weights in model.bin all have to be changed. For example: `config = BertConfig.from_pretrained('bert-base-chinese')` `config.vocab_size = 2` `model_dict = torch.load(old_model_path)` `model_dict['bert.embeddings.word_embeddings.weight'] = model_dict['bert.embeddings.word_embeddings.weight'][:2]` `torch.save(model_dict, new_model_path)` `model = BertForMaskedLM.from_pretrained(new_model_path, config=config)` But when I use from_pretrained to load the model, an error appears: ` Error(s) in loading state_dict for BertForMaskedLM: size mismatch for cls.predictions.bias: copying a param with shape torch.Size([21128]) from checkpoint, the shape in current model is torch.Size([2]). size mismatch for cls.predictions.decoder.weight: copying a param with shape torch.Size([21128, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).` I have also tried BertModel.from_pretrained; there the error does not appear, and I used this code to check the data shape: `torch.load(new_model_path)['bert.embeddings.word_embeddings.weight'].shape ` which shows torch.Size([2, 768])
11-11-2019 02:44:37
11-11-2019 02:44:37
Masked language modeling is an example of autoencoding language modeling: you typically mask one or more words in a sentence and have the model predict those masked words given the other words in the sentence. When you changed the vocab size, the number of words the model has to predict also changed. This unofficial method left the output embedding and prediction bias unchanged while changing the vocab size, which caused these errors. I think the output embeddings should be copied from the input embeddings, but the bias needs to be adjusted manually. ------ For now, these two lines may help you avoid the error ``` model_dict['cls.predictions.bias'] = model_dict['cls.predictions.bias'][:2] model_dict['cls.predictions.decoder.weight'] = model_dict['cls.predictions.decoder.weight'][:2] ```<|||||>Thanks!
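Putting the original snippet and the two extra lines above together, a minimal sketch of the whole vocabulary-trimming step (the paths are placeholders and the vocab size of 2 is just the value from this issue):

```python
import torch
from transformers import BertConfig, BertForMaskedLM

old_model_path = "bert_chinese_full.bin"      # placeholder path to the original checkpoint
new_model_path = "bert_chinese_trimmed.bin"   # placeholder path for the trimmed checkpoint
new_vocab_size = 2

config = BertConfig.from_pretrained('bert-base-chinese')
config.vocab_size = new_vocab_size

state_dict = torch.load(old_model_path, map_location='cpu')
# Every tensor whose first dimension is the vocabulary must be sliced,
# not only the input word embeddings.
for key in ('bert.embeddings.word_embeddings.weight',
            'cls.predictions.decoder.weight',
            'cls.predictions.bias'):
    state_dict[key] = state_dict[key][:new_vocab_size]
torch.save(state_dict, new_model_path)

model = BertForMaskedLM.from_pretrained(new_model_path, config=config)
```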
transformers
1,785
closed
"Write with Transformer" source code?
## ❓ Questions & Help Hello, I can't find its source code. Could you help, please? <!-- A clear and concise description of the question. -->
11-10-2019 23:55:08
11-10-2019 23:55:08
Hello, Sorry, the web app is not open source right now.<|||||>Thanks @julien-c . Can you comment on how elastic inference with pytorch models is done?<|||||>Is there any chance this could be reconsidered? I'd love to use it for experimenting with custom fine-tuned models.<|||||>Would also love for this for experimenting with custom models. <|||||>It'd be very interesting to open source this. Is there any intention of doing so in the future?<|||||>Is there any possibility of this becoming reconsidered? I've been trying to look for this as well... :/<|||||>Any chance of this being public? @thomwolf <|||||>> Would also love for this for experimenting with custom models. Same!<|||||>the microsoft/fastformers repo says "Write With Transformer, built by the Hugging Face team at transformer.huggingface.co, is the official demo of this repo’s text generation capabilities" according to this [readme](https://github.com/microsoft/fastformers/blob/main/README_transformers.md#online-demo). Last commit October 2020
transformers
1,784
closed
Unclear documentation for special_tokens_mask
According to [the docs](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.get_special_tokens_mask), the `special_tokens_mask` returned by e.g. `encode_plus` should have > 0 for a special token, 1 for a sequence token Yet when I try ```python tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased') tokenizer.encode_plus("This is a test sentence", add_special_tokens=True) ``` I get ```python {'special_tokens_mask': [1, 0, 0, 0, 0, 0, 1], 'input_ids': [101, 2023, 2003, 1037, 3231, 6251, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0]} ``` Which — to me — would suggest that it is in fact 1 for special tokens and 0 for sequence tokens. The implementations for [BERT](https://github.com/huggingface/transformers/blob/700331b5ece63381ad1b775fc8661cf3ae4493fd/transformers/tokenization_bert.py#L210) and [RoBERTa](https://github.com/huggingface/transformers/blob/1c542df7e554a2014051dd09becf60f157fed524/transformers/tokenization_roberta.py#L110) support this, so should the documentation be made clearer or is it just me who understood it backward?
11-10-2019 22:26:55
11-10-2019 22:26:55
Indeed, this is a documentation error! Thank you for letting us know!<|||||>@LysandreJik This was reintroduced in https://github.com/huggingface/transformers/pull/2989 apparently<|||||>Thanks @Evpok for letting me know, this flew under my radar.<|||||>My pleasure 😄
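With the corrected convention (1 for special tokens, 0 for sequence tokens), a small usage sketch based on the `encode_plus` output shown above:

```python
import transformers

tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')
enc = tokenizer.encode_plus("This is a test sentence", add_special_tokens=True)

# 1 marks special tokens ([CLS], [SEP]); 0 marks ordinary sequence tokens.
sequence_ids = [tok_id for tok_id, is_special
                in zip(enc['input_ids'], enc['special_tokens_mask'])
                if is_special == 0]
print(tokenizer.convert_ids_to_tokens(sequence_ids))  # ['this', 'is', 'a', 'test', 'sentence']
```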
transformers
1,783
closed
How to measure similarity of words?
I want to know if BERT-based contextual embeddings can be used to measure the similarity of identical words in different contexts. And can I estimate a threshold for that?
11-10-2019 10:28:08
11-10-2019 10:28:08
You should check out BLEU and ROUGE scores. These are often referred to as the precision and recall of NLP. You may be able to craft a pseudo-f1 score out of these as well `f1 = 2*(bleu*rouge)/(bleu + rouge)` I haven't tried this myself yet, but hopefully this helps! [https://en.wikipedia.org/wiki/BLEU](https://en.wikipedia.org/wiki/BLEU) [https://en.wikipedia.org/wiki/ROUGE_(metric)](https://en.wikipedia.org/wiki/ROUGE_(metric))<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
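A different angle on the original question, not covered by the BLEU/ROUGE suggestion: the contextual vectors returned by `BertModel` can be compared directly, for example with cosine similarity. A rough sketch assuming the 2.x PyTorch API; any threshold would still have to be calibrated on your own data:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

def word_vector(sentence, word):
    # Contextual vector of the first occurrence of `word` in `sentence`.
    tokens = tokenizer.tokenize(sentence)
    ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
    with torch.no_grad():
        hidden_states = model(ids)[0]                 # (1, seq_len, 768)
    return hidden_states[0, tokens.index(word) + 1]   # +1 for the leading [CLS]

v1 = word_vector("He sat by the river bank.", "bank")
v2 = word_vector("She deposited cash at the bank.", "bank")
print(torch.nn.functional.cosine_similarity(v1, v2, dim=0).item())
```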
transformers
1,782
closed
model = GPT2LMHeadModel.from_pretrained(args.model_path) tries to load in JSON format
## 🐛 Bug Initial a model in this code: ``` logging.info('loading model from: {}'.format(args.model_path)) model = GPT2LMHeadModel.from_pretrained(args.model_path) ``` It seems inside transforms reading our model in json format which cause error: ``` INFO 11-10 16:18:49 generate.py:167 - loading model from: ./weights/ProseInChinese/pytorch_model.bin I1110 16:18:49.226168 139982371047232 configuration_utils.py:148] loading configuration file ./weights/ProseInChinese/pytorch_model.bin Traceback (most recent call last): File "generate.py", line 225, in <module> main() File "generate.py", line 168, in main model = GPT2LMHeadModel.from_pretrained(args.model_path) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 287, in from_pretrained **kwargs File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 154, in from_pretrained config = cls.from_json_file(resolved_config_file) File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 186, in from_json_file text = reader.read() File "/usr/lib/python3.6/codecs.py", line 321, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte ```
11-10-2019 08:19:58
11-10-2019 08:19:58
What string was passed under `args.model_path`? Is it a directory or a file name? (It should be a directory)<|||||>model_path is a model trained and saved in xxx.pth format. Is that a directory? What should it contain?<|||||>The `model_path` should link to a directory holding the pytorch model and the configuration file associated with it, as if it was saved with `model.save_pretrained(directory)`. What is your `xxx.pth` format, how did you obtain such a file?<|||||>Oh, no, I just got someone else's trained xx.pth model, with no config file. Why does a PyTorch model need an additional config.json? People really don't know what format is inside that file at all... not convenient<|||||>The Pytorch models require configuration files because the architectures in the library can have very different sizes. For example GPT-2 small, medium, large and XL all share the same model but have different sizes. Configuration files are essential for this purpose. The format is defined by the `save_pretrained` method on the models. Models acquired from other sources than our library may not be loadable, or may need to be converted using one of our conversion scripts. Please read the [quickstart portion of the documentation](https://huggingface.co/transformers/quickstart.html) to get a deeper understanding of the way the library works.
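For reference, a small sketch of the round trip described above: `save_pretrained` writes both `pytorch_model.bin` and `config.json` into a directory, and `from_pretrained` expects that directory rather than a bare weights file (the directory name is just an example):

```python
import os
from transformers import GPT2LMHeadModel, GPT2Tokenizer

save_dir = './weights/my_gpt2'   # example directory
os.makedirs(save_dir, exist_ok=True)

model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# Writes pytorch_model.bin + config.json (and the tokenizer's vocab/merges files)
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

# Reload from the directory, not from the .bin file itself
model = GPT2LMHeadModel.from_pretrained(save_dir)
tokenizer = GPT2Tokenizer.from_pretrained(save_dir)
```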
transformers
1,781
closed
Does the file /examples/run_lm_finetuning.py provide a demo to pre-train a BERT?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi, I'm curious whether the file run_lm_finetuning.py can pre-train a BERT. If so, why is the file named finetuning instead of pre-training? Thanks!
11-10-2019 07:34:47
11-10-2019 07:34:47
Hi, BERT uses the NSP task as well as the MLM task during its pre-training phase. The `run_lm_finetuning.py` script only does MLM so you would need to modify it to pre-train BERT the same way it was done in the paper.
transformers
1,780
closed
Problems when restoring the pretrained weights for TFBert
## 🐛 Bug <!-- Important information --> Model I am using Bert: Language I am using the model on (English, Chinese....): The problem arise when using: * [ ] the official example scripts: (give details) * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1.when I try to reload the weights for TFbert by using: model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased') 2. I got the error ValueError: Expected floating point type, got <dtype: 'int32'>. 3. Does anybody know how to fix this? ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: * Python version: * PyTorch version: * PyTorch Transformers version (or branch): * Using GPU ? * Distributed of parallel setup ? * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
11-10-2019 05:23:18
11-10-2019 05:23:18
Hi, could you provide information about your setup? Which TensorFlow version are you using, which python version, which Transformers version? Thank you.<|||||>> Hi, could you provide information about your setup? Which TensorFlow version are you using, which python version, which Transformers version? Thank you. Hi, the information about my setup: Python version: 3.6.9 Tensorflow Version: 2.0.0b1 PyTorch Transformers version (or branch):2.1.1 Thank you<|||||>I can't reproduce this on my side while on the same versions. Would you happen to have a folder called "bert-base-uncased" in the same directory from which you're calling the python script?<|||||>Hi, I have exactly the same error while doing the same task... Python version: Python 3.5.3 Tensorflow Version: 2.0.0-beta1 PyTorch Transformers version : transformers (2.1.1) <|||||>In my environment, **it works as expected**! @YumingLiu1996 @LeonardoGracioS N.B: It works without `force_download=True` parameter too. ``` >>> from transformers import TFBertForSequenceClassification 2019-11-12 14:38:24.462663: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-11-12 14:38:24.466451: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz 2019-11-12 14:38:24.466942: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56248b3c66f0 executing computations on platform Host. Devices: 2019-11-12 14:38:24.466956: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version >>> model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', force_download=True) 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 313/313 [00:00<00:00, 154303.85B/s] 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 536063208/536063208 [00:48<00:00, 11140340.62B/s] >>> import platform >>> platform.platform() 'Linux-4.15.0-66-generic-x86_64-with-debian-buster-sid' >>> platform.python_version() '3.6.9' >>> import torch >>> torch.__version__ '1.0.1' >>> import tensorflow as tf >>> tf.__version__ '2.0.0' >>> import transformers >>> transformers.__version__ '2.1.1' ```<|||||>> I can't reproduce this on my side while on the same versions. Would you happen to have a folder called "bert-base-uncased" in the same directory from which you're calling the python script? > I can't reproduce this on my side while on the same versions. Would you happen to have a folder called "bert-base-uncased" in the same directory from which you're calling the python script? No, I don't have such as folder.. <|||||>> I can't reproduce this on my side while on the same versions. Would you happen to have a folder called "bert-base-uncased" in the same directory from which you're calling the python script? I redownload tensorflow 2.0.0 and transformers. 
And now I encounter another problem: ``` h5py/_objects.pyx in h5py._objects.with_phil.wrapper() h5py/_objects.pyx in h5py._objects.with_phil.wrapper() h5py/h5f.pyx in h5py.h5f.open() OSError: Unable to open file (truncated file: eof = 485070229, sblock->base_addr = 0, stored_eof = 497933648) ``` Do you know the solution for this problem? <|||||>@YumingLiu1996 the error `OSError: Unable to open file` tells you that the file was not downloaded in its entirety. Please add the `force_download=True` parameter when downloading the pre-trained model: ```py model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', force_download=True) ```<|||||>I solved the issue by using a Google Cloud Platform virtual machine with a TF 2.0 environment.<|||||>> I solved the issue by using a Google Cloud Platform virtual machine with a TF 2.0 environment. I am using a Google Cloud Platform with an K80 and I have the TF2 installed but it doesn't works... the same problem here guys. I don't know why, but it only works on Google Colab until this comment. <|||||>> > I solved the issue by using a Google Cloud Platform virtual machine with a TF 2.0 environment. > > I am using a Google Cloud Platform with an K80 and I have the TF2 installed but it doesn't works... > the same problem here guys. > > I don't know why, but it only works on Google Colab until this comment. Do you have exactly the same error ? Did you start your virtual machine with a TF 2.0 environment and not just a "pip install tensorflow=2.0.0" ?<|||||>I did through the pip install, because I have this machine running some other things. So, the problem is the same but I think could be the Tensorflow version. I will try to build TF 2 from source. Em sex, 15 de nov de 2019 06:07, Samuel Leonardo Gracio < [email protected]> escreveu: > I solved the issue by using a Google Cloud Platform virtual machine with a > TF 2.0 environment. > > I am using a Google Cloud Platform with an K80 and I have the TF2 > installed but it doesn't works... > the same problem here guys. > > I don't know why, but it only works on Google Colab until this comment. > > Do you have exactly the same error ? Did you start your virtual machine > with a TF 2.0 environment and not just a "pip install tensorflow=2.0.0" ? > > — > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/1780?email_source=notifications&email_token=AB4RK3ZWJWXUQDPECFSVTPLQTZRGXA5CNFSM4JLK64N2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEEEZC4Q#issuecomment-554275186>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AB4RK32QNPOWOL3OVKAP2FTQTZRGXANCNFSM4JLK64NQ> > . > <|||||>same problem here: OS: Python version: 3.6.8 PyTorch version: 1.2.0 TF version: 2.0.0b1 PyTorch Transformers version (or branch): master (Nov 25, 2019) model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') ValueError: Expected floating point type, got <dtype: 'int32'>. looks like a tensorflow problem <|||||>I've found your problem! It's a TensorFlow bug in version 2.0.0beta1. If you uninstall this version with `pip uninstall tensorflow` and after that you install the TensorFlow with version 2.0.0a0 with `pip install tensorflow==2.0.0,` **the code works as expected**! I've tried different settings and it works as expected! 
- PyTorch 1.2.0, TensorFlow(CPU-version) 2.0.0 - PyTorch 1.2.0, TensorFlow(GPU-version) 2.0.0 - PyTorch 1.3.1, TensorFlow(CPU-version) 2.0.0 - PyTorch 1.3.1, TensorFlow(GPU-version) 2.0.0 > same problem here: > > ``` > OS: > Python version: 3.6.8 > PyTorch version: 1.2.0 > TF version: 2.0.0b1 > PyTorch Transformers version (or branch): master (Nov 25, 2019) > ``` > > model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') > > ValueError: Expected floating point type, got <dtype: 'int32'>. > > looks like a tensorflow problem<|||||>thanks. I tryied with a0 but it has even more problem, like 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'LayerNormalization' which are said to be bugs in a0 and patched in b1. However b1 does not work because ValueError: Expected floating point type, got <dtype: 'int32'>. Looks like this is a deadlock. > I've found your problem! It's a TensorFlow bug in version 2.0.0beta1. If you uninstall this version with `pip uninstall tensorflow` and after that you install the TensorFlow with version 2.0.0a0 with `pip install tensorflow==2.0.0,` **the code works as expected**! > > I've tried different settings and it works as expected! > > * PyTorch 1.2.0, TensorFlow(CPU-version) 2.0.0 > > * PyTorch 1.2.0, TensorFlow(GPU-version) 2.0.0 > > * PyTorch 1.3.1, TensorFlow(CPU-version) 2.0.0 > > * PyTorch 1.3.1, TensorFlow(GPU-version) 2.0.0 > > > > same problem here: > > ``` > > OS: > > Python version: 3.6.8 > > PyTorch version: 1.2.0 > > TF version: 2.0.0b1 > > PyTorch Transformers version (or branch): master (Nov 25, 2019) > > ``` > > > > > > model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') > > ValueError: Expected floating point type, got <dtype: 'int32'>. > > looks like a tensorflow problem <|||||>I don't know what to say. I've just tested with Transformers v-2.2.0 and it works as expected. Sorry, but in my environment it works as I've said some days ago. How can i help you? How can i replicate your problem? Which OS are you using? > thanks. I tryied with a0 but it has even more problem, like > > 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'LayerNormalization' > > which are said to be bugs in a0 and patched in b1. > > However b1 does not work because > ValueError: Expected floating point type, got <dtype: 'int32'>. > > Looks like this is a deadlock. > > > I've found your problem! It's a TensorFlow bug in version 2.0.0beta1. If you uninstall this version with `pip uninstall tensorflow` and after that you install the TensorFlow with version 2.0.0a0 with `pip install tensorflow==2.0.0,` **the code works as expected**! > > I've tried different settings and it works as expected! > > ``` > > * PyTorch 1.2.0, TensorFlow(CPU-version) 2.0.0 > > > > * PyTorch 1.2.0, TensorFlow(GPU-version) 2.0.0 > > > > * PyTorch 1.3.1, TensorFlow(CPU-version) 2.0.0 > > > > * PyTorch 1.3.1, TensorFlow(GPU-version) 2.0.0 > > ``` > > > > > > > same problem here: > > > ``` > > > OS: > > > Python version: 3.6.8 > > > PyTorch version: 1.2.0 > > > TF version: 2.0.0b1 > > > PyTorch Transformers version (or branch): master (Nov 25, 2019) > > > ``` > > > > > > > > > model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') > > > ValueError: Expected floating point type, got <dtype: 'int32'>. > > > looks like a tensorflow problem<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,779
closed
Multi GPU dataparallel crash
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): GPT2 Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: run_lm_finetuning * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: I am finetuning the GPT2 model with a dataset that I have used in the past to fine-tune the BERT model among others ## To Reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> raceback (most recent call last):████████████████████████████████████████████████████████████████████████████████████████████████████████████▉| 7587/7588 [1:42:01<00:00, 1.26it/s] File "run_lm_finetuning.py", line 551, in <module> main() File "run_lm_finetuning.py", line 503, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_finetuning.py", line 228, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward return self.gather(outputs, self.output_device) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in gather return gather(outputs, output_device, dim=self.dim) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map return Gather.apply(target_device, dim, *outputs) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 68, in forward return comm.gather(inputs, ctx.dim, ctx.target_device) File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/cuda/comm.py", line 165, in gather return torch._C._gather(tensors, dim, destination) **RuntimeError: Gather got an input of invalid size: got [2, 2, 12, 1024, 64], but expected [2, 3, 12, 1024, 64] (gather at /opt/conda/conda-**bld/pytorch_1565272279342/work/torch/csrc/cuda/comm.cpp:226) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7f47a13d5e37 in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: torch::cuda::gather(c10::ArrayRef<at::Tensor>, long, c10::optional<int>) + 0x3c7 (0x7f4720c61327 in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch.so) frame #2: <unknown function> + 0x5fa742 (0x7f47a420f742 in 
/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #3: <unknown function> + 0x1c8316 (0x7f47a3ddd316 in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so) <omitting python frames> frame #14: THPFunction_apply(_object*, _object*) + 0x98f (0x7f47a40024bf in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so) ## Expected behavior it crashes at the last batch all the time. I expect it to move to the next epoch ## Environment * OS: * Python version: 3.6 * PyTorch version: 1.2 * PyTorch Transformers version (or branch): * Using GPU ? 4 Quadro 8000 * Distributed of parallel setup ? parallel * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
11-10-2019 05:10:57
11-10-2019 05:10:57
I'm facing this issue too. @LysandreJik @thomwolf, can you give some input on this? All the inputs are of the same length, yet this issue still occurs. @devroy73, meanwhile, in the data loader you can set drop_last=True<|||||>Hey @anandhperumal, thanks for that, it solved my crashing issue.<|||||>I tried setting drop_last=True, but it did not fix the issue for me.<|||||>Maybe we should add a parameter to Trainer, or it should be added to the docs for others.
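A sketch of the `drop_last` workaround mentioned above, with a dummy dataset standing in for the one built by `run_lm_finetuning.py`: dropping the final, smaller batch keeps every batch evenly splittable across GPUs, so DataParallel's gather no longer sees mismatched tensor sizes.

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

# Dummy stand-in for the LM dataset (1001 examples of 128 token ids each).
train_dataset = TensorDataset(torch.randint(0, 30000, (1001, 128)))

train_sampler = RandomSampler(train_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler,
                              batch_size=4, drop_last=True)  # drop the leftover batch

print(len(train_dataloader))  # 250 full batches; the single leftover example is dropped
```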
transformers
1,778
closed
from_pretrained: convert DialoGPT format
DialoGPT checkpoints have "lm_head.decoder.weight" instead of "lm_head.weight". (see: https://www.reddit.com/r/MachineLearning/comments/dt5woy/p_dialogpt_state_of_the_art_conversational_model/f6vmwuy?utm_source=share&utm_medium=web2x)
11-09-2019 16:32:54
11-09-2019 16:32:54
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=h1) Report > Merging [#1778](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7a9aae1044aa4699310a8004f631fc0a4bdf1b65?src=pr&el=desc) will **increase** coverage by `0.06%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1778/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1778 +/- ## ========================================== + Coverage 84.03% 84.09% +0.06% ========================================== Files 94 94 Lines 14032 14036 +4 ========================================== + Hits 11792 11804 +12 + Misses 2240 2232 -8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1778/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `95.13% <100%> (+2.18%)` | :arrow_up: | | [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1778/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `91.05% <100%> (+1.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=footer). Last update [7a9aae1...90f6e73](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Looks good to me, thanks @eukaryote31 !<|||||>also cc @sshleifer <|||||>LGTM!<|||||>Ok great, merging!<|||||>This was incorrect: we have multiple models that contain a valid `lm_head.decoder` layer (RoBERTa and its variants, CamemBERT, XLM-R...) and this PR broke `from_pretrained` for those models in the `ForMaskedLM` case. The way to go in the particular `DialoGPT` case would be to either: - create a conversion script and upload the GPT2 compatible weights somewhere. - or, add a special test in from_pretrained that the current instance is an instance of `GPT2...Head` (this is not pretty in terms of OOP encapsulation 💩) - or, expose some kind of overrideable list of key mapping (that can be implemented by the GPT2 subclass) (1. seems like the easier solution) cc @LysandreJik @sshleifer @thomwolf <|||||>Also cc @patrickvonplaten 🤯
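For reference, a minimal conversion sketch along the lines of option 1 above (a standalone conversion script rather than a change to `from_pretrained`). The file names are placeholders and this is not the library's own converter: it simply renames the DialoGPT-specific key so a standard GPT-2 checkpoint loader can pick it up.

```python
import torch

# Load the DialoGPT checkpoint (placeholder file name) and rename the key.
state_dict = torch.load("dialogpt_medium_ft.pkl", map_location="cpu")
if "lm_head.decoder.weight" in state_dict:
    state_dict["lm_head.weight"] = state_dict.pop("lm_head.decoder.weight")

# Save it under the name transformers expects, next to a GPT-2 config.json.
torch.save(state_dict, "pytorch_model.bin")
```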
transformers
1,777
closed
Could you support albert?
## ❓ Questions & Help Many students don't have a large GPU, so they will use a small model (e.g. ALBERT); hence I hope you will support loading ALBERT. Thanks so much!
11-09-2019 14:38:21
11-09-2019 14:38:21
Duplicate of #1649 #1420 #1522 #1564 🤣🤣
transformers
1,776
closed
Extracting the output layer of HuggingFace GPT2DoubleHeadsModel
Hello, Suppose that I have two GPT2DoubleHeadsModels (let’s call them models A and B). Is there any way that I can: 1. take the hidden state of a given input at the n-th layer of model A and feed it directly into the output layer of model B to compute an output, AND 2. take the output obtained in 1. and calculate the loss based on an appropriate label? A coding example would be a great help. Thank you,
11-09-2019 10:13:44
11-09-2019 10:13:44
Please do not open duplicate issues (#1774)
transformers
1,775
closed
pip install transformers not downloading gpt2-xl
Following the release of the 1.5B parameter model, I attempted to upgrade my version using the following command: pip install transformers --upgrade The installed library does not have gpt2-xl, and throws this error when I try calling it: OSError: Model name 'gpt2-xl' was not found in model name list (gpt2, gpt2-medium, gpt2-large, distilgpt2). We assumed 'gpt2-xl' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url. Please update the package on PyPI to allow easy installation of the latest model.
11-08-2019 18:33:42
11-08-2019 18:33:42
We haven't updated the pip version yet, we'll do so in the following weeks. Please install it from source in the meantime: ``` pip install git+https://github.com/huggingface/transformers ```<|||||>Traceback (most recent call last): File "run_pplm.py", line 936, in <module> run_pplm_example(**vars(args)) File "run_pplm.py", line 738, in run_pplm_example tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model) File "/home/xps/anaconda3/envs/nlg_entity/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 911, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/home/xps/anaconda3/envs/nlg_entity/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1014, in _from_pretrained list(cls.vocab_files_names.values()), OSError: Model name 'gpt2-medium' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'gpt2-medium' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. I also reported the same error....@LysandreJik <|||||>> Traceback (most recent call last): > File "run_pplm.py", line 936, in <module> > run_pplm_example(**vars(args)) > File "run_pplm.py", line 738, in run_pplm_example > tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model) > File "/home/xps/anaconda3/envs/nlg_entity/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 911, in from_pretrained > return cls._from_pretrained(*inputs, **kwargs) > File "/home/xps/anaconda3/envs/nlg_entity/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1014, in _from_pretrained > list(cls.vocab_files_names.values()), > OSError: Model name 'gpt2-medium' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'gpt2-medium' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. > > I also reported the same error.... @LysandreJik hi, were you able to run this model?
transformers
1,774
closed
For HuggingFace GPT2DoubleHeadsModel, is there a way to directly provide a hidden state for an input?
Hello, Say I have two custom-trained HuggingFace GPT2DoubleHeadsModels (Model 1 and 2). I want to take a hidden state from the m-th layer of model 1, and use that hidden state as my input for model 2. Is this possible with HuggingFace GPT2DoubleHeadsModels? A coding example would be a great help! Thank you,
11-08-2019 16:22:57
11-08-2019 16:22:57
Hi! You have access to the models' internals so you could definitely rewrite the main loop function to handle such a use-case! You can access the hidden layers as follows: ```py block = model.transformer.h # this is a list of "block"s block[0] # contains the MLP/LayerNorm/attention layers ``` I'd recommend taking a look at the `forward` method of the [`GPT2DoubleHeadsModel`](https://github.com/huggingface/transformers/blob/master/transformers/modeling_gpt2.py#L642) to understand how the `transformer` is managed, and to take a look at the [loop inside the `GPT2Model`](https://github.com/huggingface/transformers/blob/master/transformers/modeling_gpt2.py#L451) which you will probably have to replicate a little to achieve what you want to do.<|||||>Hello, Thank you very much for your reply. So if I am understanding this correctly, the part of the loop inside GPT2 model below is what I am supposed to use to generate output based on a given hidden state, am I correct? ``` outputs = block(hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i]) ``` Also, I would like to clarify what ‘block’ actually refers to in this code. Is a ‘block’ the same as a GPT-2 ‘layer’? It’s not very clear to me what `block[1]`, `block[2]`, etc. represent. Thank you again,<|||||>Yes.<|||||>Hello, What exactly is the `block`? Does `block[0]` refer to the first GPT-2 layer, `block[1]` to the second GPT-2 layer, and so on? How can I utilize `block` to get access to the uppermost output layer of GPT-2? Thank you,<|||||>Yes, `block` refers to the GPT-2 layers. `block[0]` is the first layer, ... `block[-1]` is the last layer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
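To make the discussion above concrete, here is a rough sketch of running the first n blocks of one GPT2DoubleHeadsModel and the remaining blocks of another. It assumes both models share the same configuration, and it deliberately ignores attention masks, `layer_past`, head masks and the embedding dropout, so it illustrates the loop rather than being a drop-in replacement; the exact block signature may differ between library versions.

```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model_a = GPT2DoubleHeadsModel.from_pretrained("gpt2")
model_b = GPT2DoubleHeadsModel.from_pretrained("gpt2")

n = 6  # hand over after model A's n-th block
input_ids = torch.tensor([tokenizer.encode("Let's hand over hidden states")])
position_ids = torch.arange(input_ids.size(-1)).unsqueeze(0)

with torch.no_grad():
    # Embed with model A and run its first n blocks.
    hidden = model_a.transformer.wte(input_ids) + model_a.transformer.wpe(position_ids)
    for block in model_a.transformer.h[:n]:
        hidden = block(hidden)[0]
    # Continue through model B's remaining blocks and its output head.
    for block in model_b.transformer.h[n:]:
        hidden = block(hidden)[0]
    hidden = model_b.transformer.ln_f(hidden)
    lm_logits = model_b.lm_head(hidden)
```

A loss could then be computed from `lm_logits` against shifted labels with, for example, `torch.nn.CrossEntropyLoss`.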
transformers
1,773
closed
[WIP] BertAbs summarization
This PR builds on the encoder-decoder mechanism to do abstractive summarization. Contributions: - A BeamSearch class that takes any `PreTrainedEncoderDecoder` as an input; - A script `run_summarization.py` that allows pre-training the model and generating summaries. Note that to save the checkpoints I had to add a parameter in the `save_pretrained_model` method, but I am not sure this is the best solution.
11-08-2019 14:56:19
11-08-2019 14:56:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=h1) Report > Merging [#1773](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **increase** coverage by `1.06%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1773/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1773 +/- ## ========================================== + Coverage 82.67% 83.74% +1.06% ========================================== Files 111 108 -3 Lines 16162 15749 -413 ========================================== - Hits 13362 13189 -173 + Misses 2800 2560 -240 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.7% <100%> (+0.02%)` | :arrow_up: | | [transformers/hf\_api.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2hmX2FwaS5weQ==) | `96.87% <0%> (-0.63%)` | :arrow_down: | | [transformers/tests/modeling\_bert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2JlcnRfdGVzdC5weQ==) | `96.5% <0%> (-0.58%)` | :arrow_down: | | [transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `74.23% <0%> (-0.31%)` | :arrow_down: | | [transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9vcGVuYWkucHk=) | `81.81% <0%> (-0.3%)` | :arrow_down: | | [transformers/tests/modeling\_openai\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX29wZW5haV90ZXN0LnB5) | `93.2% <0%> (-0.2%)` | :arrow_down: | | [transformers/tests/modeling\_xlnet\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3hsbmV0X3Rlc3QucHk=) | `94.68% <0%> (-0.2%)` | :arrow_down: | | [transformers/tests/modeling\_albert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2FsYmVydF90ZXN0LnB5) | `95.08% <0%> (-0.16%)` | :arrow_down: | | [transformers/tests/modeling\_gpt2\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2dwdDJfdGVzdC5weQ==) | `94.01% <0%> (-0.15%)` | :arrow_down: | | [transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG0ucHk=) | `83.2% <0%> (-0.14%)` | :arrow_down: | | ... and [40 more](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=footer). Last update [0cb1638...6fb9900](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@rlouf Thanks for the the PR! I'm trying to run run_summarization.py with the following command, but got an error. Any idea why? I have unzipped the cnn and dailymail folders in my data directory. ``` python examples/run_summarization.py \ --data_dir /data/home/hlu/notebooks/summarization/data \ --output_dir ./summarization_output \ --do_train True \ --model_name_or_path bert-base-cased \ --num_train_epochs 1 \ ``` W1108 21:29:27.643532 139837444753152 run_summarization.py:488] Process rank: 0, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False I1108 21:29:27.643716 139837444753152 run_summarization.py:491] Training/evaluation parameters Namespace(data_dir='/data/home/hlu/notebooks/summarization/data', device=device(type='cuda'), do_evaluate=False, do_overwrite_output_dir=False, do_train=True, gradient_accumulation_steps=1, max_grad_norm=1.0, max_steps=-1, model_name_or_path='bert-base-cased', model_type='bert', n_gpu=1, num_train_epochs=1, output_dir='./summarization_output', per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, seed=42, to_cpu=False) I1108 21:30:40.805522 139837444753152 run_summarization.py:198] ***** Running training ***** I1108 21:30:40.805693 139837444753152 run_summarization.py:199] Num examples = 312085 I1108 21:30:40.806245 139837444753152 run_summarization.py:200] Num Epochs = 1 I1108 21:30:40.806311 139837444753152 run_summarization.py:201] Instantaneous batch size per GPU = 4 I1108 21:30:40.806385 139837444753152 run_summarization.py:204] Total train batch size (w. 
parallel, distributed & accumulation) = 4 I1108 21:30:40.806446 139837444753152 run_summarization.py:207] Gradient Accumulation steps = 1 I1108 21:30:40.806501 139837444753152 run_summarization.py:208] Total optimization steps = 78022 Traceback (most recent call last): File "examples/run_summarization.py", line 545, in <module> main() File "examples/run_summarization.py", line 497, in main global_step, tr_loss = train(args, model, tokenizer) File "examples/run_summarization.py", line 236, in train decoder_lm_labels=lm_labels, File "/data/anaconda/envs/nlp_gpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/data/home/hlu/notebooks/huggingface/transformers/transformers/modeling_encoder_decoder.py", line 240, in forward decoder_outputs = self.decoder(decoder_input_ids, **kwargs_decoder) File "/data/anaconda/envs/nlp_gpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/data/home/hlu/notebooks/huggingface/transformers/transformers/modeling_bert.py", line 865, in forward encoder_attention_mask=encoder_attention_mask) File "/data/anaconda/envs/nlp_gpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/data/home/hlu/notebooks/huggingface/transformers/transformers/modeling_bert.py", line 677, in forward extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :] RuntimeError: expected device cuda:0 and dtype Long but got device cuda:0 and dtype Bool <|||||>Hi! Thank you for raising the issue. I think it comes from the following lines in `modeling_bert.py`: ```python causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None] extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :] ``` `causal_mask` as defined is a boolean tensor, while the `attention_mask` being passed is of type `long`. I am very confused because I never got the error when I tried locally. I will fix the issue asap. In the meantime, would you mind executing the following code in your environment and pasting the result here? ```python import platform; print("Platform", platform.platform()) import sys; print("Python", sys.version) import torch; print("PyTorch", torch.__version__) ```<|||||>@rlouf Thanks! Please see the environment info below. >>> import platform; print("Platform", platform.platform()) Platform Linux-4.15.0-1061-azure-x86_64-with-debian-stretch-sid >>> import sys; print("Python", sys.version) Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0] >>> import torch; print("Pytorch", torch.__version__) Pytorch 1.2.0<|||||>Thank you! I’m working with PyTorch 1.3.0, and they introduced type promotion in this release (https://github.com/pytorch/pytorch/releases) which surely explains why it worked for me but not for you. I will work on a fix this week, in the meantime you can simply convert causal mask manually to long, or upgrade to pytorch 1.3 if that is possible. Edit: I pushed the changes. Let me know if there still is an issue.<|||||>Here is the latest on summarization: - I added utilities to load the CNN/DailyMail dataset, to separate stories from summaries. The utilities are tested; - I wrapped the BertAbs model along with its configuration to be able to use it with the library's usual API `BertAbs.from_pretrained`. 
I uploaded the finetuned weights and configuration on S3 and tested that the model loads properly; - I also wrapped the sequence generation so we can generate summaries from any text using Beam Search; - I removed my previous attempt at Beam Search and saved it for later; - I added a small fix in `modeling_bert.py` for users who use PyTorch v < 1.3.0. We convert the causal mask explicitly from `bool` to `long` since type promotion was only introduced in PyTorch 1.3 - I added ROUGE evaluation and am able to reproduce the authors' results on CNN/DailyMail - [ ] revert changes in `modeling_encoderdecoder.py` I squashed my commits and rebased on the repo's master branch. The docs are updated. Good to go on my side.
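For readers hitting the same dtype error on PyTorch < 1.3, here is a small standalone illustration of the cast mentioned above. The shapes are toy values and the variable names only mirror the snippet from modeling_bert.py; this is not the library code itself.

```python
import torch

batch_size, seq_length = 2, 5
attention_mask = torch.ones(batch_size, seq_length, dtype=torch.long)
seq_ids = torch.arange(seq_length)

# Boolean lower-triangular causal mask.
causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None]

# On PyTorch 1.2 the bool * long product below raises a dtype error,
# so cast the causal mask explicitly before combining the masks.
causal_mask = causal_mask.to(attention_mask.dtype)
extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
```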
transformers
1,772
closed
Fix run_squad.py
The `run_squad.py` was looking in the output directory for `config.json` when really it was one level lower in the checkpoint directories.
11-08-2019 14:34:23
11-08-2019 14:34:23
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
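A short sketch of the behaviour this fix targets, assuming the usual `checkpoint-<step>` layout that the example scripts write; the output directory path below is a placeholder. Each checkpoint's `config.json` lives one level below the output directory, not in the output directory itself.

```python
import glob
import os

output_dir = "./squad_output"  # placeholder

checkpoints = sorted(
    glob.glob(os.path.join(output_dir, "checkpoint-*")),
    key=lambda path: int(path.rstrip("/").split("-")[-1]),
)
for checkpoint in checkpoints:
    # The per-checkpoint config.json sits inside each checkpoint sub-directory.
    print(checkpoint, os.path.exists(os.path.join(checkpoint, "config.json")))
```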
transformers
1,771
closed
Error when Fine-tuning XLM on SQuAD
## 🐛 Bug <!-- Important information --> Model I am using (XLM): Language I am using the model on (English and Chinese): The problem arise when using: [CUDA_VISIBLE_DEVICES=2 python run_squad.py --model_type xlm --model_name_or_path xlm-mlm-tlm-xnli15-1024 --do_train --do_eval --train_file $SQUAD_DIR/Crosslingual_QZH_train.json --predict_file $SQUAD_DIR/Crosslingual_QZH_valid.json --per_gpu_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 100 --max_seq_length 384 --doc_stride 128 --eval_all_checkpoints --save_steps 50 --evaluate_during_training --output_dir /home/weihua/Sqad/transformers/xlm_out/ ] error: Traceback (most recent call last): File "run_squad.py", line 569, in <module> main() File "run_squad.py", line 515, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_squad.py", line 179, in train results = evaluate(args, model, tokenizer) File "run_squad.py", line 275, in evaluate args.version_2_with_negative, tokenizer, args.verbose_logging) File "/home/weihua/Sqad/transformers/examples/utils_squad.py", line 814, in write_predictions_extended final_text = get_final_text(tok_text, orig_text, tokenizer.do_lower_case, AttributeError: 'XLMTokenizer' object has no attribute 'do_lower_case' ![WeChat Image_20191108182243](https://user-images.githubusercontent.com/43492059/68469281-d0188e00-0254-11ea-928a-8ab4a7140838.png) ![WeChat Image_20191108182248](https://user-images.githubusercontent.com/43492059/68469288-d3ac1500-0254-11ea-9116-9f0dd15b65cd.png) How can I deal with this problem?
11-08-2019 10:26:29
11-08-2019 10:26:29
> ## Bug > Model I am using (XLM): > > Language I am using the model on (English and Chinese): > > The problem arise when using: > > [CUDA_VISIBLE_DEVICES=2 python run_squad.py --model_type xlm --model_name_or_path xlm-mlm-tlm-xnli15-1024 --do_train --do_eval --train_file $SQUAD_DIR/Crosslingual_QZH_train.json --predict_file $SQUAD_DIR/Crosslingual_QZH_valid.json --per_gpu_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 100 --max_seq_length 384 --doc_stride 128 --eval_all_checkpoints --save_steps 50 --evaluate_during_training --output_dir /home/weihua/Sqad/transformers/xlm_out/ ] > > error: > Traceback (most recent call last): > File "run_squad.py", line 569, in > main() > File "run_squad.py", line 515, in main > global_step, tr_loss = train(args, train_dataset, model, tokenizer) > File "run_squad.py", line 179, in train > results = evaluate(args, model, tokenizer) > File "run_squad.py", line 275, in evaluate > args.version_2_with_negative, tokenizer, args.verbose_logging) > File "/home/weihua/Sqad/transformers/examples/utils_squad.py", line 814, in write_predictions_extended > final_text = get_final_text(tok_text, orig_text, tokenizer.do_lower_case, > AttributeError: 'XLMTokenizer' object has no attribute 'do_lower_case' > > ![WeChat Image_20191108182243](https://user-images.githubusercontent.com/43492059/68469281-d0188e00-0254-11ea-928a-8ab4a7140838.png) > ![WeChat Image_20191108182248](https://user-images.githubusercontent.com/43492059/68469288-d3ac1500-0254-11ea-9116-9f0dd15b65cd.png) > > How can I deal with this problem? Look [here](https://github.com/huggingface/transformers/blob/master/transformers/tokenization_xlm.py). This is the source code of the XLM tokenizer. In the `__init__()` method, you can see that the _do_lower_case_ parameter doesn't exist, but there is a **do_lowercase_and_remove_accent** parameter. That said, here is a solution to your problem (a workaround). If you look at the source code of [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py) at lines 487-489, you can see that this is where the tokenizer (which differs according to the model you chose - in your case XLM) is loaded, passing the _do_lower_case_ parameter. But as I said before, the tokenizer for the XLM model accepts a different parameter. So, you can insert an if/else statement (in pseudo-code): ``` if args.model_type == 'XLM': tokenizer = tokenizer_class.from_pretrained(..., do_lowercase_and_remove_accent=args.do_lower_case, ...) else: tokenizer = tokenizer_class.from_pretrained(..., do_lower_case=args.do_lower_case, ...) ``` N.B. if you want to be more confident in this solution, please check the source code of `tokenization_xlm.py` in your HuggingFace version to verify that it contains the _do_lowercase_and_remove_accent_ parameter and not a _do_lower_case_ parameter. Keep us updated on your problem! Good luck<|||||>Actually, I just replaced line 814 in utils_squad.py with 'final_text = get_final_text(tok_text, orig_text, verbose_logging)'. It means I no longer pass 'tokenizer.do_lower_case' to this function, because I found that in tokenization_xlm.py the parameter 'do_lowercase_and_remove_accent' is already set. Although this allows my program to run, it does not give good results, so I suspect that my change is wrong. I also tried to follow your suggestion above, but the results did not improve. This is the result of the 700th step. 
![WeChat Image_20191108204610](https://user-images.githubusercontent.com/43492059/68477603-d6b10080-0268-11ea-947f-995b157da8d3.png) The exact and f1 are very low. I used bert-base-multilingual-cased to do SQuAD with the same data set before, and the exact and f1 increased very fast. At around the 200th step, the f1 can reach 40+, and exact is about 30. Is something wrong with my parameter settings that causes these two scores to rise so slowly when I use XLM? Or does it simply take longer to fine-tune on SQuAD with XLM than with BERT? This is my first time doing SQuAD. I sincerely hope that you can give me some advice. Thank you.<|||||>I also tried to train and test on the official dataset, that is ![WeChat Image_20191109175459](https://user-images.githubusercontent.com/43492059/68526753-13890000-031a-11ea-9f38-e9841bb0ac42.png) in the examples document, but the situation was the same as what I got on my own dataset. At the same time, I find that when I use the XLM model with a learning rate of 3e-5, the loss decreases very slowly. Is this due to inappropriate parameter settings? Or is it because the loss function of the model needs to be changed?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
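A runnable version of the workaround sketched above, assuming a transformers version whose `XLMTokenizer` exposes the `do_lowercase_and_remove_accent` flag:

```python
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained(
    "xlm-mlm-tlm-xnli15-1024",
    do_lowercase_and_remove_accent=True,  # XLM's counterpart to do_lower_case
)
print(tokenizer.tokenize("Hello World!"))
```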
transformers
1,770
closed
Only init encoder_attention_mask if stack is decoder
We currently initialize `encoder_attention_mask` when it is `None`, whether the stack is that of an encoder or a decoder. Since this may lead to bugs that are difficult to track down later, I added a condition that assesses whether the current stack is a decoder.
11-08-2019 10:23:10
11-08-2019 10:23:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=h1) Report > Merging [#1770](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c542df7e554a2014051dd09becf60f157fed524?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `90%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1770/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1770 +/- ## ========================================== + Coverage 84.03% 84.03% +<.01% ========================================== Files 94 94 Lines 14032 14034 +2 ========================================== + Hits 11792 11794 +2 Misses 2240 2240 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1770/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.82% <90%> (+0.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=footer). Last update [1c542df...cd286c2](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok great @rlouf!<|||||>I wonder why encoder_attention_mask use decoder inputs shape when it's not specified. `encoder_attention_mask = torch.ones(input_shape, device=device)`<|||||>Indeed, wanna have a look @rlouf?<|||||>That's a very good point, thanks @efeiefei ! See #2107
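A simplified sketch of the guarded initialization this PR describes; the real change lives in `BertModel.forward`, and the helper below only illustrates the shape of the condition:

```python
import torch

def default_encoder_attention_mask(is_decoder, encoder_hidden_states, encoder_attention_mask):
    # Only build a default all-ones mask when the stack is a decoder that
    # actually attends to encoder hidden states.
    if is_decoder and encoder_hidden_states is not None and encoder_attention_mask is None:
        batch_size, src_len = encoder_hidden_states.shape[:2]
        encoder_attention_mask = torch.ones(batch_size, src_len, device=encoder_hidden_states.device)
    return encoder_attention_mask

mask = default_encoder_attention_mask(True, torch.zeros(2, 7, 768), None)
print(mask.shape)  # torch.Size([2, 7])
```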
transformers
1,769
closed
[XLM-R] by Facebook AI Research
# 🌟New model addition ## Model description Yesterday, Facebook has released _open source_ its new NLG model called **XLM-R** (XLM-RoBERTa) on [arXiv](https://arxiv.org/abs/1911.02116). This model uses self-supervised training techniques to achieve state-of-the-art performance in **cross-lingual understanding**, a task in which a model is trained in one language and then used with other languages without additional training data. Our model improves upon previous multilingual approaches by incorporating **more training data and languages** — including so-called low-resource languages, which lack extensive labeled and unlabeled data sets. ## Open Source status * [ ] the model implementation is available: [here](https://github.com/pytorch/fairseq/blob/master/fairseq/models/roberta/model.py) under the **XLMRModel** Python class (row 198) * [ ] the model weights are available: **Yes**, [here](https://github.com/pytorch/fairseq/blob/master/examples/xlmr/README.md) more details * [ ] who are the authors: **FacebookAI Research** (Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov) ## Additional context Facebook says these two sentences about this new model in their [blog](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/?__xts__[0]=68.ARAEwOOYcfFnZxSeyb88a9qwrCTeTF8aO869NMP3lKL7Xp6aYnOGmS1j9jO6wYt89qDPAiHkpVboO3nuo_GhLR2GM8uVTx2q8m3HFehnIkQjn96QtyD33FxpHipTbT4hC1bohM6Y8tNQirCLKZ28VTRiHBbEpgEP1cr9NuwRUkyVfS0KcQpXv0GFLRFQt4MJvU-nnmrMbBN56MXQ7pi3IGHLVmanXhJbgQFJu7Utq80xa8EGVR3b5SqAZgN6yL7jVwKkcYpnHVdgdJJ6dLdXRm46xK03XbxxrL8ghktmSxXyzhykfPEo_akj0u06syf3HYBsZReDdF178xiEZIgn_2VXIA&__tn__=-UK-R): > XLM-R represents an important step toward our vision of providing the best possible experience on our platforms for everyone, regardless of what language they speak > We hope to improve the performance of multilingual models created by the research community, particularly systems that use self-supervised training methods to better understand low-resource languages. > XLM-R has been trained on **2.5T of data** across **100 languages** data filtered from Common Crawl
11-08-2019 08:07:35
11-08-2019 08:07:35
cc @aconneau 😬<|||||>is there any update in the XLM-R model? <|||||>Let me know if you need some-help in porting the xlm-r models to HF.<|||||>I think that's maybe not the correct way, but I adjusted the `convert_roberta_original_pytorch_checkpoint_to_pytorch.py` script to convert the `fairseq` model into a `transformers` compatible model file. I used the `sentencepiece` BPE loader and adjusted the vocab size. Then I used the `CamemBERT` model class to perform some evaluations on NER. But the result is not really good (I tried to replicate the CoNLL-2003 for English). So I guess it is not as simple as this first attempt 😅 --- Gist for the conversion script is [here](https://gist.github.com/stefan-it/1eaf537a853aee6ab8fe5a425bf09ed7). The `CamemBERT` model configuration looks pretty much the same as XLM-R large?!<|||||>> I think that's maybe not the correct way, but I adjusted the `convert_roberta_original_pytorch_checkpoint_to_pytorch.py` script to convert the `fairseq` model into a `transformers` compatible model file. I used the `sentencepiece` BPE loader and adjusted the vocab size. > > Then I used the `CamemBERT` model class to perform some evaluations on NER. But the result is not really good (I tried to replicate the CoNLL-2003 for English). > > So I guess it is not as simple as this first attempt 😅 > > Gist for the conversion script is [here](https://gist.github.com/stefan-it/1eaf537a853aee6ab8fe5a425bf09ed7). > > The `CamemBERT` model configuration looks pretty much the same as XLM-R large?! Hi @stefan-it, do you have any update for your attempt?<|||||>The final models have been released today 😍 https://github.com/pytorch/fairseq/tree/master/examples/xlmr So I'm going to try the conversion with these models tomorrow/in the next days :)<|||||>I think the model conversion is done correctly. But: the `CamembertTokenizer` implementation can't be used, because it adds some special tokens. I had to modify the tokenizer to match the output of the `fairseq` tokenization/`.encode()` method :) I'll report back some results on NER later. **update**: I could achieve 90.41% on CoNLL-2003 (English), paper reports 92.74 (using Flair). **update 2**: Using the `run_ner.py` example (incl. some hours of tokenization debugging...): 96.22 (dev) and 91.91 (test).<|||||>Btw I was using the XLM-R v0 checkpoints in a project I'm working on and the v0 checkpoints worked slightly better than the checkpoints added today. Is it possible to also add the older checkpoints?<|||||>I think it's the best solution to offer both checkpoint versions! In my opinion, the _ideal case_ is that, as like to other models in Transformers, you can select which version of XLM-R checkpoints to use, e.g. ``` > from transformers import XLMRModel > base_model = XLMRModel.from_pretrained('xlmr-base') # 250M parameters > large_model = XLMRModel.from_pretrained('xlmr-large') # 560M parameters ``` > Btw I was using the XLM-R v0 checkpoints in a project I'm working on and the v0 checkpoints worked slightly better than the checkpoints added today. Is it possible to also add the older checkpoints?<|||||>Btw using XLM-R I encounter this issue: Batch size affecting output. #2401 This is really annoying and makes it hard to use the model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@ricardorei Did you happen to successfully use the XLM-R model ? 
I'm trying to see how this model can be used as pretrained step for NMT tasks, I tried raw version from XLM facebook repo and ran into multiple OOM issues. The best suggestion so far I got is to try smaller version of Fairseq xlmr (base) on p3dn.24xlarge instance or the Google TPU Pytorch way. Thanks ! <|||||>@mohammedayub44 I am using the base model which runs well in a 12GB GPU with batch size of 8. Depending on your implementation and task you can run even bigger batches (16, 24 for example). And I am also using the version directly from Fairseq, because you can load the v0 checkpoint. The variability in my prediction with different batch sizes I could never figure out. Probably some floating-point precision issues going on under the hood. It doesn't change overall performance but it is annoying... <|||||>BTW, I am using the TF variant from https://huggingface.co/jplu/tf-xlm-roberta-base and https://huggingface.co/jplu/tf-xlm-roberta-large . I have successfully finetuned even the large model on a 16GB GPU and it was performing substantially better than the base model (on Czech Q&A).<|||||>@ricardorei Thanks for the confirmation. I'm okay with v0 checkpoints, I just need to check if the model can be fine-tuned for NMT. I'm guessing you're fine tuning for Classification tasks. If you could share the prepare and train commands you are using. It would be easier than digging deep into every fairseq hyperparamter. Thanks ! <|||||>@foxik Is TF variant more suitable for fine-tuning. Any particular preprocessing steps you carried out for fine-tuning. If you can share them, I can map the same for NMT task. Thanks ! <|||||>@mohammedayub44 Yes I was using it for classification/regression. In your case, you need the encoder and decoder part which would take a lot more space. I would suggest that you share parameters between you encoder and decoder. I know that, with the right hyperparameter, you can achieve good results by sharing the parameters between your encoder and decoder -> [A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning](https://arxiv.org/pdf/1906.06253.pdf) In terms of hyperparameters that I am using, they are very simple. I freeze the encoder for 1 epoch while fine-tuning the classification head and then I fine-tune the entire model. My classification-head has a learning rate of 0.00003 while XLM-R has 0.00001. The optimizer is a standard Adam. This combination of gradual unfreezing with discriminative learning rates works well in my task.<|||||>@ricardorei Thanks for sharing the paper. Some interesting results there. Any hints on how I can setup both encoder and decoder of XLM-R and share the parameters using HuggingFace library. I could only find [LM fine-tuning](https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb) examples and notebook file. Nothing on NMT based fine-tuning.
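For anyone following this thread, a short sketch of loading the released XLM-R weights through fairseq's hub interface (the route several comments above took). It assumes fairseq and its dependencies are installed, and the hub entry-point names may differ between fairseq versions; it will download a large checkpoint.

```python
import torch

xlmr = torch.hub.load("pytorch/fairseq", "xlmr.base")
xlmr.eval()

tokens = xlmr.encode("Hello world!")      # sentencepiece token ids
features = xlmr.extract_features(tokens)  # (1, seq_len, hidden_size)
print(features.shape)
```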
transformers
1,768
closed
BERT: Uncased vocabulary has 30,552 tokens whereas cased has 28,996 tokens. Why this difference?
## ❓ Questions & Help I was expecting cased vocabulary to be larger than uncased as it needs to include both tokens with capitalizations and without. But, to the contrary, I found the cased vocabulary to be smaller. May I know reason for this?
11-08-2019 07:03:08
11-08-2019 07:03:08
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,767
closed
GPT2 - XL
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Language I am using the model on (English, Chinese....): The problem arise when using: * [X ] the official example scripts: (give details) * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. In the documentation, it's given how to load GPT2-XL. But I'm getting an error when I try to load is GPT2 XL available as of now or it would take some time? Error: ``` >>> model = GPT2DoubleHeadsModel.from_pretrained('gpt2-xl') Model name 'gpt2-xl' was not found in model name list (gpt2-large, gpt2, gpt2-medium). We assumed 'gpt2-xl' was a path or url but couldn't find any file associated to this path or url. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 492, in from_pretrained **kwargs File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 194, in from_pretrained raise e File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 180, in from_pretrained resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies) File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/file_utils.py", line 124, in cached_path raise EnvironmentError("file {} not found".format(url_or_filename)) OSError: file gpt2-xl not found ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS:Ubuntu * Python version:3.7 * PyTorch version: 1.3.1 * PyTorch Transformers version (or branch): * Using GPU ? yes * Distributed of parallel setup ? no * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
11-08-2019 05:03:41
11-08-2019 05:03:41
> ## Bug > Model I am using (Bert, XLNet....): > > Language I am using the model on (English, Chinese....): > > The problem arise when using: > > * [X ] the official example scripts: (give details) > * [ ] my own modified scripts: (give details) > > The tasks I am working on is: > > * [ ] an official GLUE/SQUaD task: (give the name) > * [x] my own task or dataset: (give details) > > ## To Reproduce > Steps to reproduce the behavior: > > 1. In the documentation, it's given how to load GPT2-XL. But I'm getting an error when I try to load is GPT2 XL available as of now or it would take some time? > > Error: > > ``` > >>> model = GPT2DoubleHeadsModel.from_pretrained('gpt2-xl') > Model name 'gpt2-xl' was not found in model name list (gpt2-large, gpt2, gpt2-medium). We assumed 'gpt2-xl' was a path or url but couldn't find any file associated to this path or url. > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 492, in from_pretrained > **kwargs > File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 194, in from_pretrained > raise e > File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 180, in from_pretrained > resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies) File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/file_utils.py", line 124, in cached_path > raise EnvironmentError("file {} not found".format(url_or_filename)) > OSError: file gpt2-xl not found > ``` > > ## Expected behavior > ## Environment > * OS:Ubuntu > * Python version:3.7 > * PyTorch version: 1.3.1 > * PyTorch Transformers version (or branch): > * Using GPU ? yes > * Distributed of parallel setup ? no > * Any other relevant information: > > ## Additional context This "issue" you have opened is the same as the #1747 . It's not a bug, but a **mismatch between version on Pypi and source GitHub**. As stated by @LysandreJik 2 days ago , they haven't released the version with OpenAI GPT-2-xl yet on Pypi. Please, install Huggingface from source with the following command: `pip install git+https://github.com/huggingface/transformers`<|||||>@TheEdoardo93 sure thanks
transformers
1,766
closed
Added additional training utils for run_squad.py
Added different LR schedules and hyperparameters for beta1 and beta2 for AdamW
11-07-2019 22:42:14
11-07-2019 22:42:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=h1) Report > Merging [#1766](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c542df7e554a2014051dd09becf60f157fed524?src=pr&el=desc) will **decrease** coverage by `1.38%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1766/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1766 +/- ## ========================================== - Coverage 84.03% 82.65% -1.39% ========================================== Files 94 94 Lines 14032 14032 ========================================== - Hits 11792 11598 -194 - Misses 2240 2434 +194 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `9.85% <0%> (-83.1%)` | :arrow_down: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `79.78% <0%> (-17.03%)` | :arrow_down: | | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `59.41% <0%> (-12.36%)` | :arrow_down: | | [transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `71.18% <0%> (-2.44%)` | :arrow_down: | | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `94.24% <0%> (-2.22%)` | :arrow_down: | | [transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.66% <0%> (-1.34%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=footer). Last update [1c542df...5f1eca9](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok with that. Merging. cc @LysandreJik and his refactoring of the `run_squad` example.<|||||>Actually seems like I can't push a merge commit on your branch to fix conflict. Will close the PR later unless you can fix it.
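A hedged sketch of the kind of knobs this PR exposes, namely custom Adam betas plus an alternative LR schedule; the model, hyperparameter values and schedule choice below are placeholders rather than the script's defaults.

```python
import torch
from transformers import AdamW, BertForQuestionAnswering

model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
optimizer = AdamW(model.parameters(), lr=3e-5, betas=(0.9, 0.98), eps=1e-6)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

# Inside the training loop one would then call:
# loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```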
transformers
1,765
closed
Fix run_bertology.py
Make imports and `args.overwrite_cache` match `run_glue.py`.
11-07-2019 22:28:08
11-07-2019 22:28:08
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=h1) Report > Merging [#1765](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c542df7e554a2014051dd09becf60f157fed524?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1765/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1765 +/- ## ======================================= Coverage 84.03% 84.03% ======================================= Files 94 94 Lines 14032 14032 ======================================= Hits 11792 11792 Misses 2240 2240 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=footer). Last update [1c542df...c199d96](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>That looks great, thanks @adrianbg !
transformers
1,764
closed
Bug-fix: Roberta Embeddings Not Masked
## Summary I replace the code that makes the position ids with logic closer to the original fairseq `make_positions` function. It wasn't clear to me what to do in the event that the embeddings are passed in directly through `inputs_embeds`, so I resorted to the old methodology and just generate a position id for every input. ## Closes #1761
11-07-2019 18:05:01
11-07-2019 18:05:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=h1) Report > Merging [#1764](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a80778f40e4738071b5d01420a0328bb00cdb356?src=pr&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1764/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1764 +/- ## ========================================== + Coverage 79.82% 79.84% +0.02% ========================================== Files 131 131 Lines 19496 19519 +23 ========================================== + Hits 15562 15585 +23 Misses 3934 3934 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1764/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `72.57% <100%> (+0.8%)` | :arrow_up: | | [transformers/tests/modeling\_roberta\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1764/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3JvYmVydGFfdGVzdC5weQ==) | `78.14% <100%> (+2.95%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=footer). Last update [a80778f...3e52915](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks! Any way you could do this PR with just the relevant changes (no whitespace changes, no extraneous commits)?<|||||>Hi, I have pushed just the relevant changes with a single commit. Many thanks, Dom<|||||>Thanks! We need to update the TF model as well. (this might be the reason behind the failing unit test, btw) Will do it unless you have the bandwidth (and the inclination) to do it yourself @DomHudson <|||||>@DomHudson Fixed a bug introduced by this PR's commit + rebased on top of current master. Please note that for most models, we expect users to define `attention_mask`s themselves. If I'm not mistaken, if a user defined an attention_mask that correctly excluded padding, he would not need the padding-aware position_ids from this PR.<|||||>LGTM merging
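As a complement to the note above about user-supplied masks, here is a small sketch of building an `attention_mask` that excludes padding; the model and tokenizer names are just the standard pretrained checkpoints.

```python
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")

batch = [tokenizer.encode("Hello world"), tokenizer.encode("A longer second sentence")]
max_len = max(len(ids) for ids in batch)
pad_id = tokenizer.pad_token_id

# Pad every sequence to the same length; mark real tokens with 1, padding with 0.
input_ids = torch.tensor([ids + [pad_id] * (max_len - len(ids)) for ids in batch])
attention_mask = (input_ids != pad_id).long()

outputs = model(input_ids, attention_mask=attention_mask)
```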
transformers
1,763
closed
Fixed training for TF XLNet
This PR makes the following changes and fully fixes `model.fit()` training for XLNet. It now works, and was tested with a slightly modified `run_tf_glue.py`, and also tested with XLA, AMP and tf.distribute - Fix dtype error with `input_mask` and `attention_mask` - `rel_attn_core()` does not work properly in non-eager mode, resulting in shape errors. `tf.shape` is now used to obtain the shape of `ac` instead of calling `ac.shape`, and training now works in non-eager mode. @thomwolf
11-07-2019 17:51:28
11-07-2019 17:51:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=h1) Report > Merging [#1763](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d49c43ff789d309e688fb7b252511e9e618e46db?src=pr&el=desc) will **decrease** coverage by `0.08%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1763/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1763 +/- ## ========================================== - Coverage 84.11% 84.03% -0.09% ========================================== Files 105 94 -11 Lines 15545 14032 -1513 ========================================== - Hits 13076 11792 -1284 + Misses 2469 2240 -229 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <100%> (-0.37%)` | :arrow_down: | | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (-2.19%)` | :arrow_down: | | [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.64% <0%> (-1.22%)` | :arrow_down: | | [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.2% <0%> (-0.71%)` | :arrow_down: | | [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `58.82% <0%> (-0.64%)` | :arrow_down: | | [transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <0%> (-0.53%)` | :arrow_down: | | [transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <0%> (-0.33%)` | :arrow_down: | | [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.4% <0%> (-0.33%)` | :arrow_down: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (-0.28%)` | :arrow_down: | | ... and [37 more](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=footer). 
Last update [d49c43f...41e0859](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This looks good to me, thank you @tlkh !<|||||>I see that the changes were already incorporated, so closing this PR.
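A tiny standalone illustration of why the `tf.shape` change matters: inside a traced `tf.function` (non-eager mode), a tensor's static `.shape` can contain `None` for dynamic dimensions, while `tf.shape()` always yields the runtime value. This shows the general TensorFlow behaviour, not the XLNet code itself.

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[None, None], dtype=tf.float32)])
def describe(x):
    static_len = x.shape[1]       # None while tracing a dynamic dimension
    dynamic_len = tf.shape(x)[1]  # concrete scalar tensor at run time
    tf.print("static:", -1 if static_len is None else static_len, "dynamic:", dynamic_len)
    return dynamic_len

describe(tf.zeros((2, 7)))  # prints: static: -1 dynamic: 7
```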
transformers
1,762
closed
Perplexity for (not-stateful) Transformer - Why is it still fair to compare to RNN?
Greetings, It is clear to me how to compute perplexity for RNNs, as RNNs are stateful models. In general, given a very long document, I believe we need to: 1. chunk the document into a sequence of chunks, compute the cross-entropy loss for each chunk, then take the average over all chunks and take the exponential. Because RNNs are stateful, we can use the last RNN hidden state from the previous chunk to initialize the hidden state of the RNN that handles the next chunk. Because of this, I think an RNN computes the "right" perplexity for a long document. Meanwhile, Transformers are not stateful models. For this reason there is no such thing as "transferring" the hidden state from the previous chunk to the next chunk, I guess. For this reason I think Transformers are not really computing the perplexity of a long document; they compute something different. So why is it still fair to compare the two models on perplexity for long documents? Plus, if I create a non-stateful RNN (e.g. in Keras I can set stateful=False), will the perplexity the non-stateful model computes without "transferring" the hidden state be the right perplexity we need? Is it comparable to the stateful RNN? Thank you!
11-07-2019 17:35:22
11-07-2019 17:35:22
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
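A sketch of the chunked perplexity computation described in the question, for a stateless fixed-context model; token counts are tracked so the average is taken over predicted tokens rather than over chunks. It assumes a transformers version where `GPT2LMHeadModel` accepts `labels=` and returns the mean cross-entropy as its first output. Note that each chunk starts with no context from the previous one, which is exactly the limitation the question raises.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("Some long document ... " * 200)
chunk_size = 512

nll_sum, n_tokens = 0.0, 0
with torch.no_grad():
    for start in range(0, len(ids), chunk_size):
        chunk = torch.tensor([ids[start:start + chunk_size]])
        if chunk.size(1) < 2:
            continue  # nothing to predict in a one-token chunk
        loss = model(chunk, labels=chunk)[0]          # mean NLL per predicted token
        nll_sum += loss.item() * (chunk.size(1) - 1)  # undo the mean
        n_tokens += chunk.size(1) - 1

print("perplexity:", math.exp(nll_sum / n_tokens))
```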
transformers
1,761
closed
Roberta Positional Embeddings Not Masked
## Bug Hi, I think for the RoBERTa model, the positional embeddings are created slightly wrong. In both this library and `fairseq` the positional embeddings are sequential integers starting from `padding_idx + 1`. However, in `fairseq.utils.make_positions` all indices of the input that correspond to the padding index are masked as they go into the positional embedding layer. ## Example ```python import torch padding_idx = 0 tensor = torch.as_tensor([ [1, 13, 54, 0, 0], [1, 55, 12, 24, 0] ]) ``` ### In Fairseq ``` from fairseq import utils utils.make_positions(tensor, padding_idx = padding_idx) >>> tensor([ [1, 2, 3, 0, 0], [1, 2, 3, 4, 0] ]) ``` The positional sequence begins at padding_idx + 1 but all occurrences of the padding index in the input tensor are unmanipulated. This means the position embeddings are only adjusted for tokens which are not padding. ### In Transformers This code comes from [modeling_roberta.py#L65](https://github.com/huggingface/transformers/blob/master/transformers/modeling_roberta.py#L65) ```python # Position numbers begin at padding_idx+1. Padding symbols are ignored. # cf. fairseq's `utils.make_positions` position_ids = torch.arange(self.padding_idx+1, seq_length+self.padding_idx+1, dtype=torch.long, device=input_ids.device) position_ids = position_ids.unsqueeze(0).expand(input_shape) ``` The output tensor of these function calls does not retain the masked tokens: ``` >>> tensor([ [1, 2, 3, 4, 5], [1, 2, 3, 4, 5] ]) ``` ## Complete Example ```python import torch def fairseq_make_positions(tensor, padding_idx): mask = tensor.ne(padding_idx).int() return ( torch.cumsum(mask, dim=1).type_as(mask) * mask ).long() + padding_idx def transformers_make_positions(input_ids, padding_idx): input_shape = input_ids.size() seq_length = input_shape[1] position_ids = torch.arange(padding_idx+1, seq_length+padding_idx+1, dtype=torch.long, device=input_ids.device) position_ids = position_ids.unsqueeze(0).expand(input_shape) return position_ids tensor = torch.as_tensor([ [1, 13, 54, 0, 0], [1, 55, 12, 24, 0] ]) print(fairseq_make_positions(tensor, padding_idx = 0)) print(transformers_make_positions(tensor, padding_idx = 0)) ``` Output: ``` # Fairseq: tensor([[1, 2, 3, 0, 0], [1, 2, 3, 4, 0]]) # Transformers: tensor([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]) ``` ## Expected behavior I expect all padding tokens in the input tensor to be masked (kept at the padding index) as they go into the positional embeddings layer, just as they are in the token embeddings layer. ## Additional context Happy to make a PR if this is agreed to be a bug, but this will mean the positional embeddings train slightly differently from the existing `RobertaEmbeddings` module.
11-07-2019 14:48:01
11-07-2019 14:48:01
I think you are right. Awesome if you can fix this in a PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Closed in #1764
transformers
1,760
closed
'RuntimeError: CUDA error' is occured when encoding text with pre-trained model on cuda
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): I am trying to use Bert encoder Language I am using the model on (English, Chinese....): English There are two problems when I use BERT for encoding text to vectors Here is Error log First, ``` File "/home/jcw/VAIRLtext/Text2Vec.py", line 192, in encode embeddings = self.encoder(padded_tokens, attention_mask=masks)[0] File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 627, in forward head_mask=head_mask) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 348, in forward layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i]) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 326, in forward attention_outputs = self.attention(hidden_states, attention_mask, head_mask) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 283, in forward self_outputs = self.self(input_tensor, attention_mask, head_mask) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 202, in forward mixed_query_layer = self.query(hidden_states) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward return F.linear(input, self.weight, self.bias) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/functional.py", line 1374, in linear output += bias RuntimeError: CUDA error: device-side assert triggered ``` Second, ``` File "/home/jcw/VAIRLtext/Text2Vec.py", line 192, in encode embeddings = self.encoder(padded_tokens, attention_mask=masks)[0] File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 627, in forward head_mask=head_mask) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 348, in forward layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i]) File 
"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 326, in forward attention_outputs = self.attention(hidden_states, attention_mask, head_mask) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 283, in forward self_outputs = self.self(input_tensor, attention_mask, head_mask) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 202, in forward mixed_query_layer = self.query(hidden_states) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward return F.linear(input, self.weight, self.bias) File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/functional.py", line 1372, in linear output = input.matmul(weight.t()) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` ``` These two errors are coming from BERT encode :( here is my code ``` class Text2Vec(object): def __init__(self, args): self.max_lens = args.max_lens self.tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased') self.encoder = self.get_encoder('bert-base-multilingual-cased', args.num_layers) self.encoder.to('cuda') self.pad_token = self.tokenizer.convert_tokens_to_ids(self.tokenizer.pad_token) def get_encoder(self, encoder_type, num_layers): model = AutoModel.from_pretrained(encoder_type) model.eval() model.encoder.layer = torch.nn.ModuleList([layer for layer in model.encoder.layer[:num_layers]]) return model def padding(self, arr, pad_token): lens = torch.LongTensor([len(a) for a in arr]) padded = torch.ones(len(arr), self.max_lens) * pad_token mask = torch.zeros(len(arr), self.max_lens) for i, a in enumerate(arr): padded[i, :lens[i]] = torch.tensor(a) mask[i, :lens[i]] = 1 return padded, mask def encode(self, text): text = sent_tokenize(text) tokens = [self.tokenizer.encode(a, add_special_tokens=True) for a in text] padded_tokens, masks = self.padding(tokens, self.pad_token) padded_tokens = padded_tokens.to('cuda') masks = masks.to('cuda') self.encoder.eval() with torch.no_grad(): embeddings = self.encoder(padded_tokens, attention_mask=masks)[0] embed = [] masks = list(torch.split(masks, 1, dim=0)) embeddings = list(torch.split(embeddings, 1, dim=0)) for embedding, mask in zip(embeddings, masks): masked_embed = embedding * mask.squeeze()[:, None] embed.append(masked_embed) return torch.cat(embed, dim=0) ``` What am I missing? please advise to me!! ## Environment * OS: ubuntu 16.04 * Python version: Python 3.7.4 * PyTorch version: 1.3.0 * PyTorch Transformers version (or branch): 2.1.1 * Using GPU ? titan-xp * Distributed of parallel setup ? not yet * Any other relevant information:
11-07-2019 12:59:24
11-07-2019 12:59:24
> * PyTorch Transformers version (or branch): how do I check PyTorch Transformers version?

In order to check the Transformers version you can insert this line in your code:

```
import transformers
print(transformers.__version__)
```
<|||||>Hi! There were a few things to change on my side to get your code to run.

1 - when doing `torch.ones(...)`, it initializes a tensor of dtype `float` while it should be of dtype `long`; I had to change the following line in Text2Vec.padding():

```py
padded = torch.ones(len(arr), self.max_lens, dtype=torch.int64) * pad_token
```

2 - it crashed on my side at first because I was on the pypi release, which has a bug related to the initialization of `token_type_ids` when they're not given to the model (they're not put on GPU). I had to install from master with the following command:

```
pip install git+https://github.com/huggingface/transformers
```

Once you've done that it should work fine!
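For completeness, a hedged sketch of the padding helper with that dtype fix applied; the truncation to `max_lens` is an extra safeguard of mine, not part of the original snippet.

```python
import torch

def padding(arr, pad_token, max_lens):
    # token ids must be of dtype long, otherwise the embedding lookup fails on GPU
    padded = torch.full((len(arr), max_lens), pad_token, dtype=torch.long)
    mask = torch.zeros(len(arr), max_lens, dtype=torch.long)
    for i, a in enumerate(arr):
        length = min(len(a), max_lens)              # truncate over-long sequences
        padded[i, :length] = torch.tensor(a[:length], dtype=torch.long)
        mask[i, :length] = 1
    return padded, mask
```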
transformers
1,759
closed
NameError: name 'DUMMY_INPUTS' is not defined
## ❓ Can someone help me with resolving this error? <!-- A clear and concise description of the question. --> ![image](https://user-images.githubusercontent.com/10863620/68379756-7771c400-0174-11ea-8d8d-7b72159c555f.png)
11-07-2019 10:08:09
11-07-2019 10:08:09
> ## Can someone help me with resolving this error? > ![image](https://user-images.githubusercontent.com/10863620/68379756-7771c400-0174-11ea-8d8d-7b72159c555f.png) Without any other information, it's very difficult to understand which is the problem with your Jupyter Notebook! **Which is the content of your directory?** Moreover, **provide the steps to reproduce this bug!** Please, link your Jupyter Notebook and **provide more information** (e.g. OS, PyTorch version, TensorFlow version, Transformers version, etc.)<|||||>@TheEdoardo93 Below are the package version details which I'm using on Windows 10 64 bit machine: jupyter 1.0.0 tensorflow 2.0.0 torch 1.3.0+cpu torchvision 0.4.1+cpu transformers 2.1.1 I have trained the model on colab GPU version, now was trying to load the pre-trained model on my local machine. Link of the notebook: https://colab.research.google.com/drive/1URz5MkjZwH621zGG_89S2KahCCBK0Se8 It's working fine on colab, but when I'm trying to resume on my local machine from loading the trained model, I'm getting this error. Let me know if any more info needed.<|||||>Same problem here, using transformers in Tensorflow 2.0.0b0. Training is ok: ``` import tensorflow as tf import tensorflow_datasets from transformers import * tf.compat.v1.enable_eager_execution() tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') data = tensorflow_datasets.load('glue/mrpc') train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc') valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc') train_dataset = train_dataset.shuffle(100).batch(32).repeat(2) valid_dataset = valid_dataset.batch(64) optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=loss, metrics=[metric]) history = model.fit(train_dataset, epochs=2, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7) model.save_pretrained('/home/rubens_gmail_com/tf') Train for 115 steps, validate for 7 steps Epoch 1/2 115/115 [==============================] - 57s 494ms/step - loss: 0.5496 - accuracy: 0.7331 - val_loss: 0.4226 - val_accuracy: 0.8235 Epoch 2/2 115/115 [==============================] - 34s 291ms/step - loss: 0.2939 - accuracy: 0.8817 - val_loss: 0.4039 - val_accuracy: 0.8456 ```` But this line of code give the following error: ``` pytorch_model = BertForSequenceClassification.from_pretrained('/home/rubens_gmail_com/tf', from_tf=True,num_labels = 8) NameError Traceback (most recent call last) <ipython-input-19-ebb1b098c156> in <module> ----> 1 pytorch_model = BertForSequenceClassification.from_pretrained('/home/rubensvectomobile_gmail_com/tf', from_tf=True,num_labels = 8) ~/.local/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 357 try: 358 from transformers import load_tf2_checkpoint_in_pytorch_model --> 359 model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True) 360 except ImportError as e: 361 logger.error("Loading a TensorFlow model in PyTorch, requires both PyTorch and TensorFlow to be installed. 
Please see " ~/.local/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys) 199 200 if tf_inputs is None: --> 201 tf_inputs = tf.constant(DUMMY_INPUTS) 202 203 if tf_inputs is not None: NameError: name 'DUMMY_INPUTS' is not defined ``` PyTorch, Tensorflow, transformers are up to date, as well as nvidia-ml-py3 and fast.ai. I'm on GCP using Debian, Anaconda environment w/ 8 x V100. <|||||>Have same issue following the examples. Everything most recent version. I am on a MacBook Pro. Virtualenv environment. Python3. <|||||>I had the same issue. I think it was fixed with this commit - https://github.com/huggingface/transformers/pull/1509/commits/099358675899f759110ad8ccecc22c2fab9b1888. Perhaps try to re-install from source as @LysandreJik has mentioned in https://github.com/huggingface/transformers/issues/1532, or just follow the file path in the error and manually make the changes yourself. Works for me now. :) ``` 201 - tf_inputs = tf.constant(DUMMY_INPUTS) + tf_inputs = tf_model.dummy_inputs ```<|||||>As I mentioned here [issue](https://github.com/huggingface/transformers/issues/1810#issuecomment-553108898) I was able to make a workaround, as I'm using GCP: I installed `transformers` using: ``` conda install -c conda-forge transformers ``` In Python 3.7.4, then I added `DUMMY_INPUTS = [[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]` after variable `logger` in `/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py`, because it was missing. It was interesting, because in GCP, transformers were not showing in `python3` , only in `sudo python3`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,758
closed
BertModel has a "BertForTokenClassification" class; why doesn't XLNetModel have an equivalent one?
## ❓ Questions & Help <!-- A clear and concise description of the question. -->
11-07-2019 08:39:47
11-07-2019 08:39:47
Reading the code, I see that BERT has a "BertForTokenClassification" class, but there is no corresponding "XLNetForTokenClassification" for XLNet. Why? Can someone explain this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
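At the time there was simply no such head in the library. A hedged sketch of what a custom token-classification head on top of `XLNetModel` could look like; the class name, dropout value and loss handling below are my own choices, not library code.

```python
import torch
import torch.nn as nn
from transformers import XLNetModel

class XLNetForTokenClassification(nn.Module):
    """Minimal, illustrative token-classification head on top of XLNetModel."""
    def __init__(self, pretrained_name, num_labels, dropout=0.1):
        super().__init__()
        self.num_labels = num_labels
        self.transformer = XLNetModel.from_pretrained(pretrained_name)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(self.transformer.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        # first output is the sequence of hidden states, shape (batch, seq_len, d_model)
        hidden = self.transformer(input_ids, attention_mask=attention_mask)[0]
        logits = self.classifier(self.dropout(hidden))
        if labels is None:
            return logits
        loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return loss, logits
```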
transformers
1,757
closed
Is it possible to fine-tune XLNet?
## ❓ Questions & Help Hello, is there currently a script or recommended process to fine tune XLNet? Currently I am writing my own and using https://mccormickml.com/2019/09/19/XLNet-fine-tuning/ as an inspiration but overall I am still pretty new at this. I managed to get it to run but it seems to be taking forever so just thought I'd check. Thanks! Any help would be great. <!-- A clear and concise description of the question. -->
11-07-2019 07:57:32
11-07-2019 07:57:32
I have tried to fine-tune XLNet on SQuAD 1.0 and SQuAD 2.0; although the scores are a few points lower than the original results, it works fine. #1803 covers fine-tuning on SQuAD 2.0; I'm not familiar with other datasets.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
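For readers looking for a starting point, a hedged sketch of a bare-bones classification fine-tuning loop for XLNet; the toy data, padding scheme and hyper-parameters are placeholders, not a recommended recipe.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import XLNetForSequenceClassification, XLNetTokenizer, AdamW

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

# toy data; a real script would use a proper dataset and train/dev split
texts = ["a good movie", "a terrible movie", "great acting", "boring plot"]
labels = [1, 0, 1, 0]
encoded = [tokenizer.encode(t, add_special_tokens=True) for t in texts]

# XLNet is conventionally padded on the LEFT, because its sequence summary
# reads the representation of the last token
max_len = max(len(e) for e in encoded)
pad_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
input_ids = torch.tensor([[pad_id] * (max_len - len(e)) + e for e in encoded])
attention_mask = torch.tensor([[0] * (max_len - len(e)) + [1] * len(e) for e in encoded])

loader = DataLoader(TensorDataset(input_ids, attention_mask, torch.tensor(labels)),
                    batch_size=2, shuffle=True)

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for ids, mask, y in loader:
        optimizer.zero_grad()
        loss = model(ids, attention_mask=mask, labels=y)[0]   # loss is the first output
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
```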
transformers
1,756
closed
Is it okay to define and use a new model that keeps only some of the full GPT model's blocks? Has anyone tried this?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi, As far as I know, the smallest GPT2 model that comes with pretrained weight has 768 hidden size with 12 transformer blocks. However, I'm considering to define 'smaller' GPT model with only 1 or 2 transformer blocks and load those weight from corresponding blocks in the pretrained GPT weight. This is basically because the original model size is too big to fit in my gpu environment. so I'm wondering if it will work or not. I think it would be at least better than start training from randomly initialized weight, but not sure about it. So I'd like to ask how you think. Thanks!
11-07-2019 07:18:14
11-07-2019 07:18:14
### Hi,
As far as I remember from my experiments with GPT-2, the GPT-2 blocks work on their own. What I mean by that is that **the model will produce reasonable outputs even if you leave out some layers from the original model.** So your idea should work. What I'm not quite sure about is the performance of a model with ONLY 1-2 layers. I would guess the model will produce valid sentences, but the quality might not be optimal.
### Another tip:
If you want to build this model, load the original GPT-2 model from the lib and delete the layers you don't want manually. This is a relatively ugly solution but it's very time efficient. Also you don't have to define your own forward function, etc.<|||||>Did you take a look at `distilgpt2`? It has 6 layers, 768 hidden size, 12 heads, 82M parameters. cc @VictorSanh @LysandreJik <|||||>Hi! Indeed, if you're looking for a smaller GPT-2 model then DistilGPT-2 has half of the small GPT-2 layers. However, if you want to create your own GPT-2 model with an arbitrary number of layers, you can do so very easily with the configuration file:

```py
from transformers import GPT2Model, GPT2Config

config = GPT2Config(n_layer=2)
model = GPT2Model(config)
```

This will define a model with only two layers. Please be aware that the weights will be randomly initialized if you go down this route, whereas the weights will already be trained if you use DistilGPT-2.<|||||>Thank you guys! Really helped me a lot. I changed the GPT2Config as @LysandreJik told me and loaded the pretrained weights into those layers :)
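A hedged sketch of the weight-copying step mentioned in the last comment: build a 2-layer configuration and copy the embeddings plus the first two blocks from the pretrained 12-layer model. Which blocks to keep is an assumption of mine; copying the first N is only one possible choice.

```python
from transformers import GPT2Model, GPT2Config

full = GPT2Model.from_pretrained("gpt2")      # pretrained 12-layer model
small = GPT2Model(GPT2Config(n_layer=2))      # randomly initialised, 2 layers

# copy token/position embeddings and the final layer norm
small.wte.load_state_dict(full.wte.state_dict())
small.wpe.load_state_dict(full.wpe.state_dict())
small.ln_f.load_state_dict(full.ln_f.state_dict())

# copy the first two transformer blocks (an arbitrary choice; picking
# evenly spaced blocks is another common option)
for i in range(2):
    small.h[i].load_state_dict(full.h[i].state_dict())
```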
transformers
1,755
closed
How to add weighted CrossEntropy loss in sequence classification task?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am doing a sequence classification task base on run_ner.py. My dataset is an imbalenced dataset and apperantly 'O' label appears far more than other labels. I'm wondering if it is helpful to use weighted CrossEntropy loss. If the answer is 'yes', how to add weight to the loss function? Currently I hard code the weight in BertForSequenceClassification class (in modeling_bert.py). I feel it is not a smart way to do it. Do you have some advice on how to add the weight infomation? Many thanks.
11-07-2019 06:50:18
11-07-2019 06:50:18
Hi, no specific advice other than computing the loss yourself outside of the model's forward method.
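A hedged sketch of what computing the loss outside the forward method can look like for this use case: ask the model for logits only and apply a class-weighted `CrossEntropyLoss` that down-weights the majority 'O' label. The model name, label count, weight values and the -100 padding convention are placeholders.

```python
import torch
import torch.nn as nn
from transformers import BertForTokenClassification

num_labels = 5                       # e.g. O, B-School_name, I-School_name, ...
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=num_labels)

# down-weight the majority 'O' class (assumed here to be label id 0); values are illustrative
class_weights = torch.tensor([0.1, 1.0, 1.0, 1.0, 1.0])
loss_fct = nn.CrossEntropyLoss(weight=class_weights, ignore_index=-100)

def weighted_loss(input_ids, attention_mask, labels):
    # call the model WITHOUT labels so that its first output is the logits
    logits = model(input_ids, attention_mask=attention_mask)[0]   # (batch, seq, num_labels)
    return loss_fct(logits.view(-1, num_labels), labels.view(-1))
```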
transformers
1,754
closed
Out of Memory Error (OOM) only during evaluation phase of run_lm_finetuning.py and run_glue.py
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): roberta-large Language I am using the model on (English, Chinese....): English The problem arise when using: * the official example scripts: run_lm_finetuning.py and run_glue.py The tasks I am working on is: * my own task or dataset: my own dataset which I have put into the correct format for `run_lm_finetuning` and the `SST-2` task for `run_glue.py` ## To Reproduce When I use the official `run_lm_finetuning.py` and `run_glue.py` scripts with `roberta-large` on my own data, I run into out of memory (`OOM`) issues during `evaluation`. `Training completes successfully` with a `batch_size` of `1` and a `max_senquence_length` of `512`. However, when `evaluating` the 4th or 5th checkpoint, I get an `OOM` error. I tested using `distributed` training with `FP16` on `2 Tesla P100-SXM2 16 GB` and `2 Tesla P100-PCIE 12 GB` with the same results. I'm currently testing on `2 Tesla M40 24GB`; however, I don't always have access to these GPUs; so, this is untenable long term. ## Expected behavior Since it finishes training and evaluation of the first few checkpoints with the aforementioned setting, I would expect it to be able to evaluate every checkpoint without error. However, it seems that there is a bug in the evaluation code. I assume it is not properly clearing the memory after testing each checkpoint. ## Environment * OS: Ubuntu 18.04 * Python version: 3.6.8 * PyTorch version: 1.3 * PyTorch Transformers version (or branch): 2.1.1 * Using GPU Yes, Tesla M40 24GB, Tesla P100-SXM2 16 GB, Tesla P100-PCIE 12 GB * Distributed of parallel setup 2x of each separate GPU
11-06-2019 22:30:20
11-06-2019 22:30:20
Have you tried the following lines after the call to the `evaluate` function (in the loop over checkpoints)? ```python del model torch.cuda.empty_cache() ``` Let me know if this works.<|||||>@rlouf, I changed ``` # Evaluation results = {} if args.do_eval and args.local_rank in [-1, 0]: checkpoints = [args.output_dir] if args.eval_all_checkpoints: checkpoints = list(os.path.dirname(c) for c in sorted(glob.glob(args.output_dir + '/**/' + WEIGHTS_NAME, recursive=True))) logging.getLogger("transformers.modeling_utils").setLevel(logging.WARN) # Reduce logging logger.info("Evaluate the following checkpoints: %s", checkpoints) for checkpoint in checkpoints: global_step = checkpoint.split('-')[-1] if len(checkpoints) > 1 else "" prefix = checkpoint.split('/')[-1] if checkpoint.find('checkpoint') != -1 else "" model = model_class.from_pretrained(checkpoint) model.to(args.device) result = evaluate(args, model, tokenizer, prefix=prefix) result = dict((k + '_{}'.format(global_step), v) for k, v in result.items()) results.update(result) return results ``` to ``` # Evaluation results = {} if args.do_eval and args.local_rank in [-1, 0]: checkpoints = [args.output_dir] if args.eval_all_checkpoints: checkpoints = list(os.path.dirname(c) for c in sorted(glob.glob(args.output_dir + '/**/' + WEIGHTS_NAME, recursive=True))) logging.getLogger("transformers.modeling_utils").setLevel(logging.WARN) # Reduce logging logger.info("Evaluate the following checkpoints: %s", checkpoints) for checkpoint in checkpoints: global_step = checkpoint.split('-')[-1] if len(checkpoints) > 1 else "" prefix = checkpoint.split('/')[-1] if checkpoint.find('checkpoint') != -1 else "" model = model_class.from_pretrained(checkpoint) model.to(args.device) result = evaluate(args, model, tokenizer, prefix=prefix) del model torch.cuda.empty_cache() result = dict((k + '_{}'.format(global_step), v) for k, v in result.items()) results.update(result) return results ``` which didn't help. So, I looked through https://github.com/huggingface/transformers/issues/1742 and added ``` del model gc.collect() torch.cuda.empty_cache() ``` This didn't work either; so, I change the scheduler to the following while leaving ``` del model gc.collect() torch.cuda.empty_cache() ``` ``` def warmup_linear_schedule(optimizer, warmup_steps, t_total, last_epoch=-1): def lr_lambda(step): if step < warmup_steps: return float(step) / float(max(1, warmup_steps)) return max(0.0, float(t_total - step) / float(max(1.0, t_total - warmup_steps))) return LambdaLR(optimizer, lr_lambda, last_epoch=-1) ``` This hasn't worked either. Any ideas? It doesn't seem like I can `del model`<|||||>Hi! Changing the scheduler should not do anything since you are in evaluation mode. This is likely not due to the model either, since `del` + `gc.collect()` should remove it from memory. Could you look at by how much the memory increases after each step?<|||||>So, when using `roberta-large` the memory increases by ~2.25 GB after each checkpoint starting at ~7GB.<|||||>Can you have a look at which tensors are still being tracked, using [this code](https://github.com/huggingface/transformers/issues/1742#issuecomment-550327172) after your del statement?<|||||>My apologies for the delay in my answer. I had hoped that https://github.com/huggingface/transformers/commit/9629e2c676efab31d1f27a3f27d811e91fd05a92 would solve my issue; however, evaluation doesn't appear to be distributed. So, it is still using one card and is still ~2.25 GB after each checkpoint. 
Should I open up a separate issue for the distributed evaluation issue? After adding ``` def print_gpu_obj(): import gc GPU_count = 0 Pinned_count = 0 for tracked_object in gc.get_objects(): if torch.is_tensor(tracked_object): if tracked_object.is_cuda: GPU_count+=1 if tracked_object.is_pinned(): Pinned_count+=1 print("There are {} cuda objects".format(GPU_count)) print("There are {} pinned objects".format(Pinned_count)) ``` to the code, I get before `del model` checkpoint 0 ``` There are 1087 cuda objects There are 0 pinned objects ``` checkpoint 1 ``` There are 1673 cuda objects There are 0 pinned objects ``` checkpoint 2 ``` There are 2259 cuda objects There are 0 pinned objects ``` checkpoint 3 ``` There are 2845 cuda objects There are 0 pinned objects ``` after `del model` and before `gc.collect` checkpoint 0 ``` There are 984 cuda objects There are 0 pinned objects ``` checkpoint 1 ``` There are 1570 cuda objects There are 0 pinned objects ``` checkpoint 2 ``` There are 2156 cuda objects There are 0 pinned objects ``` checkpoint 3 ``` There are 2742 cuda objects There are 0 pinned objects ``` after `gc.collect` and before `torch.cuda.empty_cache()` and after `torch.cuda.empty_cache()` are both the same as after `del model` and before `gc.collect`. It appears that 586 objects are added at each checkpoint, which cannot be deleted and unallocated. Any suggestions?<|||||>This looks like some data is kept in memory after evaluation. Any way you could try to track this down?<|||||>My apologies for the delay in getting back to this, as well as my ignorance on this subject, but how would I track this down. As in what commands or tools should I use? Again sorry for the late reply, I got stuck on another project for a bit.<|||||>I have a similar issue. During training with run_lm_finetuning.py, the memory usage of a single GPU is 30G/32G. And after the training stage, I mean at the beginning of eval, the memory doesn't drop down and the evaluation stage is always getting OOM. If I add torch.cuda.empty_cache() before evaluation, the memory drops down to 22G, which means there are still 22GB tensors exist but I don't know where they are and how to free them. If I do the evaluation with run_lm_finetuning.py without training, everything is ok and the memory usage is about 8G.<|||||>> I have a similar issue. During training with run_lm_finetuning.py, the memory usage of a single GPU is 30G/32G. And after the training stage, I mean at the beginning of eval, the memory doesn't drop down and the evaluation stage is always getting OOM. If I add torch.cuda.empty_cache() before evaluation, the memory drops down to 22G, which means there are still 22GB tensors exist but I don't know where they are and how to free them. If I do the evaluation with run_lm_finetuning.py without training, everything is ok and the memory usage is about 8G. PS: If I set --evaluate_during_training, there is no OOM issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,753
closed
Added Mish Activation Function
Mish is a new activation function proposed here - https://arxiv.org/abs/1908.08681. It has seen some recent success and has been adopted in spaCy, Thinc, TensorFlow Addons and FastAI-dev. All benchmarks recorded so far (including against ReLU, Swish and GELU) are available in the repository - https://github.com/digantamisra98/Mish. It might be a good addition to experiment with, especially in the BERT model.
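For reference, a small sketch of the function itself, mish(x) = x * tanh(softplus(x)), in plain PyTorch; this is only an illustration of the formula, not the implementation added by the PR.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mish(x):
    # mish(x) = x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))

class Mish(nn.Module):
    def forward(self, x):
        return mish(x)

# quick sanity check on a few values
print(mish(torch.tensor([-2.0, 0.0, 2.0])))
```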
11-06-2019 22:16:25
11-06-2019 22:16:25
Ok, why not.
transformers
1,752
closed
Subtokens in BPE in GPT2
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi, I have a question. For a word represented by more than 1 subtoken in BPE, I wonder what's the principle to divide it into subtokens? I use GPT2 small version for example. "bookshelf" is divided into "books","he" and "if".(index for encoder is [3835, 258, 1652]). But actually "book" and "shelf" are represented by single subtoken.(book is 1492, shelf is 18316). Why is "bookshelf" divided into 3 parts instead of "book" and "shelf"? Thank you!
11-06-2019 21:20:03
11-06-2019 21:20:03
BPE is a data compression algorithm, the result depends on the statistical properties of the corpus of text on which it has been trained. You can start with this article https://leimao.github.io/blog/Byte-Pair-Encoding/ to get more of an intuition of how it works.<|||||>> BPE is a data compression algorithm, the result depends on the statistical properties of the corpus of text on which it has been trained. > > You can start with this article https://leimao.github.io/blog/Byte-Pair-Encoding/ to get more of an intuition of how it works. Thank you! That helps!
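A small hedged snippet for inspecting how the trained GPT-2 merge table actually splits a given word; the exact sub-token boundaries depend on the learned merges (and on whether the word carries a leading space), so outputs may differ slightly from the ones quoted in the question.

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

for word in ["book", "shelf", "bookshelf", " bookshelf"]:
    tokens = tokenizer.tokenize(word)
    ids = tokenizer.convert_tokens_to_ids(tokens)
    print(repr(word), "->", tokens, ids)
```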
transformers
1,751
closed
input token embedding issues
1. When fine-tuning XLNet, are the word embeddings kept constant (taken from the pre-trained model), or are they initialized randomly and learned contextually during fine-tuning itself? 2. How would I set my own word embeddings for fine-tuning (XLNetForSequenceClassification)? I have my own token embeddings that I want to feed into XLNetForSequenceClassification; is it possible to keep these embeddings constant during fine-tuning? The get_input_embeddings and set_input_embeddings functions do not seem to work with the XLNetForSequenceClassification class. Thanks!
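For what it's worth: the embeddings are loaded from the pretrained checkpoint and keep being updated during fine-tuning unless you freeze them. A hedged sketch of overriding and freezing them by reaching into the base model directly; the `word_embedding` attribute name follows the library source for `XLNetModel`, but double-check it in your version, and the replacement vectors below are a random placeholder.

```python
import torch
from transformers import XLNetForSequenceClassification

model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

# the token embedding table lives on the base transformer
emb = model.transformer.word_embedding           # nn.Embedding(vocab_size, d_model)

# (optionally) overwrite it with your own vectors of the same shape
my_vectors = torch.randn_like(emb.weight)        # placeholder for your embeddings
with torch.no_grad():
    emb.weight.copy_(my_vectors)

# keep the embeddings fixed during fine-tuning
emb.weight.requires_grad = False
```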
11-06-2019 18:57:53
11-06-2019 18:57:53
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,750
closed
How to calculate memory requirements of different GPT models?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> What's the best way to calculate how much GPU ram is needed to finetune a given GPT size? The GPT-xl model is 6.5GB worth of parameters but clearly this isn't the full story when it comes to fine-tuning. I've tried running a batch size of 1 on a 11GB 1080ti but that ran out of memory. My next step is to test it on a 2080ti with FP16 training enabled to further reduce memory requirements. It would be great if there was some back of the envelope calculation to see in advance if this would work or if instead I should go straight to 16GB Titan V. Or maybe even that's too small!
11-06-2019 18:07:58
11-06-2019 18:07:58
Current commercial GPU VRAM sizes are not large enough to process the XL model. You'll probably need around 32-64 GB of RAM in order to fine-tune successfully. So for now, you'll have to train on a CPU with a lot of memory attached.<|||||>How did you come to that number, though? In Thomas Wolf's Medium post on training large models he mentioned gradient checkpointing for models that couldn't fit a batch onto a single GPU. Is it possible to implement something like that for GPT-2 and friends?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
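A hedged back-of-the-envelope estimate for Adam-style fp32 fine-tuning: roughly 4 bytes per parameter each for weights, gradients and the two optimizer moments, plus activations that grow with batch size, sequence length, depth and width. The activation factor below is a rough assumption, not a measured constant, and gradient checkpointing or fp16 would lower the total.

```python
def finetune_memory_gb(n_params, bytes_per_param=4, optimizer_states=2,
                       batch=1, seq_len=1024, n_layer=48, d_model=1600,
                       activation_factor=12):
    # weights + gradients + Adam moments (m and v)
    static = n_params * bytes_per_param * (2 + optimizer_states)
    # very rough activation estimate; the true factor depends on the implementation
    activations = activation_factor * batch * seq_len * n_layer * d_model * bytes_per_param
    return (static + activations) / 1e9

# GPT-2 XL has roughly 1.5B parameters, 48 layers, d_model 1600
print(f"~{finetune_memory_gb(1.5e9):.0f} GB (fp32, Adam, batch 1, seq 1024)")
```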
transformers
1,749
closed
gpt2 generation crashes when using `past` for some output lengths
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): gpt2 Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details) I am using `run_generation.py` to generate text from gpt2, but with the code slightly changed to make use of `past` to cache hidden states. It crashes in some cases. ## To Reproduce Steps to reproduce the behavior: Apply the three line change to the `run_generation.py` script: ``` $ git checkout f88c104d8 . # current head of master $ git diff diff --git a/examples/run_generation.py b/examples/run_generation.py index 2d91766..bfbf68a 100644 --- a/examples/run_generation.py +++ b/examples/run_generation.py @@ -113,9 +113,10 @@ def sample_sequence(model, length, context, num_samples=1, temperature=1, top_k= context = context.unsqueeze(0).repeat(num_samples, 1) generated = context with torch.no_grad(): + past = None for _ in trange(length): - inputs = {'input_ids': generated} + inputs = {'input_ids': generated, 'past': past} if is_xlnet: # XLNet is a direct (predict same token, not next token) and bi-directional model by default # => need one additional dummy token in the input (will be masked), attention mask and target mapping (see model docstring) @@ -136,6 +137,7 @@ def sample_sequence(model, length, context, num_samples=1, temperature=1, top_k= inputs["langs"] = torch.tensor([xlm_lang] * inputs["input_ids"].shape[1], device=device).view(1, -1) outputs = model(**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states) + past = outputs[1] next_token_logits = outputs[0][:, -1, :] / (temperature if temperature > 0 else 1.) # repetition penalty from CTRL (https://arxiv.org/abs/1909.05858) ``` ``` $ python examples/run_generation.py --prompt "Who was Jim Henson ? Jim Henson was a" --model_type gpt2 --model_name_or_path gpt2 --length 40 ... 11/06/2019 11:38:36 - INFO - __main__ - Namespace(device=device(type='cpu'), length=40, model_name_or_path='gpt2', model_type='gpt2', n_gpu=0, no_cuda=False, num_samples=1, padding_text='', prompt='Who was Jim Henson ? 
Jim Henson was a', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, top_k=0, top_p=0.9, xlm_lang='') 88%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 35/40 [00:04<00:00, 8.68it/s] Traceback (most recent call last): File "examples/run_generation.py", line 262, in <module> main() File "examples/run_generation.py", line 247, in main device=args.device, File "examples/run_generation.py", line 139, in sample_sequence outputs = model(**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states) File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/Users/joelb/views/transformers/transformers/modeling_gpt2.py", line 546, in forward inputs_embeds=inputs_embeds) File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/Users/joelb/views/transformers/transformers/modeling_gpt2.py", line 438, in forward position_embeds = self.wpe(position_ids) File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/functional.py", line 1484, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range: Tried to access index 1024 out of table with 1023 rows. at /Users/distiller/project/conda/conda-bld/pytorch_1570710797334/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418 ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> The crash does not occur with the default `--length 20`. I'm trying to make this work for faster generation speed. So I tested the change with a smaller length that does not crash, but I observe the speed with caching is slightly slower than without. ## Environment ``` Platform Darwin-18.7.0-x86_64-i386-64bit Python 3.7.4 (default, Aug 13 2019, 15:17:50) [Clang 4.0.1 (tags/RELEASE_401/final)] PyTorch 1.3.0 Tensorflow 2.0.0 ``` ## Additional context <!-- Add any other context about the problem here. -->
11-06-2019 16:43:41
11-06-2019 16:43:41
@joelb-git Can you check the lowest ```length``` at which this occurs? Also, is this an issue that only occurs when ```past``` is used? Can you check if same thing happens when running ```python examples/run_generation.py --prompt "Who was Jim Henson ? Jim Henson was a" --model_type gpt2 --model_name_or_path gpt2 --length 40``` on the unchanged ```run_generation.py``` file<|||||>@enzoampil > @joelb-git Can you check the lowest length at which this occurs? `--length 36` is the smallest length that produces the failure. > Also, is this an issue that only occurs when past is used? Yes. Also, the results when using `past` when it does not crash are different from without using `past`. Without `past`: ``` $ python examples/run_generation.py --prompt "Who was Jim Henson ? Jim Henson was a" --model_type gpt2 --model_name_or_path gpt2 --length 20 man of God. Jim Henson was a man who had built his family into this nation. His ``` With `past`: ``` $ python examples/run_generation.py --prompt "Who was Jim Henson ? Jim Henson was a" --model_type gpt2 --model_name_or_path gpt2 --length 20 man of almostWho was Jim Henson? Jim Henson was a man of almostWho was Jim ``` > Can you check if same thing happens when running > > python examples/run_generation.py --prompt "Who was Jim Henson ? Jim Henson was a" --model_type gpt2 --model_name_or_path gpt2 --length 40 > > on the unchanged run_generation.py file On the unchanged file, this does not crash. I also note a documentation issue: https://huggingface.co/transformers/model_doc/gpt2.html ``` Outputs: past: list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length) ... ``` The shape that I observe has an extra prefixed dimension: ``` (2, batch_size, num_heads, sequence_length, sequence_length) ``` <|||||>I have the exact same issue (although I am using distilgpt2). After some experimenting I found out that the generation (starting from BOS, i.e. without a prompt) always crashes when the generated sequence goes from length 44 to 45. At this point, I suspect that the hidden states in past are concatenated. For the sum of all numbers from 1 to 44 is 990. So if you add 45 again, you exceed the maximum number of positions in GPT2 (1024). This would also explain why your results with and without past are different @joelb-git . Also it seems that the prompt ("Who was Jim Henson? ...") is repeated in the output when past is used. That might also be an artifact of concatenation?<|||||>I also get a similar issue. I suspect the error has something to do with the past attention weights being added to the present - since the error is a matrix multiplication shape mismatch. The issue may lay in the modelling_gpt2.py starting on line 181. ``` x = self.c_attn(x) query, key, value = x.split(self.split_size, dim=2) query = self.split_heads(query) key = self.split_heads(key, k=True) value = self.split_heads(value) if layer_past is not None: past_key, past_value = layer_past[0].transpose(-2, -1), layer_past[1] # transpose back cf below key = torch.cat((past_key, key), dim=-1) value = torch.cat((past_value, value), dim=-2) present = torch.stack((key.transpose(-2, -1), value)) **attn_outputs = self._attn(query, key, value, attention_mask, head_mask)** ``` Where the error occurs in the method I have wrapped with **. 
Going into the method, the error occurs in the line alsop marked with **: ``` def _attn(self, q, k, v, attention_mask=None, head_mask=None): w = torch.matmul(q, k) if self.scale: w = w / math.sqrt(v.size(-1)) nd, ns = w.size(-2), w.size(-1) b = self.bias[:, :, ns-nd:ns, :ns] **w = w * b - 1e4 * (1 - b)** ``` The error occurs because this code tried to multiply **'w'** of shape ([1, 12, 1024, 2048]) by **'b'** of shape ([1, 1, 0, 1024]). However, I am unsure what are the appropiate shapes thus cannot provide the solution. Hope this helps. <|||||>@adigoryl This is definitely related. One can run this very simple script to reproduce the error: ``` import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') model = GPT2LMHeadModel.from_pretrained('distilgpt2') device = 'cuda:0' model.to(device) input_ids = torch.tensor(tokenizer.encode( "<|endoftext|>")).unsqueeze(0).to(device) inputs = {'input_ids': input_ids} with torch.no_grad(): past = None for i in range(45): print(i) logits, past = model(**inputs, past=past) logits = logits[0, -1] next_token = logits.argmax() input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1) inputs = {'input_ids': input_ids} print(tokenizer.decode(input_ids[0].tolist())) ``` It works fine for `range(44)` but at `range(45)` it crashes with the error @adigoryl describes: ``` Traceback (most recent call last): File "test.py", line 17, in <module> logits, past = model(**inputs, past=past) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 566, in forward head_mask=head_mask) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 470, in forward head_mask=head_mask[i]) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 250, in forward head_mask=head_mask) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 211, in forward attn_outputs = self._attn(query, key, value, attention_mask, head_mask) File "/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 154, in _attn w = w * b - 1e4 * (1 - b) RuntimeError: The size of tensor a (1035) must match the size of tensor b (1024) at non-singleton dimension 3 ``` `w` having length 1035 (which is the sum of all numbers from 1 to 45) also made me think that there is some concatenation going on that shouldn't be happening.<|||||>As I suspect a wrong idea about the interface and thus some concatenation going on that shouldn't be happening, I modified my example script in the following way: ``` import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') model = GPT2LMHeadModel.from_pretrained('distilgpt2') device = 'cuda:0' model.to(device) input_ids = 
torch.tensor(tokenizer.encode( "<|endoftext|>")).unsqueeze(0).to(device) inputs = {'input_ids': input_ids} with torch.no_grad(): past = None for i in range(45): print(i) logits, past = model(**inputs, past=past) logits = logits[0, -1] next_token = logits.argmax().view(1, 1) input_ids = torch.cat([input_ids, next_token], dim=1) inputs = {'input_ids': next_token} print(tokenizer.decode(input_ids[0].tolist())) ``` So only `next_token` is given as an input to the gpt2 model. This permits me to generate sequences of arbitrary length while using the `past` parameter. The only thing I would still really like to know is where this "bug" came from in the first place: do we as users have a wrong idea about the interface and should we use it in a way where `past` and `input_ids` do not overlap? Or were we right about the interface from the start and there really *is* a bug? Surprisingly, both versions give me natural results when presented with the prompt `"<|endoftext|>Who was Jim Henson? Jim Henson was a"`: Without modification: `Who was Jim Henson? Jim Henson was a great father and a great father.` With my modification: `Who was Jim Henson? Jim Henson was a great actor, and he was a great actor. He was a great actor. He was a great` The same when using `gpt2` instead of `distilgpt2`: Without modification: `Who was Jim Henson? Jim Henson was a writer and director of the film "The Last Man on Earth" and the sequel to the film "` With modification: `Who was Jim Henson? Jim Henson was a writer, director, producer, and producer of the hit television series "The Henson Report." He` So how are we supposed to use it?<|||||>Hi, when the past is computed for certain tokens these tokens should not be passed as input ids. @mnschmit your last example displays correct usage of the `past` argument. I've clarified the documentation in d409aca, please let me know if it should be clarified further. Thank you for looking into it. <|||||>@LysandreJik great, thank you! With the extended documentation, it would be clear to me now but, additionally, a usage example might be useful for new users. Feel free to put my short example (or parts of it) at some appropriate place if you like!<|||||>You're right, having a snippet showing usage would definitely improve user experience on this point. I'll update the documentation later today, thank you!<|||||>I've added a section in the [`quickstart` section of the documentation](https://huggingface.co/transformers/quickstart.html#using-the-past). Feel free to re-open if it's still not clear enough.<|||||>That's a great new part of the documentation. Thank you!<|||||>Yes, indeed, thanks for that doc change! And thanks for all who helped diagnose. I applied the suggestions to my local version of `run_generation.py`. 
It no longer crashes for large generation lengths, and generation is indeed now faster (about 1.7 times faster in my tests).<|||||>Here is the relevant portion of the modified `run_generation.py` script, changed for gpt2 only: ``` def sample_sequence(model, length, context, num_samples=1, temperature=1, top_k=0, top_p=0.0, repetition_penalty=1.0, device='cpu'): context = torch.tensor(context, dtype=torch.long, device=device) context = context.unsqueeze(0).repeat(num_samples, 1) generated = context.clone().detach() past = None with torch.no_grad(): for _ in range(length): output, past = model(context, past=past) next_token_logits = output[:, -1, :] filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p) next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1) generated = torch.cat((generated, next_token), dim=1) # when using `past`, the context for the next call should be only # the previous token context = next_token return generated ```<|||||>I encouter similar issue, and find the problem is cased by input_ids and length, beacuse wrongly use of special token [CLS] when tokenize input_ids <|||||>I am trying a BERT-GPT2(not BART) architecture in which I am passing BERT's hidden states of 12 layers to past. After reading doc's I find that past should be of different shape as compared to shape of hidden states from BERT. Since I need BERT's encoding layer for encoding and GPT2 decoding function, tell me how should I manage it.
transformers
1,748
closed
Released OpenAI GPT-2 1.5B model
## 🚀 Feature
- [ ] Blog: https://openai.com/blog/gpt-2-1-5b-release/
- [ ] Code: https://github.com/openai/gpt-2
- [ ] Dataset: https://github.com/openai/gpt-2-output-dataset
## Motivation
A bigger model for text generation that produces more human-like text than the 774M-parameter OpenAI GPT-2.
## Additional context
Maybe this is not the most requested feature by AI researchers. Maybe it's better to implement and standardize **new** NLG models!
11-06-2019 16:31:05
11-06-2019 16:31:05
Hi, GPT-2 XL was added yesterday with commit d7d36181fdefdabadc53adf51bed4a2680f5880a<|||||>Yes, sorry for opening a new issue! I didn't see this commit. I'll close the issue! Thank you for your time!
transformers
1,747
closed
How to use gpt-2-xl with run_generation.py
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am currently running the run_generation.py and putting the model as gpt-2, however I am not satisfied with the results frm the small model. How can I insert the big model instead?
11-06-2019 16:11:15
11-06-2019 16:11:15
You're probably using the command: ``` python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2 ``` to use GPT2-XL for generation you would change the last argument to `gpt2-xl`: ``` python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2-xl ```<|||||>ah. its because the latest version isnt available on pypi yet. and i pip installed it. 2.1.1 is the version I've got installed. Is there any way to force pip to install the latest version that was commited 21 hours ago.<|||||>Ah yes we're yet to release a pypi version with GPT2-XL. To install from source you would have to do: ``` pip install git+https://github.com/huggingface/transformers ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||> pip install transformers==3.4.0 this version of transforrmers solved my problem: OSError: Model name 'ckiplab/gpt2-base-chinese' was not found in model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'ckiplab/gpt2-base-chinese' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
transformers
1,746
closed
Fixing models inputs_embeds
11-06-2019 15:57:37
11-06-2019 15:57:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=h1) Report > Merging [#1746](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f88c104d8f79e78a98c8ce6c1f4a78db73142eab?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `91.3%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1746/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1746 +/- ## ========================================== + Coverage 84.03% 84.03% +<.01% ========================================== Files 94 94 Lines 14021 14032 +11 ========================================== + Hits 11782 11792 +10 - Misses 2239 2240 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tests/conftest.py](https://codecov.io/gh/huggingface/transformers/pull/1746/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL2NvbmZ0ZXN0LnB5) | `93.33% <100%> (+3.33%)` | :arrow_up: | | [transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1746/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.84% <100%> (+0.01%)` | :arrow_up: | | [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1746/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.77% <75%> (ø)` | :arrow_up: | | [transformers/tests/modeling\_bert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1746/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2JlcnRfdGVzdC5weQ==) | `96.5% <91.66%> (-0.43%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=footer). Last update [f88c104...fd29a30](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
1,745
closed
How to stop wordpiece-tokenizing in BertTokenizer?
The default BertTokenizer may split one word into two or three pieces, which is a problem for token labeling tasks because it makes the tokenized sentence length larger than the input length. It is difficult to fuse these pieces back into one representation. So, how can this behavior be stopped in BertTokenizer? Or are there alternative tokenizers that do not split words?
11-06-2019 14:07:05
11-06-2019 14:07:05
No, that's how the model works. Short of creating your own model (with your own custom tokenizer) there's no way to do what you want to do.<|||||>Okay, it seems unreasonable to me ....
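There is no switch to disable WordPiece splitting, but a common workaround for token labeling is to track which sub-token starts each word, so that labels and predictions can be mapped back to whole words. Below is a minimal sketch of that idea (not from the original thread), assuming `bert-base-uncased`; the words, labels and the `X` padding label are only placeholders:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

words = ["huggingface", "makes", "tokenizers"]   # illustrative input words
labels = ["B-ORG", "O", "O"]                     # one label per original word

tokens, token_labels, word_starts = [], [], []
for word, label in zip(words, labels):
    pieces = tokenizer.tokenize(word)            # e.g. ['hugging', '##face']
    word_starts.append(len(tokens))              # index of the word's first sub-token
    tokens.extend(pieces)
    token_labels.extend([label] + ["X"] * (len(pieces) - 1))  # pad labels for extra pieces

input_ids = tokenizer.convert_tokens_to_ids(tokens)
# at prediction time, read the model output only at the `word_starts` positions
# to recover exactly one label per original word
```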
transformers
1,744
closed
F1 score is zero while loss is about 0.12xx when using run_ner.py to fine-tune the BERT model
## ❓ Questions & Help I want to use run_ner.py for sequence labeling, so I created a few label classes (e.g., B-School_name) and annotated a small amount of data (about 100 annotations for a test) from my dataset. Finally, I used run_ner.py to do the fine-tuning, but the f1 score and precision are both zero (while the loss is not). Have I got something wrong?
11-06-2019 13:57:22
11-06-2019 13:57:22
Hi @Sunnycheey I had the same issue when using my own data and labels. However, I fixed this once I ensured that the training data (train.txt) was in the correct format: word token, then a space, then the label, with a newline after every sentence (full stop); a cleaned-up sketch of this layout is shown at the end of this thread. While O visiting O Otonga B-Sc primary I-Sc he O noticed O an O anomaly O . O The O next O sentence O etc... <|||||>Thanks for your reply. Does the full stop matter? I use spaCy to get sentences from the document, so there may not be a full stop at the end of every sentence... BTW, I have used the training dataset to evaluate the trained model, and the f1 score is 1 (not 0)...<|||||>I got it to work with the full stop and newline as above. I use stanfordnlp to prepare the data, but it is probably fine to use spaCy. Make sure you have a separate folder for the output too.<|||||>@jensam It turns out that if all the predicted results are O (in the BIO annotation scheme), the f1 score tends to be 0. Thanks anyway.
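To make the layout described at the top of this thread concrete, here is a small sketch of what `train.txt` is expected to look like, reusing the placeholder words from above: one token and its label per line, with a blank line after each sentence:
```
While O
visiting O
Otonga B-Sc
primary I-Sc
he O
noticed O
an O
anomaly O
. O

The O
next O
sentence O
```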
transformers
1,743
closed
Unlimited sequence length in Bert for QA
11-06-2019 10:00:11
11-06-2019 10:00:11
transformers
1,742
closed
Out of Memory (OOM) when repeatedly running large models
## ❓ Any advice for freeing up GPU memory after training a large model (e.g., roberta-large)? ### System Info ``` Platform Linux-4.4.0-1096-aws-x86_64-with-debian-stretch-sid Python 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 17:14:51) [GCC 7.2.0] PyTorch 1.3.0 AWS EC2 p3.2xlarge (single GPU) ``` My current objective is to run `roberta-large` multiple training jobs sequentially from the same python script (i.e., for a simple HPO search). Even after deleting all of the objects and clearing the cuda cache after the first training job ends I am still stuck with 41% GPU memory usage (as compared to 15% before starting the training process). I am attaching a reproducible example to get the error: ### Here is the relevant code ```python show_gpu('Initial GPU memory usage:') for i in range(2): model, optimizer, scheduler = get_training_obj(params) show_gpu(f'{i}: GPU memory usage after loading training objects:') for epoch in range(1): epoch_start = time.time() model.train() for batch in dp.train_dataloader: xb,mb,_,yb = tuple(t.to(params['device']) for t in batch) outputs = model(input_ids = xb, attention_mask = mb, labels = yb) loss = outputs[0] loss.backward() optimizer.step() scheduler.step() optimizer.zero_grad() show_gpu(f'{i}: GPU memory usage after training model:') del model, optimizer, scheduler, loss, outputs torch.cuda.empty_cache() torch.cuda.synchronize() show_gpu(f'{i}: GPU memory usage after clearing cache:') ``` ### Here is the output and full traceback ``` Initial GPU memory usage: 0.0% (0 out of 16130) 0: GPU memory usage after loading training objects: 14.7% (2377 out of 16130) 0: GPU memory usage after training model: 70.8% (11415 out of 16130) 0: GPU memory usage after clearing cache: 41.8% (6741 out of 16130) 1: GPU memory usage after loading training objects: 50.2% (8093 out of 16130) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-7-20a3bdec1bf4> in <module>() 8 for batch in dp.train_dataloader: 9 xb,mb,_,yb = tuple(t.to(params['device']) for t in batch) ---> 10 outputs = model(input_ids = xb, attention_mask = mb, labels = yb) 11 loss = outputs[0] 12 loss.backward() ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, labels) 326 token_type_ids=token_type_ids, 327 position_ids=position_ids, --> 328 head_mask=head_mask) 329 sequence_output = outputs[0] 330 logits = self.classifier(sequence_output) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask) 179 token_type_ids=token_type_ids, 180 position_ids=position_ids, --> 181 head_mask=head_mask) 182 183 ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, 
position_ids, head_mask) 625 encoder_outputs = self.encoder(embedding_output, 626 extended_attention_mask, --> 627 head_mask=head_mask) 628 sequence_output = encoder_outputs[0] 629 pooled_output = self.pooler(sequence_output) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask) 346 all_hidden_states = all_hidden_states + (hidden_states,) 347 --> 348 layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i]) 349 hidden_states = layer_outputs[0] 350 ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask) 326 attention_outputs = self.attention(hidden_states, attention_mask, head_mask) 327 attention_output = attention_outputs[0] --> 328 intermediate_output = self.intermediate(attention_output) 329 layer_output = self.output(intermediate_output, attention_output) 330 outputs = (layer_output,) + attention_outputs[1:] # add attentions if we output them ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states) 298 def forward(self, hidden_states): 299 hidden_states = self.dense(hidden_states) --> 300 hidden_states = self.intermediate_act_fn(hidden_states) 301 return hidden_states 302 ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in gelu(x) 126 Also see https://arxiv.org/abs/1606.08415 127 """ --> 128 return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))) 129 130 def gelu_new(x): RuntimeError: CUDA out of memory. 
Tried to allocate 26.00 MiB (GPU 0; 15.75 GiB total capacity; 14.24 GiB already allocated; 8.88 MiB free; 476.01 MiB cached) ``` ### Appendix / complete code to reproduce ```python import platform; print("Platform", platform.platform()) import sys; print("Python", sys.version) import torch; print("PyTorch", torch.__version__) from __future__ import absolute_import, division, print_function import glob import logging import os import time import json import random import numpy as np import pandas as pd from random import sample, seed import torch from torch.utils.data import DataLoader, RandomSampler, SequentialSampler,TensorDataset from torch.utils.data.distributed import DistributedSampler from transformers import RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer from transformers import DistilBertConfig, DistilBertForSequenceClassification, DistilBertTokenizer from transformers import BertConfig, BertForSequenceClassification, BertTokenizer from transformers import AdamW, WarmupLinearSchedule from transformers import glue_compute_metrics as compute_metrics from transformers import glue_output_modes as output_modes from transformers import glue_processors as processors from transformers import glue_convert_examples_to_features as convert_examples_to_features import subprocess params = { 'num_epochs': 2, 'warmup_ratio': 0.06, 'weight_decay': 0.1, 'adam_epsilon': 1e-6, 'model_name': 'roberta-large', 'max_grad_norm': 1.0, 'lr': 2e-5, 'bs': 32, 'device': 'cuda', 'task': 'cola', 'data_dir': '/home/ubuntu/glue_data/CoLA', 'max_seq_length': 50, 'metric_name': 'mcc', 'patience': 3, 'seed': 935, 'n': -1, } class DataProcessor(): '''Preprocess the data, store data loaders and tokenizer''' _TOKEN_TYPES = { 'roberta': RobertaTokenizer, 'distilbert': DistilBertTokenizer, 'bert': BertTokenizer, } def __init__(self, params): model_type = params['model_name'].split('-')[0] assert model_type in self._TOKEN_TYPES.keys() self.tok = self._TOKEN_TYPES[model_type] self.params = params self.processor = processors[self.params['task']]() self.output_mode = output_modes[self.params['task']] self.label_list = self.processor.get_labels() @staticmethod def _convert_to_tensors(features): all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long) all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long) all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) all_labels = torch.tensor([f.label for f in features], dtype=torch.long) return TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels) def _load_examples(self, tokenizer, evaluate): if evaluate: examples = self.processor.get_dev_examples(self.params['data_dir']) else: examples = self.processor.get_train_examples(self.params['data_dir']) if self.params['n'] >= 0: examples = sample(examples, self.params['n']) features = convert_examples_to_features(examples, tokenizer, label_list=self.label_list, max_length=self.params['max_seq_length'], output_mode=self.output_mode, pad_on_left=False, pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0], pad_token_segment_id=0) return self._convert_to_tensors(features) def _define_tokenizer(self): return self.tok.from_pretrained(self.params['model_name'], do_lower_case=True) def load_data(self): tokenizer = self._define_tokenizer() self.train_data = self._load_examples(tokenizer, False) self.valid_data = self._load_examples(tokenizer, True) self.train_n = len(self.train_data) self.valid_n = 
len(self.valid_data) self.params['total_steps'] = self.params['num_epochs'] * self.train_n return self.params def create_loaders(self): self.train_dataloader = DataLoader(self.train_data, shuffle=True, batch_size=self.params['bs']) self.valid_dataloader = DataLoader(self.valid_data, shuffle=False, batch_size=2*self.params['bs']) dp = DataProcessor(params) params = dp.load_data() dp.create_loaders() def show_gpu(msg): """ ref: https://discuss.pytorch.org/t/access-gpu-memory-usage-in-pytorch/3192/4 """ def query(field): return(subprocess.check_output( ['nvidia-smi', f'--query-gpu={field}', '--format=csv,nounits,noheader'], encoding='utf-8')) def to_int(result): return int(result.strip().split('\n')[0]) used = to_int(query('memory.used')) total = to_int(query('memory.total')) pct = used/total print('\n' + msg, f'{100*pct:2.1f}% ({used} out of {total})') # ### Running the Training Loop def get_training_obj(params): config = RobertaConfig.from_pretrained(params['model_name'], num_labels=2) model = RobertaForSequenceClassification.from_pretrained(params['model_name'], config=config).to(params['device']) no_decay = ['bias', 'LayerNorm.weight'] gpd_params = [ {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': params['weight_decay']}, {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0} ] optimizer = AdamW(gpd_params, lr=params['lr'], eps=params['adam_epsilon']) warmup_steps = int(params['warmup_ratio'] * params['total_steps']) scheduler = WarmupLinearSchedule(optimizer, warmup_steps=warmup_steps, t_total=params['total_steps']) return model, optimizer, scheduler show_gpu('Initial GPU memory usage:') for i in range(2): model, optimizer, scheduler = get_training_obj(params) show_gpu(f'{i}: GPU memory usage after loading training objects:') for epoch in range(1): epoch_start = time.time() model.train() for batch in dp.train_dataloader: xb,mb,_,yb = tuple(t.to(params['device']) for t in batch) outputs = model(input_ids = xb, attention_mask = mb, labels = yb) loss = outputs[0] loss.backward() optimizer.step() scheduler.step() optimizer.zero_grad() show_gpu(f'{i}: GPU memory usage after training model:') del model, optimizer, scheduler, loss, outputs torch.cuda.empty_cache() torch.cuda.synchronize() show_gpu(f'{i}: GPU memory usage after clearing cache:') ```
11-06-2019 06:15:26
11-06-2019 06:15:26
This behavior is expected. `pytorch.cuda.empty_cache()` will free the memory that *can* be freed, think of it as a garbage collector. I assume the ˋmodelˋ variable contains the pretrained model. Since the variable doesn’t get out of scope, the reference to the object in the memory of the GPU still exists and the latter is thus not freed by `empty_cache()`. Try executing ˋdel modelˋ before ˋempty_cache()` to explicitly delete the variable and remove all reference to objects in the GPU’s memory.<|||||>Thank you for taking the time to review and comment! That makes sense, however, doesn't my code (lines 3rd and 4th from end below) execute `del model` before calling `torch.cuda.empty_cache()`? ```python show_gpu('Initial GPU memory usage:') for i in range(2): model, optimizer, scheduler = get_training_obj(params) show_gpu(f'{i}: GPU memory usage after loading training objects:') for epoch in range(1): epoch_start = time.time() model.train() for batch in dp.train_dataloader: xb,mb,_,yb = tuple(t.to(params['device']) for t in batch) outputs = model(input_ids = xb, attention_mask = mb, labels = yb) loss = outputs[0] loss.backward() optimizer.step() scheduler.step() optimizer.zero_grad() show_gpu(f'{i}: GPU memory usage after training model:') del model, optimizer, scheduler, loss, outputs ## <<<<---- HERE torch.cuda.empty_cache() ## <<<<---- AND HERE torch.cuda.synchronize() show_gpu(f'{i}: GPU memory usage after clearing cache:') ```<|||||>(You are correct that the variable `model` contains the pre-trained model.) <|||||>I updated the code to print out GPU usage after loading the batch to GPU and after completing the forward pass for the first 5 batches of each run. It seems the big memory jumps occur during the first and second forward pass and the second loading of the batch to the GPU. Aside from these events, it seems the memory usage is relatively constant across a given training run. 
``` Initial GPU memory usage: 0.0% (0 out of 16130) 0: GPU memory usage after loading training objects: 14.7% (2377 out of 16130) 0: GPU memory usage after loading batch 0: 14.7% (2377 out of 16130) 0: GPU memory usage after forward pass 0: 43.6% (7029 out of 16130) 0: GPU memory usage after loading batch 1: 49.1% (7927 out of 16130) 0: GPU memory usage after forward pass 1: 70.0% (11289 out of 16130) 0: GPU memory usage after loading batch 2: 70.6% (11393 out of 16130) 0: GPU memory usage after forward pass 2: 70.6% (11393 out of 16130) 0: GPU memory usage after loading batch 3: 70.6% (11393 out of 16130) 0: GPU memory usage after forward pass 3: 70.6% (11393 out of 16130) 0: GPU memory usage after loading batch 4: 70.6% (11393 out of 16130) 0: GPU memory usage after forward pass 4: 70.6% (11393 out of 16130) 0: GPU memory usage after training model: 70.8% (11415 out of 16130) 0: GPU memory usage after clearing cache: 41.8% (6741 out of 16130) 1: GPU memory usage after loading training objects: 50.2% (8093 out of 16130) 1: GPU memory usage after loading batch 0: 50.2% (8093 out of 16130) 1: GPU memory usage after forward pass 0: 78.6% (12673 out of 16130) 1: GPU memory usage after loading batch 1: 84.3% (13593 out of 16130) Traceback (most recent call last): File "14_OOM_submission.py", line 102, in <module> outputs = model(input_ids = xb, attention_mask = mb, labels = yb) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 328, in forward head_mask=head_mask) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 181, in forward head_mask=head_mask) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 627, in forward head_mask=head_mask) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 348, in forward layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i]) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 328, in forward intermediate_output = self.intermediate(attention_output) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 300, in forward hidden_states = self.intermediate_act_fn(hidden_states) File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 128, in gelu return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))) RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 15.75 GiB total capacity; 14.24 GiB already allocated; 8.88 MiB free; 476.01 MiB cached) ```<|||||>Yes I'm sorry, I misread the code the first time. 
Did you try clearing the last batch as well?<|||||>I did, same result :( <|||||>So if you delete xb, mb, yb you still get the same memory usage after clearing the cache?<|||||>Yes. Just for kicks I also tried deleting xb, mb, yb after every forward pass (i.e., within the batch loop) and also clearing the cache and and confirmed that there is no difference here either.<|||||>Ok, I may have an idea. You can have a look at the tensors that are still tracked by python's garbage collector using something along the lines of: ```python import gc for tracked_object in gc.get_objects(): if torch.is_tensor(tracked_object): print("{} {} {}".format( type(tracked_object).__name__, "GPU" if tracked_object.is_cuda else "" , " pinned" if tracked_object.is_pinned() else "", )) ```<|||||>Thanks! I'm digging into it now... there is a lot that is showing up. Here is the output right after getting the OOM error. [output.txt](https://github.com/huggingface/transformers/files/3814757/output.txt) <|||||>Oops, quick update I just made, the `" pinned" if...` statement part should be a method not an attribute. Updating it produces a different output, showing that none of the objects are pinned. ```python import gc for tracked_object in gc.get_objects(): if torch.is_tensor(tracked_object): print("{} {} {}".format( type(tracked_object).__name__, "GPU" if tracked_object.is_cuda else "" , " pinned" if tracked_object.is_pinned() else "", )) ``` [output.txt](https://github.com/huggingface/transformers/files/3814806/output.txt) <|||||>This is the core of my confusion. Why are their so many objects in memory after I have deleted everything I can think of in my script. I did a quick summary of the output of the code you just provided after running a single training job and deleting the aforementioned objects (`del model, optimizer, scheduler, outputs, loss, xb, mb, yb, batch`) and emptying the cache (`torch.cuda.empty_cache()`): ```python import pandas as pd import gc result = [] for tracked_object in gc.get_objects(): if torch.is_tensor(tracked_object): shape = tracked_object.shape result.append({ 'name': type(tracked_object).__name__, '1d': len(shape)==1, '2d': len(shape)==2, 'nrows': shape[0], 'ncols': shape[1] if (len(shape) > 1) else None, 'gpu': tracked_object.is_cuda, 'pinned': tracked_object.is_pinned() }) d = pd.DataFrame(result) d.groupby('name')['gpu', 'pinned', '1d', '2d'].sum() ``` <img width="300" alt="Screen Shot 2019-11-06 at 10 41 50 PM" src="https://user-images.githubusercontent.com/6405428/68307729-ac373a00-00e6-11ea-81f2-a814a604365b.png"> <|||||>Do you know which objects they correspond to?<|||||>I think they correspond to the parameters and gradients of the `roberta-large` model. (So strange that they are still there after I deleted the model at the end of the training job.) 
Here is a single layer of the model: ``` (0): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=1024, out_features=1024, bias=True) (key): Linear(in_features=1024, out_features=1024, bias=True) (value): Linear(in_features=1024, out_features=1024, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=1024, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=1024, out_features=4096, bias=True) ) (output): BertOutput( (dense): Linear(in_features=4096, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ``` Here is the summary of what's being stored in GPU: <img width="800" alt="Screen Shot 2019-11-06 at 11 46 04 PM" src="https://user-images.githubusercontent.com/6405428/68313497-a560f500-00ef-11ea-95c2-24799b5f9bd4.png"><|||||>I think this person is on to something: https://discuss.pytorch.org/t/releasing-gpu-memory-after-deleting-model/48167 <|||||>Mmm I stared for a really long time at your code, and it seems to be that the memory of the `_` element of the tuple is never freed (token type ids?). I believe that in Python its throwaway meaning is a convention, and the memory is effectively allocated. That’s the only think I can see that has not been freed with your previous attempts.<|||||>Thank you so much for looking into it! The token type ids was a good idea, though it does not appear to impact anything. However, I made some progress. What I discovered: **I am unable to remove the model parameters from GPU when I load the `WarmupLinearSchedule` from `transformers`: e.g.,** ``` from transformers import WarmupLinearSchedule ... scheduler = WarmupLinearSchedule(optimizer, warmup_steps=warmup_steps, t_total=params['total_steps']) ``` However, when I use a standard scheduler (e.g., `StepLR`) I have no issue with deleting the model from the GPU: ``` scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1) ```<|||||>Here is the code for `WarmupLinearSchedule` for reference that your team wrote. Seems pretty reasonable to me, not sure why it would be causing an issue when a standard learning rate scheduler is not (e.g., both [`LambdaLR`](https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#LambdaLR) and [`StepLR`](https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#StepLR) inherit from `_LRScheduler`): ```python class WarmupLinearSchedule(LambdaLR): """ Linear warmup and then linear decay. Linearly increases learning rate from 0 to 1 over `warmup_steps` training steps. Linearly decreases learning rate from 1. to 0. over remaining `t_total - warmup_steps` steps. """ def __init__(self, optimizer, warmup_steps, t_total, last_epoch=-1): self.warmup_steps = warmup_steps self.t_total = t_total super(WarmupLinearSchedule, self).__init__(optimizer, self.lr_lambda, last_epoch=last_epoch) def lr_lambda(self, step): if step < self.warmup_steps: return float(step) / float(max(1, self.warmup_steps)) return max(0.0, float(self.t_total - step) / float(max(1.0, self.t_total - self.warmup_steps))) ```<|||||>@LysandreJik any idea why that might be?<|||||>@LysandreJik thanks for taking a look! 
To summarise my issue in a minimal example I am attaching screenshots + complete code for a minimal reproducible example that gets at the crux of the issue (w/downloading any data). <img width="1000" alt="Screen Shot 2019-11-08 at 9 08 51 AM" src="https://user-images.githubusercontent.com/6405428/68441083-f49a4900-0207-11ea-8042-708b5b852f4a.png"> <img width="986" alt="Screen Shot 2019-11-08 at 9 09 59 AM" src="https://user-images.githubusercontent.com/6405428/68441125-1c89ac80-0208-11ea-80c7-851dd9b01254.png"> <img width="1040" alt="Screen Shot 2019-11-08 at 9 10 31 AM" src="https://user-images.githubusercontent.com/6405428/68441130-20b5ca00-0208-11ea-8c4a-c7f80ac68c91.png"> Here is the complete code to reproduce these screenshots [code.txt](https://github.com/huggingface/transformers/files/3822500/code.txt). <|||||>Hi, there indeed seems to be an issue with the schedulers. Probably related to #1134<|||||>I don’t see any circular references in your code but a `gc.collect()` after `del` cannot hurt—garbage collection is clearly not what takes the longest here. Let me know if this solves the problem.<|||||>![6FAFB4C3-3046-4557-A89E-CCD3AFA48174](https://user-images.githubusercontent.com/3885044/68512638-3f07de00-0279-11ea-95d5-c3f3ed3b79cf.jpeg) So I instantiated `WarmupLinearSchedule` with `AdamW` and this is what `objgraph` gives me: there is a circular reference that seems to come with our implementation. `gc.collect()` should work as a temporary workaround, if that (I hope) is the issue. Edit: Would you mind adding the following right after `gc.collect()` and giving us the result? ```python for item in gc.garbage: print(item) ``` <|||||>Thank you so much! This order of operations does the trick for me, removing the parameters and gradients from the GPU. 1. delete objects 2. `gc.collect()` 3. `torch.cuda.empty_cache()` Strangely, running your code snippet (`for item in gc.garbage: print(item)`) after deleting the objects (but not calling `gc.collect()` or `empty_cache()`) doesn't print out anything.<|||||>Regarding the aforementioned 1. I ran a couple of quick tests. The only objects I need to delete in order to delete the model parameters and gradients is the optimizer and scheduler. (I do not need to delete the outputs, loss, batch data.) As you can see from the screengrab below the only objects on GPU are xb, mb, yb, ([32,50]) tt ([32]), loss ([]), output ([32,2]) and the parameters and gradients are not there. <img width="681" alt="Screen Shot 2019-11-09 at 10 03 56 AM" src="https://user-images.githubusercontent.com/6405428/68521154-26c7ab80-02d9-11ea-9a52-454e9b1f8721.png"> <|||||>> 1. delete objects > 2. `gc.collect()` > 3. `torch.cuda.empty_cache()` > > Strangely, running your code snippet (`for item in gc.garbage: print(item)`) after deleting the objects (but not calling `gc.collect()` or `empty_cache()`) doesn't print out anything. Sorry, maybe I wasn’t completely clear: you need to run it right *after* garbage collection. Does that completely solve your Out Of Memory issue? Thank you for raising the issue and helping us find the solution :) This is apparently bothering other users, but I am not 100% sure we can *easily* do something about it; Would you mind trying to initialize the scheduler with the following function and tell me if you still have a memory error? 
(without using `gc.collect()`)
```python
def warmup_linear_schedule(optimizer, warmup_steps, t_total, last_epoch=-1):
    def lr_lambda(step):
        if step < warmup_steps:
            return float(step) / float(max(1, warmup_steps))
        return max(0.0, float(t_total - step) / float(max(1.0, t_total - warmup_steps)))
    return LambdaLR(optimizer, lr_lambda, last_epoch=last_epoch)
```
There might be a reference mixup when we subclass `LambdaLR`; using a closure may avoid this.<|||||>Yes, it does solve my issue! I am currently running some training jobs on my GPU (thanks to your help!). I will try out the closure approach once I finish up with those and get back to you on this thread.<|||||>Thanks guys for the great discussion. (Leaving a comment so I can access this thread whenever needed, sorry for a useless comment.)<|||||>(Apologies for the delayed response.) I just tested it out and it worked as expected. No need to call `gc.collect()` now when removing parameters and gradients from the GPU. Thank you so much @rlouf, I commend you for your attentiveness to resolving the issue and updating your examples.
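For quick reference, a compact sketch of the cleanup order this thread converges on: delete the references, run garbage collection to break the reference cycle, then empty the CUDA cache. The checkpoint name and scheduler settings below are placeholders mirroring the example above:
```python
import gc
import torch
from transformers import AdamW, WarmupLinearSchedule, RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained("roberta-large").to("cuda")
optimizer = AdamW(model.parameters(), lr=2e-5)
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=10, t_total=100)

# ... training loop runs here ...

del model, optimizer, scheduler   # drop the Python references to the GPU tensors
gc.collect()                      # break the scheduler <-> optimizer reference cycle
torch.cuda.empty_cache()          # hand the cached blocks back to the GPU driver
```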
transformers
1,741
closed
GPT-2 XL
11-05-2019 18:31:20
11-05-2019 18:31:20
transformers
1,740
closed
Fix CTRL past
Fixes the issue with the re-usable past in CTRL.
11-05-2019 16:28:01
11-05-2019 16:28:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=h1) Report > Merging [#1740](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5afca00b4732f57329824e1538897e791e02e894?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1740/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1740 +/- ## ========================================== + Coverage 84.06% 84.06% +<.01% ========================================== Files 105 105 Lines 15536 15537 +1 ========================================== + Hits 13060 13061 +1 Misses 2476 2476 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1740/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.47% <100%> (+0.01%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=footer). Last update [5afca00...8da47b0](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok great, nice catch and fix!
transformers
1,739
closed
[WIP] Adding Google T5 model
Add Google T5 model: - paper: "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" https://arxiv.org/abs/1910.10683 - code: https://github.com/google-research/text-to-text-transfer-transformer The original model makes heavy use of model and data parallelism to scale the training up to an 11B-parameter model. It is based on Mesh TensorFlow (https://github.com/tensorflow/mesh). We will use a simpler version of model parallelism by simply distributing the layers on several GPUs. # Workflow for including a model from [README.md](https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/README.md) Here is an overview of the general workflow: - [x] add model/configuration/tokenization classes - [x] add conversion scripts - [x] add tests - [x] finalize Let's detail what should be done at each step ## Adding model/configuration/tokenization classes Here is the workflow for adding model/configuration/tokenization classes: - [x] copy the python files from the present folder to the main folder and rename them, replacing `xxx` with your model name, - [x] edit the files to replace `XXX` (with various casing) with your model name - [x] copy-paste or create a simple configuration class for your model in the `configuration_...` file - [x] copy-paste or create the code for your model in the `modeling_...` files (PyTorch and TF 2.0) - [x] copy-paste or create a tokenizer class for your model in the `tokenization_...` file # Adding conversion scripts Here is the workflow for the conversion scripts: - [x] copy the conversion script (`convert_...`) from the present folder to the main folder. - [x] edit this script to convert your original checkpoint weights to the current PyTorch ones. # Adding tests: Here is the workflow for adding tests: - [x] copy the python files from the `tests` sub-folder of the present folder to the `tests` subfolder of the main folder and rename them, replacing `xxx` with your model name, - [x] edit the test files to replace `XXX` (with various casing) with your model name - [x] edit the test code as needed # Final steps You can then finish the addition step by adding imports for your classes in the common files: - [x] add imports for all the relevant classes in `__init__.py` - [x] add your configuration in `configuration_auto.py` - [x] add your PyTorch and TF 2.0 model respectively in `modeling_auto.py` and `modeling_tf_auto.py` - [x] add your tokenizer in `tokenization_auto.py` - [x] add your models and tokenizer to `pipeline.py` - [x] add a link to your conversion script in the main conversion utility (currently in `__main__` but will be moved to the `commands` subfolder in the near future) - [x] edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py` file - [x] add a mention of your model in the doc: `README.md` and the documentation itself at `docs/source/pretrained_models.rst`. - [x] upload the pretrained weights, configurations and vocabulary files.
11-05-2019 15:45:24
11-05-2019 15:45:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=h1) Report > Merging [#1739](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7296f1010b6faaf3b1fb409bc5a9ebadcea51973?src=pr&el=desc) will **increase** coverage by `0.58%`. > The diff coverage is `88.79%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1739/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1739 +/- ## ========================================== + Coverage 80.35% 80.93% +0.58% ========================================== Files 114 121 +7 Lines 17095 18329 +1234 ========================================== + Hits 13736 14835 +1099 - Misses 3359 3494 +135 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2VuY29kZXJfZGVjb2Rlci5weQ==) | `32.3% <0%> (ø)` | :arrow_up: | | [transformers/tests/modeling\_tf\_transfo\_xl\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3RyYW5zZm9feGxfdGVzdC5weQ==) | `94% <100%> (ø)` | :arrow_up: | | [transformers/tests/modeling\_transfo\_xl\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RyYW5zZm9feGxfdGVzdC5weQ==) | `94.54% <100%> (ø)` | :arrow_up: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.41% <100%> (+0.49%)` | :arrow_up: | | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `95.2% <100%> (+0.06%)` | :arrow_up: | | [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.3% <100%> (-0.05%)` | :arrow_down: | | [transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `71.29% <100%> (+0.27%)` | :arrow_up: | | [transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.47% <100%> (ø)` | :arrow_up: | | [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `91.46% <100%> (ø)` | :arrow_up: | | [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `44.18% <25%> (-0.82%)` | :arrow_down: | | ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=footer). Last update [7296f10...cbb368c](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>How much work is there remaining on T5? Tried to check out this branch but it looks like the weights and SentencePiece models aren't uploaded yet.<|||||>I would like to know if this implementation of T5 is on hold because of priority switching to different tasks or maybe there was found an fundamental obstacle, which causes impossibility to move T5 to transformers library and this pull request will not be finished? Thanks for great work!<|||||>No fundamental obstacle, it was just stuck in the second place of my TODO list for a while. This model will need to be fine-tuned on a downstream task before usage though because the combination of mesh-tensorflow ops and bfloat16 seems to cause a bigger discrepancy with PyTorch's float32 ops than other architectures. <|||||>Ok merging<|||||>Good work. Will you be releasing finetuning code soon?
transformers
1,738
closed
BART
# 🌟New model addition ## Model description A method for pre-training seq2seq models by denoising text. BART outperforms previous work on a number of generation tasks (summarization/dialogue/QA), while getting similar performance to RoBERTa on SQuAD/GLUE. ## Open Source status * [ ] the model implementation is available: not yet * [ ] the model weights are available: not yet * [ ] who are the authors: @yinhanliu @ernamangoyal
11-05-2019 13:57:01
11-05-2019 13:57:01
Duplicate of #1676
transformers
1,737
closed
Documentation: Updating docblocks in optimizers.py
## Summary Updating documentation for the optimizer classes. I explicitly state that the scheduler classes are multiplying the optimizer's learning rate by a changing variable. ## Closes https://github.com/huggingface/transformers/issues/1712
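The multiplicative behaviour described above can be seen directly; here is a small sketch (toy model and arbitrary hyper-parameters, not taken from the PR) that prints how `WarmupLinearSchedule` scales the optimizer's base learning rate at each step:
```python
import torch
from transformers import AdamW, WarmupLinearSchedule

model = torch.nn.Linear(10, 2)                   # toy model, just to have parameters
optimizer = AdamW(model.parameters(), lr=1e-3)   # base learning rate
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=4, t_total=10)

for step in range(10):
    optimizer.step()
    scheduler.step()
    # the scheduler multiplies the base lr by a warmup/decay factor in [0, 1]
    print(step, scheduler.get_lr())
```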
11-05-2019 11:06:38
11-05-2019 11:06:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=h1) Report > Merging [#1737](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e99071f10578adb0191288c1f3301e9a758d6200?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1737/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1737 +/- ## ======================================= Coverage 83.93% 83.93% ======================================= Files 94 94 Lines 13951 13951 ======================================= Hits 11710 11710 Misses 2241 2241 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/1737/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL29wdGltaXphdGlvbi5weQ==) | `96.62% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1737/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `73.08% <0%> (-0.59%)` | :arrow_down: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1737/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+1.59%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=footer). Last update [e99071f...c958962](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great, thanks for this!
transformers
1,736
closed
Fix TFXLNet
PR to fix #1692 (type casting attention mask in TF 2.0 version of XLNet) cc @mfuntowicz
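This is not the actual patch, just a rough sketch of the kind of dtype cast the fix refers to: making sure an integer attention mask is cast to the score tensor's float dtype before the two are combined (the tensor values below are placeholders):
```python
import tensorflow as tf

attention_mask = tf.constant([[1, 1, 1, 0, 0]])          # int32 mask from the tokenizer
attn_scores = tf.random.normal((1, 5))                   # float32 scores (placeholder)

mask = tf.cast(attention_mask, dtype=attn_scores.dtype)  # cast before mixing dtypes
masked_scores = attn_scores + (1.0 - mask) * -1e9        # standard additive masking
print(masked_scores)
```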
11-05-2019 10:27:07
11-05-2019 10:27:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=h1) Report > Merging [#1736](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ba973342e3315471a9f44e7465cd245d7bcc5ea2?src=pr&el=desc) will **increase** coverage by `1.39%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1736/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1736 +/- ## ========================================== + Coverage 82.56% 83.95% +1.39% ========================================== Files 94 94 Lines 13951 13951 ========================================== + Hits 11519 11713 +194 + Misses 2432 2238 -194 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <100%> (ø)` | :arrow_up: | | [transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.69% <0%> (+1.35%)` | :arrow_up: | | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.36% <0%> (+2.27%)` | :arrow_up: | | [transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (+2.46%)` | :arrow_up: | | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.08% <0%> (+12.65%)` | :arrow_up: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+17.02%)` | :arrow_up: | | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=footer). Last update [ba97334...dfb61ca](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
1,735
closed
Do not use GPU when importing transformers
This PR addresses the issue #1507 and the main point is to **not** use the GPU already at import time of transformers. This was caused by the use of the class variable `dummy_inputs`, which initializes TF by using `tf.constant`. This can be prevented simply by making `dummy_inputs` a local variable, since it is not used anywhere else but in `__init__`. commit messages: * Make dummy inputs a local variable in TFPreTrainedModel.
11-05-2019 10:23:09
11-05-2019 10:23:09
Well this is actually used in other places, in `modeling_tf_pytorch_utils` for instance, and designed to be overriden by model class-specific inputs, for instance in `modeling_xlm` (hence all the CircleCI errors). Maybe making this a python property instead of an attribute would solve your issue as well?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=h1) Report > Merging [#1735](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ba973342e3315471a9f44e7465cd245d7bcc5ea2?src=pr&el=desc) will **increase** coverage by `1.39%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1735/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1735 +/- ## ========================================== + Coverage 82.56% 83.95% +1.39% ========================================== Files 94 94 Lines 13951 13952 +1 ========================================== + Hits 11519 11714 +195 + Misses 2432 2238 -194 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.45% <100%> (+0.04%)` | :arrow_up: | | [transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.69% <0%> (+1.35%)` | :arrow_up: | | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.36% <0%> (+2.27%)` | :arrow_up: | | [transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (+2.46%)` | :arrow_up: | | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.08% <0%> (+12.65%)` | :arrow_up: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+17.02%)` | :arrow_up: | | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=footer). Last update [ba97334...124409d](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>When will this pull request be merged and released?<|||||>Indeed, LGTM, let's merge
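A rough sketch, on a toy stand-in class rather than the real `TFPreTrainedModel`, of the property-based alternative suggested above: because the `tf.constant` is built lazily inside the property, importing the module no longer initializes TF (the input values are placeholders):
```python
import tensorflow as tf

class ToyTFModel(tf.keras.Model):
    @property
    def dummy_inputs(self):
        # built on first access instead of at class-definition (import) time
        return {"input_ids": tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0]])}

model = ToyTFModel()
print(model.dummy_inputs["input_ids"].shape)  # TF state is only touched here
```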
transformers
1,734
closed
add progress bar to convert_examples_to_features
It takes a considerable amount of time (~10 min) to parse the examples into features, so it is good to have a progress bar to track this.
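A minimal sketch of the kind of change, wrapping the conversion loop in `tqdm`; the example list and the loop body are placeholders rather than the actual SQuAD processing code:
```python
from tqdm import tqdm

examples = range(10000)  # stand-in for the parsed examples
features = []
for example in tqdm(examples, desc="convert_examples_to_features"):
    features.append(example)  # the real loop builds an InputFeatures object here
```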
11-05-2019 08:34:36
11-05-2019 08:34:36
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=h1) Report > Merging [#1734](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2e2577dd31ab92f21154e5280d64fa0ae90bbb8?src=pr&el=desc) will **increase** coverage by `0.28%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1734/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1734 +/- ## ========================================== + Coverage 83.67% 83.95% +0.28% ========================================== Files 94 94 Lines 13950 13951 +1 ========================================== + Hits 11673 11713 +40 + Misses 2277 2238 -39 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl90cmFuc2ZvX3hsLnB5) | `33.8% <0%> (-0.1%)` | :arrow_down: | | [transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9ncHQyLnB5) | `96.72% <0%> (+0.81%)` | :arrow_up: | | [transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `73.67% <0%> (+1.37%)` | :arrow_up: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+3.72%)` | :arrow_up: | | [transformers/tests/tokenization\_utils\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl91dGlsc190ZXN0LnB5) | `96.15% <0%> (+3.84%)` | :arrow_up: | | [transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (+3.92%)` | :arrow_up: | | [transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `74.17% <0%> (+4.39%)` | :arrow_up: | | [transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG5ldC5weQ==) | `90.24% <0%> (+6.5%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=footer). Last update [d2e2577...d790616](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Yes, nice indeed, thanks
transformers
1,733
closed
🌟New model addition: VL-BERT
# 🌟New model addition ## Model description [VL-BERT: PRE-TRAINING OF GENERIC VISUALLINGUISTIC REPRESENTATIONS](https://arxiv.org/pdf/1908.08530.pdf) > *We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. In it, each element of the input is either of a word from the input sentence, or a region-of-interest (RoI) from the input image. It is designed to fit for most of the visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align the visual-linguistic clues and benefit the downstream tasks, such as visual commonsense reasoning, visual question answering and referring expression comprehension. It is worth noting that VL-BERT achieved the first place of single model on the leaderboard of the VCR benchmark.* <!-- Important information --> ## Open Source status * [x] the model implementation is available: https://github.com/jackroos/VL-BERT * [x] the model weights are available: https://github.com/jackroos/VL-BERT/blob/master/model/pretrained_model/PREPARE_PRETRAINED_MODELS.md ## Additional context The pre-trainable generic representation for visual-linguistic tasks is becoming more and more important. <!-- Add any other context about the problem here. -->
11-05-2019 03:04:19
11-05-2019 03:04:19
We have released the code for VL-BERT: https://github.com/jackroos/VL-BERT. Thanks for your attention!<|||||>@jackroos Awesome! Thanks for your work!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@thomwolf @srush Any plans to integrate any visual BERT kind of model (like VilBERT, LXMERT or VL-BERT) in the near future? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,732
closed
TFBertForSequenceClassification.from_pretrained ERROR
### code: model = TFBertForSequenceClassification.from_pretrained('./bert-base-cased-tf_model.h5') ### Error message: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/DATA/disk1/wutianlong/python/miniconda3/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 212, in from_pretrained **kwargs File "/DATA/disk1/wutianlong/python/miniconda3/lib/python3.6/site-packages/transformers/configuration_utils.py", line 154, in from_pretrained config = cls.from_json_file(resolved_config_file) File "/DATA/disk1/wutianlong/python/miniconda3/lib/python3.6/site-packages/transformers/configuration_utils.py", line 186, in from_json_file text = reader.read() File "/DATA/disk1/wutianlong/python/miniconda3/lib/python3.6/codecs.py", line 321, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte ### Environment py:3.6 tf:2.0.0 ### I have downloaded "bert-base-cased-tf_model.h5", "bert-base-cased-vocab.txt" and "bert-base-cased-config.json" into the same directory. When I load the model with "from_pretrained", I get the error above. Please help me figure out what's wrong and how to fix it. Thanks!
11-05-2019 03:00:03
11-05-2019 03:00:03
Just give the path to your folder instead of the model so the loading method can also find the configuration files. You can read more [here](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained)<|||||>@thomwolf Thank you for your help! I can load the pretrained model now.
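For later readers, a minimal sketch of the fix described above (the directory name is a placeholder; the key point is that the folder contains the weights, config and vocab files under the default names the library looks for, e.g. `tf_model.h5`, `config.json`, `vocab.txt`):

```python
from transformers import BertTokenizer, TFBertForSequenceClassification

# Point at the directory, not at the .h5 file itself.
model_dir = "./bert-base-cased/"

tokenizer = BertTokenizer.from_pretrained(model_dir)
model = TFBertForSequenceClassification.from_pretrained(model_dir)
```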
transformers
1,731
closed
Threads running on evaluation?
Hi Transformers :) Thanks for the wonderful repo. One quick question: I was running examples/test_examples.py and noticed logs printing multiple times while running on CPU. Are there threads being spawned? Snippet from examples/test_examples.py: ** testargs = ["run_squad.py", "--train_file=./tests_samples/SQUAD/dev-v2.0-small.json", "--predict_file=./tests_samples/SQUAD/dev-v2.0-small.json", "--model_name=bert-base-uncased", "--output_dir=./tests_samples/temp_dir", "--max_steps=10", "--warmup_steps=2", "--do_train", "--do_eval", "--version_2_with_negative", "--learning_rate=2e-4", "--per_gpu_train_batch_size=2", "--per_gpu_eval_batch_size=1", "--overwrite_output_dir", "--seed=42"] *** Result where I see multiple log prints ----- 11/05/2019 07:11:33 - INFO - utils_squad - *** Example *** *** Example *** *** Example *** 11/05/2019 07:11:33 - INFO - utils_squad - unique_id: 1000000013 unique_id: 1000000013 unique_id: 1000000013 11/05/2019 07:11:33 - INFO - utils_squad - example_index: 13 example_index: 13 example_index: 13 11/05/2019 07:11:33 - INFO - utils_squad - doc_span_index: 0 doc_span_index: 0 doc_span_index: 0
11-05-2019 02:18:15
11-05-2019 02:18:15
Attaching the complete log: [test_log.txt](https://github.com/huggingface/transformers/files/3806959/test_log.txt) <|||||>Sorry, guys. The logger has been called multiple times, hence the duplicated output. Closing the issue.
transformers
1,730
closed
resize_token_embeddings doesn't work as expected for BertForMaskedLM
## 🐛 Bug <!-- Important information --> Model I am using : **BertForMaskedLM** Language I am using the model on (English, Chinese....): **multilingual** The problem arise when using: * [x] the official example scripts: **BertForMaskedLM** The tasks I am working on is: * [x] my own task or dataset: **Finetuning with newly added tokens**. ## To Reproduce Steps to reproduce the behavior: ```python # Get model model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased") # define input input = torch.tensor([[1,2,3,4,5]]) # before experiment print(f"BERT num embeddings before\t: {model.bert.embeddings.word_embeddings.num_embeddings}") print(f"LM Decoder num embedding before\t: {model.cls.predictions.decoder.out_features}") print(f"MLM Loss\t\t\t: {model(input_ids=input, masked_lm_labels=input)[0]}") # change embedding size for experiment model.resize_token_embeddings(119547 + 5) print(f"\nBERT num embeddings after\t: {model.bert.embeddings.word_embeddings.num_embeddings}") print(f"LM Decoder num embedding after\t: {model.cls.predictions.decoder.out_features}") # print failuire try: print(f"MLM Loss\t\t\t: {model(input_ids=input, masked_lm_labels=input)[0]}") except RuntimeError: print(" ---- Forward Pass failed ---- ") ``` This is the output that I see ``` BERT num embeddings before : 119547 LM Decoder num embedding before : 119547 MLM Loss : 18.269268035888672 BERT num embeddings after : 119552 LM Decoder num embedding after : 119547 ---- Forward Pass failed ---- ``` If you see the LM decoder did not change the embedding size, hence won't ever predict the new tokens. The expectation would be that `resize_token_embeddings` handles the FC inside LM decoder. I tried also using BertConfig ```python config = BertConfig.from_pretrained("bert-base-multilingual-cased") config.vocab_size = 119547 + 5 model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased", config=config) ``` Which naturally results in error while loading `state_dict` ``` RuntimeError: Error(s) in loading state_dict for BertForMaskedLM: size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([119554, 768]). size mismatch for cls.predictions.bias: copying a param with shape torch.Size([119547]) from checkpoint, the shape in current model is torch.Size([119554]). size mismatch for cls.predictions.decoder.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([119554, 768]). ```
11-05-2019 00:51:24
11-05-2019 00:51:24
Hi! May I know what version are you using? On the current master branch, this is the result: ```py import torch from transformers import BertForMaskedLM input_ids = torch.tensor([[1,2,3,4,5]]) print(model.bert.embeddings.word_embeddings.num_embeddings) # 119547 print(model.cls.predictions.decoder.out_features) # 119547 model.resize_token_embeddings(119547 + 5) print(model.bert.embeddings.word_embeddings.num_embeddings) # 119552 print(model.cls.predictions.decoder.out_features) # 119552 ```<|||||>Hmm, I am also on 2.1.1. Looks like in your example you're not loading the model from a pretrained model. (Though I am not sure how that makes a difference, but looks like it does) ```python model = BertForMaskedLM.from_pretrained(model_type, cache_dir="/tmp/bert_models") print(transformers.__version__) #2.1.1 print(model.bert.embeddings.word_embeddings.num_embeddings) # 119547 print(model.cls.predictions.decoder.out_features) # 119547 model.resize_token_embeddings(119547 + 5) print(model.bert.embeddings.word_embeddings.num_embeddings) # 119552 print(model.cls.predictions.decoder.out_features) # 119547 ``` And as per the current master also I don't see where we copy over the params from the old `cls` to the new updated `cls` with the new number of tokens as we do it for the embedding layer [here](https://github.com/huggingface/transformers/blob/de890ae67d43e1e5d031a815dab5dfed081e9a95/transformers/modeling_utils.py#L196). In the next comment, I can share the temporary hack that I have.<|||||>```python class CustomBertForMaskedLM(BertForMaskedLM): def __init__(self, config): super().__init__(config) def resize_embedding_and_fc(self, new_num_tokens): # Change the FC old_fc = self.cls.predictions.decoder self.cls.predictions.decoder = self._get_resized_fc(old_fc, new_num_tokens) # Change the bias old_bias = self.cls.predictions.bias self.cls.predictions.bias = self._get_resized_bias(old_bias, new_num_tokens) # Change the embedding self.resize_token_embeddings(new_num_tokens) def _get_resized_bias(self, old_bias, new_num_tokens): old_num_tokens = old_bias.data.size()[0] if old_num_tokens == new_num_tokens: return old_bias # Create new biases new_bias = nn.Parameter(torch.zeros(new_num_tokens)) new_bias.to(old_bias.device) # Copy from the previous weights num_tokens_to_copy = min(old_num_tokens, new_num_tokens) new_bias.data[:num_tokens_to_copy] = old_bias.data[:num_tokens_to_copy] return new_bias def _get_resized_fc(self, old_fc, new_num_tokens): old_num_tokens, old_embedding_dim = old_fc.weight.size() if old_num_tokens == new_num_tokens: return old_fc # Create new weights new_fc = nn.Linear(in_features=old_embedding_dim, out_features=new_num_tokens) new_fc.to(old_fc.weight.device) # initialize all weights (in particular added tokens) self._init_weights(new_fc) # Copy from the previous weights num_tokens_to_copy = min(old_num_tokens, new_num_tokens) new_fc.weight.data[:num_tokens_to_copy, :] = old_fc.weight.data[:num_tokens_to_copy, :] return new_fc ``` Using this, I can achieve the output as expected. ![image](https://user-images.githubusercontent.com/7589415/68256992-a3f7d580-ffe6-11e9-9496-5624152b85c2.png) <|||||>Indeed I had forgotten the line about the model initialization, but it is initialized from a pretrained checkpoint as you have done. 
Running your code gives me the same results: ```py from transformers import BertForMaskedLM import torch model = BertForMaskedLM.from_pretrained("bert-base-cased") print(model.bert.embeddings.word_embeddings.num_embeddings) # 28996 print(model.cls.predictions.decoder.out_features) # 28996 model.resize_token_embeddings(119547 + 5) print(model.bert.embeddings.word_embeddings.num_embeddings) # 119552 print(model.cls.predictions.decoder.out_features) # 119552 ``` but please note I am running on the **master branch**, not on the official pypi release. In the official release the predictions layer indeed isn't correctly tied, but it has been fixed since then with #1721. You can install from master with: `pip install git+https://github.com/huggingface/transformers`. As for where the tying happens, it is [in the superclass `PreTrainedModel`](https://github.com/huggingface/transformers/blob/master/transformers/modeling_utils.py#L114) which ties the input embeddings and output embeddings if there are any.<|||||>@LysandreJik This issue is only partially fixed. In _tie_or_clone_weights(), it tries to fix output_embeddings.bias, but output_embeddings is actually self.cls.predictions.decoder, which has no bias term. Instead, self.cls.predictions has a bias term. This bias term is not fixed, and exceptions will occur. I'll have to use @praateekmahajan 's hack temporarily 😃 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This is still an issue, I'm reopening it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@LysandreJik do you know if its `wontfix` or just in your backlog. Lmk if I can help somehow. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
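For anyone hitting this later, a short sanity-check sketch (assuming a master install where the common getters from #1721 are available) to confirm that the resized input embeddings and the LM decoder stay in sync:

```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
model.resize_token_embeddings(119547 + 5)

# With correctly tied weights both sides should report the new vocabulary size.
print(model.get_input_embeddings().num_embeddings)   # 119552
print(model.get_output_embeddings().out_features)    # 119552

# Note: as discussed above, the separate `cls.predictions.bias` term may still
# need to be resized by hand on some versions.
```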
transformers
1,729
closed
Roberta embeddings comparison
## ❓ Questions & Help **SYSTEM** OS: Linux pop-os 5.0.0 Python version: 3.6.8 Torch version: 1.3.0 Transformers version: 2.1.1 I am running this linux VM with the above software versions on a Windows 10 laptop. I wanted to compare the cosine similarity between elmo and roberta embeddings for two sequences: ```python from allennlp.commands.elmo import ElmoEmbedder from allennlp.data.tokenizers.spacy_tokenizer import SpacyTokenizer from scipy.spatial.distance import cdist import numpy as np import pandas as pd from transformers import RobertaTokenizer, RobertaModel import torch seqA = 'How many cars are in the target parking lot' seqB = 'How many countries are in the continent of Africa' comparison_df = pd.DataFrame() sim_list = [] model_types = ['elmo', 'roberta'] for model_type in model_types: sim_list = [] if model_type == 'elmo': tokenizer = SpacyTokenizer() model = ElmoEmbedder() seqA_tokens = np.array([s.text for s in tokenizer.tokenize(seqA)]) seqB_tokens = np.array([s.text for s in tokenizer.tokenize(seqB)]) seqA_embeddings = model.embed_sentence(seqA_tokens) seqB_embeddings = model.embed_sentence(seqB_tokens) seqA_embeds = np.sum(seqA_embeddings[-1], axis=0).reshape((1, seqA_embeddings.shape[2])) seqB_embeds = np.sum(seqB_embeddings[-1], axis=0).reshape((1, seqB_embeddings.shape[2])) Y = cdist(seqA_embeds, seqB_embeds, 'cosine')[0][0] cos_sim = 1.0 - Y sim_list.append(cos_sim) elif model_type == 'roberta': tokenizer = RobertaTokenizer.from_pretrained('roberta-large') model = RobertaModel.from_pretrained('roberta-large') seqA_tokens = torch.tensor(tokenizer.encode(seqA, add_special_tokens=True)).unsqueeze(0) seqB_tokens = torch.tensor(tokenizer.encode(seqB, add_special_tokens=True)).unsqueeze(0) with torch.no_grad(): seqA_embeddings = model(seqA_tokens)[0] seqB_embeddings = model(seqB_tokens)[0] seqA_embeddings = seqA_embeddings.detach().numpy() seqA_embeddings = np.sum(seqA_embeddings[-1], axis=0).reshape((1, seqA_embeddings.shape[2])) seqB_embeddings = seqB_embeddings.detach().numpy() seqB_embeddings = np.sum(seqB_embeddings[-1], axis=0).reshape((1, seqB_embeddings.shape[2])) Y = cdist(seqA_embeddings, seqB_embeddings, 'cosine')[0][0] cos_sim = 1.0 - Y sim_list.append(cos_sim) comparison_df[model_type] = sim_list print(comparison_df) ``` The result of the print statement is: ``` elmo roberta 0 0.55092 0.996526 ``` I find this strange because, despite having the same number of tokens, I would expect the similarity between the two sequences to not be so close to 1 as roberta is showing. My concern is that I am not accessing the embeddings correctly or not pooling correctly (or both), and was wondering if people here wouldn't mind letting me know what I am doing wrong? Thanks in advance!
11-04-2019 22:56:55
11-04-2019 22:56:55
Any update on this issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@aclifton314 have you deduced any meaningful intuition on this thus far? I found another similar issue, hopefully this can help: https://github.com/huggingface/transformers/issues/2298 ...<|||||>@aclifton314 In addition to the above link, I also ran a quick test with the most popular models, not just RoBERTa; it seems there are some models that can differentiate better, though I am not sure you can generalize that statement without the test.
Code and test results below: ``` import torch from torch import nn from transformers import * import numpy as np from scipy.spatial import distance from sklearn.metrics.pairwise import cosine_similarity # for 10 transformer architectures and 30 pretrained weights. # Model | Tokenizer | Pretrained weights shortcut MODELS = [(BertModel, BertTokenizer, 'bert-base-uncased'), (OpenAIGPTModel, OpenAIGPTTokenizer, 'openai-gpt'), (GPT2Model, GPT2Tokenizer, 'gpt2'), (CTRLModel, CTRLTokenizer, 'ctrl'), (TransfoXLModel, TransfoXLTokenizer, 'transfo-xl-wt103'), (XLNetModel, XLNetTokenizer, 'xlnet-base-cased'), (XLMModel, XLMTokenizer, 'xlm-mlm-enfr-1024'), (DistilBertModel, DistilBertTokenizer, 'distilbert-base-uncased'), (RobertaModel, RobertaTokenizer, 'roberta-base'), (XLMRobertaModel, XLMRobertaTokenizer, 'xlm-roberta-base'), ] for model_class, tokenizer_class, pretrained_weights in MODELS: tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights) print ("") print ("Model", ''.join(e for e in str(type(model)).split(".")[-1] if e.isalnum())) input_ids1 = torch.tensor([tokenizer.encode("How many cars are in the target parking lot", add_special_tokens=True)]) input_ids2 = torch.tensor([tokenizer.encode("How many countries are in the continent of Africa", add_special_tokens=True)]) with torch.no_grad(): last_hidden_states1 = model(input_ids1)[0].detach().numpy() last_hidden_states2 = model(input_ids2)[0].detach().numpy() last_hidden_states1_sq = np.sum(last_hidden_states1[-1], axis=0).reshape((1, last_hidden_states1.shape[2])) last_hidden_states2_sq = np.sum(last_hidden_states2[-1], axis=0).reshape((1, last_hidden_states2.shape[2])) print ("Similarity between Sent1 & Sent 2", cosine_similarity(last_hidden_states1_sq, last_hidden_states2_sq) ) ``` **Results**: Model BertModel Similarity between Sent1 & Sent 2 [[0.7234119]] Model OpenAIGPTModel Similarity between Sent1 & Sent 2 [[0.6779459]] Model GPT2Model Similarity between Sent1 & Sent 2 [[0.9986031]] Model CTRLModel Similarity between Sent1 & Sent 2 [[0.9086089]] Model TransfoXLModel Similarity between Sent1 & Sent 2 [[0.77547705]] Model XLNetModel Similarity between Sent1 & Sent 2 [[0.98956037]] Model XLMModel Similarity between Sent1 & Sent 2 [[0.623127]] Model DistilBertModel Similarity between Sent1 & Sent 2 [[0.7640958]] Model RobertaModel Similarity between Sent1 & Sent 2 [[0.96598077]] Model XLMRobertaModel Similarity between Sent1 & Sent 2 [[0.99813896]] **I am interested in Passage (Few Paragraphs) Level Embeddings based Similarity Comparison, if anyone has any viable solution, suggestions please do share.** Thanks!!
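On the passage-level question: a simple baseline worth trying is mean pooling of the final hidden states over the token dimension. This is a sketch under stated assumptions, not a recommendation of any particular model (`bert-base-uncased` is just one entry from the table above), and passages longer than the model's maximum length would need to be chunked and the chunk vectors averaged:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def passage_embedding(text):
    # Truncate to the model's maximum input size; longer passages need chunking.
    ids = tokenizer.encode(text, add_special_tokens=True, max_length=512)
    input_ids = torch.tensor([ids])
    with torch.no_grad():
        hidden_states = model(input_ids)[0]      # (1, seq_len, hidden_size)
    return hidden_states.mean(dim=1)             # mean over tokens -> (1, hidden_size)

emb1 = passage_embedding("How many cars are in the target parking lot")
emb2 = passage_embedding("How many countries are in the continent of Africa")
print(torch.nn.functional.cosine_similarity(emb1, emb2).item())
```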
transformers
1,728
closed
glue_convert_examples_to_features not working if no task is provided
## 🐛 Bug <!-- Important information --> The problem arises when using: * [X] my own modified scripts: (give details) The task I am working on is: * [X] my own task or dataset: (give details) In these lines: https://github.com/huggingface/transformers/blob/04c69db399b2ab9e3af872ce46730fbd9f17aec3/transformers/data/processors/glue.py#L66-L69 If no task is provided (even though it's an optional parameter), no processor is selected and `label_list` stays `None`. At least an error should be thrown.
11-04-2019 21:36:24
11-04-2019 21:36:24
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
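Until such an error is added, the safest calling pattern is to always pass either `task` or both `label_list` and `output_mode` explicitly. A sketch (the MRPC processor and data directory are placeholders for illustration):

```python
from transformers import BertTokenizer, glue_convert_examples_to_features
from transformers.data.processors.glue import MrpcProcessor

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
processor = MrpcProcessor()
examples = processor.get_dev_examples("./glue_data/MRPC")  # placeholder path

# Option 1: give the task name so the processor resolves label_list internally.
features = glue_convert_examples_to_features(examples, tokenizer, max_length=128, task="mrpc")

# Option 2: pass label_list and output_mode yourself.
features = glue_convert_examples_to_features(
    examples, tokenizer, max_length=128,
    label_list=processor.get_labels(), output_mode="classification")
```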
transformers
1,727
closed
loss is nan, for training on MNLI dataset
## 🐛 Bug Model I am using (Bert, XLNet....): ** Bert ** bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased") Language I am using the model on (English, Chinese....): ** English ** The problem arise when using: https://medium.com/tensorflow/using-tensorflow-2-for-state-of-the-art-natural-language-processing-102445cda54a for MNLI The tasks I am working on is: * [ ] an official GLUE/SQUaD task: ** MNLI ** ## To Reproduce Steps to reproduce the behavior: 1. create bert_validation_matched_dataset example = list(bert_validation_matched_dataset.__iter__())[0] example ({'attention_mask': <tf.Tensor: id=5022668, shape=(64, 128), dtype=int32, numpy= array([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], dtype=int32)>, 'input_ids': <tf.Tensor: id=5022669, shape=(64, 128), dtype=int32, numpy= array([[ 101, 8147, 1218, ..., 0, 0, 0], [ 101, 21637, 1103, ..., 0, 0, 0], [ 101, 1109, 2570, ..., 0, 0, 0], ..., [ 101, 1109, 7271, ..., 0, 0, 0], [ 101, 7947, 1685, ..., 0, 0, 0], [ 101, 1130, 1103, ..., 0, 0, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: id=5022670, shape=(64, 128), dtype=int32, numpy= array([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], dtype=int32)>}, <tf.Tensor: id=5022671, shape=(64,), dtype=int64, numpy= array([2, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 2, 0, 2, 0, 1, 2, 1, 1, 2, 2, 1, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 2, 0, 2, 2, 1, 2, 2, 2, 2, 2, 2, 0, 2, 0, 0, 2, 1, 2, 2, 1, 2, 0])>) 2. create bert_train_dataset such that example = list(bert_train_dataset.__iter__())[0] example ({'attention_mask': <tf.Tensor: id=5023289, shape=(32, 128), dtype=int32, numpy= array([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], dtype=int32)>, 'input_ids': <tf.Tensor: id=5023290, shape=(32, 128), dtype=int32, numpy= array([[ 101, 144, 2312, ..., 0, 0, 0], [ 101, 107, 1192, ..., 0, 0, 0], [ 101, 149, 10844, ..., 0, 0, 0], ..., [ 101, 1109, 2350, ..., 0, 0, 0], [ 101, 1262, 1173, ..., 0, 0, 0], [ 101, 1105, 1128, ..., 0, 0, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: id=5023291, shape=(32, 128), dtype=int32, numpy= array([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], dtype=int32)>}, <tf.Tensor: id=5023292, shape=(32,), dtype=int64, numpy= array([2, 1, 0, 0, 1, 2, 1, 1, 0, 2, 0, 1, 1, 2, 2, 2, 0, 2, 1, 1, 0, 1, 1, 2, 0, 1, 1, 0, 2, 2, 2, 1])>) 3. Fit the model bert_history = bert_model.fit(bert_train_dataset, epochs=1, validation_data=bert_validation_matched_dataset) <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior The expected behavior is that loss should not be nan. Fine-tuning BERT on MNLI 3/Unknown - 4s 1s/step - loss: nan - accuracy: 0.2031 ## Environment colab
11-04-2019 21:26:16
11-04-2019 21:26:16
Hi, there seems to be a problem indeed. Do you mind sharing a link to a notebook so that I can see what's wrong?<|||||>Is it related to the issue you opened 3 days ago #1704?<|||||>Yes, its is the same, but I though that was my mistake and closed it. I will provide a link to a notebook to see the code. Here it is: https://colab.research.google.com/drive/1WlhgpgERppzZ5HK6E_iappn_Kap9EKOA<|||||>Hi, is there any progress on that?<|||||>Hi, I believe the issue stems from the fact that MNLI has three labels, whereas MRPC only has two. Your model must therefore be instantiated differently: ```py from transformers import TFBertForSequenceClassification, BertConfig config = BertConfig.from_pretrained("bert-base-cased", num_labels=3) bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", config=config) ``` You can see how the configurations and models interact together in the [quickstart part of our documentation](https://huggingface.co/transformers/quickstart.html). Your model should have a correct loss after specifying the number of labels. We've fixed an issue with the way the `tensorflow_datasets` datasets were handled recently, so I recommend installing directly from source: ``` !pip install git+https://github.com/huggingface/transformers ``` You won't need to specify the `label_list` like you have done in your notebook if you install from the master branch. If you want a fully functional script that works will all glue tasks, I recommend taking a look at [examples/run_tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py) <|||||>Thank you very much! This is very helpful. I see that in the script is an if ``` if TASK == "mrpc": ``` (only for mrpc). Also Is there a similar script for pytorch as well? Seems that this issue is ready for closing.<|||||>Sure, no problem! That flag is because we're then loading the saved model in pytorch to show how easy it is to switch between pytorch/tensorflow. We're showcasing the predictions made by the PyTorch model that was trained in TensorFlow, but we're only showcasing it for MRPC. We'll update this script sometimes in the next few weeks to evaluate all the tasks.<|||||>Following your recommendations, now the loss takes value. For your info, there is an issue with roberta. But for that could be opened a new issue, if it is not a fault from my side. ``` Fine-tuning RoBERTa on MNLI WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss <|||||>Thanks for letting us know, we'll look into that!<|||||>Any updates on this one? Still getting the same error when running the `2.2.2` release. ``` WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss ````<|||||>It's not an error, it just means it's not updating those variables. Those variables (pooler) are not used when doing sequence classification.<|||||>Those weights are part of `trainable_weights` which makes little sense if they are not used when doing sequence classification. 
Maybe just remove them from trainable weights and avoid any confusion in the future?<|||||>> It's not an error, it just means it's not updating those variables. Those variables (pooler) are not used when doing sequence classification. > Those weights are part of `trainable_weights` which makes little sense if they are not used when doing sequence classification. Maybe just remove them from trainable weights and avoid any confusion in the future? We are getting a similar issue here https://github.com/huggingface/transformers/issues/2256 We are using just the Bert base model to get a vector representation from a piece of text . Since > **pooler_output**: ``tf.Tensor`` of shape ``(batch_size, hidden_size)`` Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during Bert pretraining. This output is usually *not* a good summary of the semantic content of the input, you're often better with averaging or pooling the sequence of hidden-states for the whole input sequence. That would mean that the warning I am getting is just because we are not using the pooler output?<|||||>I'm getting similar warning on roberta large architecture training on TPU for sequence classification for num_labal = 4: It is mostly the same code as except for data preparation phase and num of labels and the model is roberta-large: https://colab.research.google.com/drive/1yWaLpCWImXZE2fPV0ZYDdWWI8f52__9A#scrollTo=G_chN1Sy3IXn I defined my model as : `assert len(label_list) == 4` `def model_fn():` ` return TFRobertaForSequenceClassification.from_pretrained("roberta-large",config =RobertaConfig.from_pretrained("roberta-large",num_labels=len(label_list) ))` The warning i get are : WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification_2/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification_2/roberta/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification_2/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification_2/roberta/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification_2/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification_2/roberta/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification_2/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification_2/roberta/pooler/dense/bias:0'] when minimizing the loss. I using : - Tensorflow 2.1.0rc1 - Tranformers but with checkout : git checkout f3386 -b tanda-sequential-finetuning I don't find any diference on models between this checkout and the tranfermers version that exist right now. Altough the training is decreasing , for exemple: Iteration 20: Training: 20it [02:28, 1.46it/s] Training step 20 Accuracy: 0.901562511920929, Training loss: 0.053004395216703415 Iteration 2040: Training: 2040it [20:16, 1.89it/s] Training step 2040 Accuracy: 0.953576922416687, Training loss: 0.02367759868502617<|||||>i have the same issue. But since i'm using custom loss and need to pass mask from inputs into the loss, i'm not using eager execution. 
The problem in this case is that the model does not train and throws an error `ValueError: Variable <tf.Variable 'tf_bert_model/bert/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.`. If I use a simple standard loss with eager execution, I get only warnings and training is possible. I'm training the model using `model.fit()`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
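On the recurring pooler warning: since the sequence-classification head does not consume the pooler output, one option is to exclude the pooler from the trainable weights. This is only a sketch, assuming the pooler is reachable as an attribute of the main layer (the attribute path may differ between versions):

```python
from transformers import TFRobertaForSequenceClassification

model = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

# The pooler receives no gradient for this task; marking it non-trainable
# removes the "Gradients do not exist for variables ..." warnings.
model.roberta.pooler.trainable = False
```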
transformers
1,726
closed
Exceeding max sequence length in Roberta
## ❓ Questions & Help **SYSTEM** OS: Linux pop-os 5.0.0 Python version: 3.6.8 Torch version: 1.3.0 Transformers version: 2.1.1 I am running this linux VM with the above software versions on a Windows 10 laptop. <!-- A clear and concise description of the question. --> I am interested in comparing the embeddings for two pieces of text from the various models in hf-transformers using some metric (like cosine similarity). I am running into an issue where one piece of text is exceeding the max sequence length for roberta (899 > 512). I was wondering what the best work around for this would be? ```python from transformers import RobertaTokenizer, RobertaModel import torch tokenizer = RobertaTokenizer.from_pretrained('roberta-large') model = RobertaModel.from_pretrained('roberta-large') seqA = 'Five college students take time off to spend a peaceful vacation in a remote cabin. A book and audio tape is discovered, and its evil is found to be powerful once the incantations are read out loud. The friends find themselves helpless to stop the evil as it takes them one by one, with only one survivor left with the evil dead and desperately tries to fight to live until morning.' seqA_tokens = torch.tensor(tokenizer.encode(seqA)).unsqueeze(0) seqA_embeddings = model(seqA_tokens)[0] seqB = 'Five Michigan State University students—Ash Williams, his girlfriend, Linda; Ashs sister, Cheryl; their friend Scott; and his girlfriend Shelly—vacation at an isolated cabin in rural Tennessee. Approaching the cabin, the group notices the porch swing move on its own but suddenly stop as Scott grabs the doorknob. While Cheryl draws a picture of a clock, the clock stops, and she hears a faint, demonic voice tell her to "Join us". Her hand becomes possessed, turns pale and draws a picture of a book with a demonic face on its cover. Although shaken, she does not mention the incident. When the cellar trapdoor flies open during dinner, Shelly, Linda, and Cheryl remain upstairs as Ash and Scott investigate. They find the Naturan Demanto, a Sumerian version of the Egyptian Book of the Dead, along with an archaeologists tape recorder. Scott and Ash joke around with the items and take them upstairs. Scott plays a tape of incantations that resurrect a demonic entity. Cheryl yells for Scott to turn off the tape recorder, and a tree branch breaks one of the cabins windows. Later that evening, an agitated Cheryl goes into the woods to investigate strange noises. She gets attacked, stripped, pinned to the ground, and raped by demonically possessed trees. When she manages to escape and returns to the cabin bruised and anguished, Ash agrees to take her back into town, only to discover that the bridge to the cabin has been destroyed. Cheryl panics as she realizes that they are now trapped and the demonic entity will not let them leave. Back at the cabin, Ash listens to more of the tape, learning that the only way to kill the entity is to dismember a possessed host. As Linda and Shelly play Spades, Cheryl correctly calls out the cards, succumbs to the entity, and levitates. In a raspy, demonic voice, she demands to know why they disturbed her sleep and threatens to kill everyone. After she falls to the floor, the group checks on her; Cheryl stabs Linda in the ankle and throws Ash into a shelf. Scott knocks Cheryl into the cellar and locks her inside. Everyone fights about what to do. Shelly becomes paranoid upon seeing Cheryls demonic transformation. 
She lies down in her room but is drawn to look out of her window, where a demon crashes through and attacks her. Shelly becomes a Deadite and scratches Scotts face. Scott throws her into the fireplace, briefly burning Shellys face. As she attacks him again, Scott bisects part of her wrist with a knife and then she bites off her own mangled hand. Scott stabs her in the back with a Sumerian dagger, apparently killing her. When she reanimates, Scott dismembers her with an axe and buries the remains. Shaken by the experience, he leaves to find a way back to town. He shortly returns mortally wounded; he dies while warning Ash that the trees will not let them escape alive. When Ash checks on Linda, he is horrified to find that she has become possessed. She attacks him, but he stabs her with a Sumerian dagger. Unwilling to dismember her, he buries her instead. She revives and attacks him, forcing him to decapitate her with a shovel. Her headless body bleeds on his face as it tries to rape him, but he pushes it off and retreats to the cabin. Back inside, Ash is attacked by Cheryl, who has escaped the cellar, and the reanimated Scott. He shoots Cheryl several times, gouges Scotts eyes out, and pulls out a branch lodged in Scotts stomach, causing him to bleed out. The Deadites attack, bite, and beat Ash with a fire iron. Ash throws the Naturan Demanto into the fireplace, and the Deadites stop their attack. As the book burns, Scott, Cheryl, and the book gruesomely decompose. Demonic hands protrude from both corpses, and Cheryls decomposed body falls and splatters in front of Ash, leaving him covered in her and Scotts entrails. He hears a voice say "Join Us" but relaxes when it dies away. As day breaks, Ash stumbles outside. Before he can leave, an unseen entity rapidly trails the forest and runs through the cabin, breaking the cabin doors and attacks him from behind.' seqB_tokens = torch.tensor(tokenizer.encode(seqB)).unsqueeze(0) seqB_embeddings = model(seqB_tokens)[0] #code to compare seqA and seqB embeddings using cosine similarity ``` Here is the error: ```python Token indices sequence length is longer than the specified maximum sequence length for this model (899 > 512). 
Running this sequence through the model will result in indexing errors Traceback (most recent call last): File "/model_embeddings_test.py", line 48, in <module> seqB_embeddings = model(seqB_tokens)[0] File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/transformers/transformers/modeling_bert.py", line 692, in forward embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids) File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/transformers/transformers/modeling_roberta.py", line 60, in forward position_ids=position_ids) File "/transformers/transformers/modeling_bert.py", line 170, in forward position_embeddings = self.position_embeddings(position_ids) File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 118, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1454, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range at /opt/conda/conda-bld/pytorch_1550796191843/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191 ```
11-04-2019 18:35:47
11-04-2019 18:35:47
Hey, same here with distilgpt2 . Did you solve this?<|||||>@iedmrc I did not. I'm still waiting response from someone.<|||||>If your input sequence is too long then it cannot be fed to the model: it will crash as you have seen in your example. There are two ways to handle this: either shorten your sequence by truncating it (manually or via the `max_length` parameter in the `encode` method) or use a model that has a larger input sequence length.<|||||>Do you mean splitting by truncating? If it is not, why I have to truncate the sequence? Because gpt2 is not only capable of process just 512 or 1024 tokens. If you mean splitting, I use run_lm_finetuning.py and it seems like it already splits sequence to blocks [here](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L84). But still outputs the same warning.<|||||>I mean that you should take only the 1024 first tokens (if using GPT-2) of your sequence. As you have said, GPT-2 cannot handle more than 1024 tokens in a sequence. Yes, `run_lm_finetuning.py` outputs this warning as it first converts the entire sequence before splitting it into blocks. It is a warning but not an error. Please let me know if you get the same error (as @aclifton314): `RuntimeError: index out of range` when using the script.
transformers
1,725
closed
GPT2 text generation repeat
## ❓ Questions & Help **SYSTEM** OS: Linux pop-os 5.0.0 Python version: 3.6.8 Torch version: 1.3.0 Transformers version: 2.1.1 I am running this linux VM with the above software versions on a Windows 10 laptop. <!-- A clear and concise description of the question. --> I am running the following code: ```python import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel sentence = 'Natural language processing tasks are typically approached with' tokenizer = GPT2Tokenizer.from_pretrained('gpt2') context_tokens = tokenizer.encode(sentence, add_special_tokens=False) context = torch.tensor(context_tokens, dtype=torch.long) num_samples = 1 context = context.unsqueeze(0).repeat(num_samples, 1) generated = context model = GPT2LMHeadModel.from_pretrained('gpt2') model.eval() length = 20 with torch.no_grad(): for jj in range(5): for _ in range(length): outputs = model(generated) next_token_logits = outputs[0][:, -1, :] next_token = torch.argmax(next_token_logits, dim=-1).unsqueeze(-1) generated = torch.cat((generated, next_token), dim=1) out = generated out = out[:, len(context_tokens):].tolist() for o in out: text = tokenizer.decode(o, clean_up_tokenization_spaces=True) ``` What I was noticing was that GPT2 starts to produce repetitive text (see below) with this approach. I am not sure the best way to prevent this from happening and was wondering if others had any ideas? Thank you in advance! **OUTPUT** ``` a single task, such as a word search, and the task is then repeated. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in ```
11-04-2019 16:49:58
11-04-2019 16:49:58
Adding **temperature** (in brief, _Temperature is a hyperparameter of LSTMs - and neural networks generally - used to control the randomness of predictions by scaling the logits before applying softmax_) could be an interesting way! Here is a modified version of your code _with temperature_: ``` import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel import torch.nn.functional as F sentence = 'Natural language processing tasks are typically approached with' tokenizer = GPT2Tokenizer.from_pretrained('gpt2') context_tokens = tokenizer.encode(sentence, add_special_tokens=False) context = torch.tensor(context_tokens, dtype=torch.long) num_samples = 1 context = context.unsqueeze(0).repeat(num_samples, 1) generated = context model = GPT2LMHeadModel.from_pretrained('gpt2') model.eval() length = 20 temperature = 0.8 # ADD TEMPERATURE PARAMETER! with torch.no_grad(): for jj in range(5): for _ in range(length): outputs = model(generated) next_token_logits = outputs[0][:, -1, :] / (temperature if temperature > 0 else 1.) ### CHANGE THIS ROW next_token = torch.multinomial(F.softmax(next_token_logits, dim=-1), num_samples=1) ### CHANGE THIS ROW generated = torch.cat((generated, next_token), dim=1) out = generated out = out[:, len(context_tokens):].tolist() for o in out: text = tokenizer.decode(o, clean_up_tokenization_spaces=True) print(text) ``` The output is the following: > a hand for 1-10 minutes. However, we had recently seen that a small set of tasks can be used to process many different languages in a short period of time. We had designed the program from scratch. The purpose of the program was to generate as many variables and as many basic rules as possible. Each rule got its own "factory". Each register gets its own "rules". The terms used are:<|endoftext|>Intel's #1-Buying Power-Technology Obviously, you can change **seed** and **temperature** itself too!<|||||>@TheEdoardo93 Thanks for the feedback! Closing this issue.<|||||>just have the same issue, anyone knows how to solve it? thx!<|||||>@drizzt00s Since this posting, HF has put out a fantastic blog about generating text utilizing different sampling methods. I highly recommend it. It's well written! https://huggingface.co/blog/how-to-generate Give that a read and see if it helps you out.
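A later note for readers who end up here: on top of temperature, limiting the sampling pool to the top-k most likely tokens is another simple way to break repetition loops. A minimal sketch of the filtering step, in plain PyTorch, that can be dropped into the same generation loop as above (the k and temperature values are arbitrary choices):

```python
import torch
import torch.nn.functional as F

def sample_next_token(next_token_logits, top_k=40, temperature=0.8):
    logits = next_token_logits / temperature
    # Mask out (set to -inf) everything below the k-th largest logit.
    top_values, _ = torch.topk(logits, top_k)
    logits[logits < top_values[..., -1, None]] = -float("inf")
    probs = F.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```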
transformers
1,724
closed
Fix encode_plus
Add options to control more precisely the output of `encode_plus`. All the outputs that can't be ingested by a model are deactivated by default. `token_type_ids` can be ingested by most models and is thus activated by default, but can be turned off. Fixes #1532, among others.
11-04-2019 16:09:17
11-04-2019 16:09:17
Looks good to me, this philosophy (only model-inputs and optional non-model-inputs) seems way better to me than the previous one.
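A quick usage sketch of the behaviour described above (the exact keyword name for toggling optional outputs is an assumption here; refer to the merged signature of `encode_plus` for the final spelling):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Default: only outputs a model can ingest, token_type_ids included.
encoded = tokenizer.encode_plus("First sentence.", "Second sentence.", add_special_tokens=True)
print(encoded.keys())

# Assumed flag for turning token_type_ids off for models that do not use them.
encoded = tokenizer.encode_plus(
    "First sentence.", "Second sentence.",
    add_special_tokens=True,
    return_token_type_ids=False)
```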
transformers
1,723
closed
Fix #1623
Make use of the `--cache_dir` argument in all the examples that include it. cc @VictorSanh (distillation script) @LysandreJik (all the other scripts)
11-04-2019 15:22:58
11-04-2019 15:22:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=h1) Report > Merging [#1723](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c8f2712199771e313ab8901698b0886e1c1bf39d?src=pr&el=desc) will **increase** coverage by `1.18%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1723/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1723 +/- ## ========================================== + Coverage 83.95% 85.14% +1.18% ========================================== Files 94 94 Lines 13951 13920 -31 ========================================== + Hits 11713 11852 +139 + Misses 2238 2068 -170 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `95.45% <0%> (-0.91%)` | :arrow_down: | | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `70.55% <0%> (-0.54%)` | :arrow_down: | | [transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGwucHk=) | `75.16% <0%> (-0.28%)` | :arrow_down: | | [transformers/tests/tokenization\_utils\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl91dGlsc190ZXN0LnB5) | `96% <0%> (-0.16%)` | :arrow_down: | | [transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (ø)` | :arrow_up: | | [transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbS5weQ==) | `88.42% <0%> (ø)` | :arrow_up: | | [transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2dwdDIucHk=) | `84.19% <0%> (+0.05%)` | :arrow_up: | | [transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.8% <0%> (+0.05%)` | :arrow_up: | | [transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.75% <0%> (+0.06%)` | :arrow_up: | | [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.59% <0%> (+0.07%)` | :arrow_up: | | ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=footer). Last update [c8f2712...89d6272](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
1,722
closed
BUG for XLNet: Low GPU usage and High CPU usage, very low running speed!
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): XLNet Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: XLNet used as encoder for seq2seq generation ## To Reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Centos 7.6 * Python version: 3.6 * PyTorch version: 1.1.0 * PyTorch Transformers version (or branch): 1.2.0 * Using GPU ? V100, 16G * Distributed of parallel setup ? Use multiprocessing for multi GPU training * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
11-04-2019 15:06:29
11-04-2019 15:06:29
Hi, could you provide a minimal code that reproduces the issue so that we may help debug/check on our side?<|||||>hey @LysandreJik, I think I am experiencing the same/similar issue. Below are the sample code, a screenshot showing the high CPU and low GPU usage, and the `requirements.txt` file. In terms of technical setup, I used a docker container deployed with [vast.ai](https://vast.ai/) with RTX 2070S, CUDA 10.1, AMD Ryzen 5 2600X (6c/12t), 16GB. It used the [latest PyTorch image](https://hub.docker.com/r/pytorch/pytorch) as of the time of writing. From my observations the fluctuating CPU usage seems to max out at 600%, hinting at the 6 physical cores. However. I think I have experienced the issue with 2080 Ti and 1080 Ti, as well. Thank you! ```python # -*- coding: utf-8 -*- import torch from transformers import XLNetTokenizer, XLNetLMHeadModel import requests convai1_data = requests.get('http://convai.io/2017/data/train_full.json').json() for dial in convai1_data: utterances = [thread_line['text'] for thread_line in dial['thread']] dial['utterances'] = utterances dial['predictions'] = dict() convai1_data = [dial for dial in convai1_data if len(dial['utterances']) > 2] # https://github.com/huggingface/transformers/issues/917#issuecomment-525297746 def xlnet_sent_probability(PADDING_TEXT, text): tokenize_text = model_tokenizer.tokenize(text)[:512] tokenize_input = model_tokenizer.tokenize(PADDING_TEXT)[:511] + ['<eod>'] + tokenize_text sentence_word_probs = list() sentence_best_word_probs = list() best_words = list() for max_word_id in range((len(tokenize_input)-len(tokenize_text)), (len(tokenize_input))): sent = tokenize_input[:] input_ids = torch.tensor([model_tokenizer.convert_tokens_to_ids(sent)]) perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float) perm_mask[:, :, max_word_id:] = 1.0 target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float) target_mapping[0, 0, max_word_id] = 1.0 if torch.cuda.is_available(): input_ids = input_ids.cuda() perm_mask = perm_mask.cuda() target_mapping = target_mapping.cuda() with torch.no_grad(): outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping) next_token_logits = outputs[0] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size] predicted_prob = torch.softmax(next_token_logits[0][-1], dim=-1) predicted_prob = predicted_prob.detach().cpu().numpy() word_id = model_tokenizer.convert_tokens_to_ids([tokenize_input[max_word_id]])[0] word_prob = predicted_prob[word_id].item() sentence_word_probs.append(word_prob) return sentence_word_probs for XLNET_MODEL in ['xlnet-base-cased', 'xlnet-large-cased']: model_tokenizer = XLNetTokenizer.from_pretrained(XLNET_MODEL) model = XLNetLMHeadModel.from_pretrained(XLNET_MODEL) if torch.cuda.is_available(): model = model.cuda() model = model.eval() for dial in convai1_data: utterances = dial['utterances'] sentences_word_probs = list() for u1, u2 in zip(utterances[:-1], utterances[1:]): sentence_word_probs = xlnet_sent_probability(u1, u2) sentences_word_probs.append(sentence_word_probs) dial['predictions'][XLNET_MODEL+'_sentences_word_probs'] = sentences_word_probs ``` [requirements.txt](https://github.com/huggingface/transformers/files/3883834/requirements.txt) [cpu_and_gpu_usage](https://user-images.githubusercontent.com/8143425/69497875-e9226f80-0ee1-11ea-8098-f81df3972f3e.png) <|||||>This issue has been automatically marked as stale because it has not had recent activity. 
It will be closed if no further activity occurs. Thank you for your contributions. <|||||>The same problem. How to solve it?
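A small diagnostic sketch that may help narrow this kind of report down (it only checks device placement and caps CPU thread usage; it does not address the per-token Python loop in the snippet above, which is itself a likely bottleneck):

```python
import torch
from transformers import XLNetLMHeadModel

# Cap intra-op CPU parallelism so a single process does not saturate every core.
torch.set_num_threads(2)

model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")
if torch.cuda.is_available():
    model = model.cuda()

print(torch.cuda.is_available())        # is a GPU visible to PyTorch at all?
print(next(model.parameters()).device)  # is the model actually on cuda:0?
print(torch.get_num_threads())          # how many CPU threads torch will use
```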
transformers
1,721
closed
Add common getter and setter for input_embeddings & output_embeddings
This PR adds two attributes, `input_embeddings` and `output_embeddings`, as common properties for all the models. This makes weight tying simpler to write. Also supersedes #1598. cc @rlouf
11-04-2019 11:30:25
11-04-2019 11:30:25
Couldn't get `input_embeddings` and `output_embeddings` to work well as python properties with our class inheritance hierarchy for some reason. Switching to simple `get_xxx` and `set_xxx` for now. Maybe let's investigate that again in the future if needed.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=h1) Report > Merging [#1721](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a62835577a2a93642546858b21372e43c1a1ff8?src=pr&el=desc) will **decrease** coverage by `1.18%`. > The diff coverage is `99.17%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1721/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1721 +/- ## ========================================== - Coverage 85.14% 83.95% -1.19% ========================================== Files 94 94 Lines 13920 13951 +31 ========================================== - Hits 11852 11713 -139 - Misses 2068 2238 +170 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tests/tokenization\_bert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9iZXJ0X3Rlc3QucHk=) | `89.47% <100%> (-9.2%)` | :arrow_down: | | [transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `73.67% <100%> (-2.83%)` | :arrow_down: | | [transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2dwdDIucHk=) | `84.14% <100%> (-0.06%)` | :arrow_down: | | [transformers/tests/tokenization\_roberta\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9yb2JlcnRhX3Rlc3QucHk=) | `75.92% <100%> (-16.53%)` | :arrow_down: | | [transformers/tests/tokenization\_xlm\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl94bG1fdGVzdC5weQ==) | `82.22% <100%> (-15.51%)` | :arrow_down: | | [transformers/tests/tokenization\_distilbert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9kaXN0aWxiZXJ0X3Rlc3QucHk=) | `63.63% <100%> (-31.61%)` | :arrow_down: | | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.08% <100%> (+0.53%)` | :arrow_up: | | [transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGwucHk=) | `75.43% <100%> (+0.27%)` | :arrow_up: | | [transformers/tests/modeling\_auto\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2F1dG9fdGVzdC5weQ==) | `33.89% <100%> (-64.41%)` | :arrow_down: | | [transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbS5weQ==) | `88.42% <100%> (ø)` | :arrow_up: | | ... 
and [26 more](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=footer). Last update [8a62835...b340a91](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This looks very clean, it looks good to me!<|||||>Thanks @LysandreJik
transformers
1,720
closed
run_generation.py Runtime error
## ❓ Questions & Help **Hi everyone,** I'm currently trying to execute the run_generation example script. However, I cannot get it to work. I tried reinstalling and played around with the parameters, but the error I get is always the same. **I'm running the example from the website:** _python run_generation.py \ --model_type=gpt2 \ --model_name_or_path=gpt2_ **It allocates resources etc. However, when it starts to generate I get this error:** _Namespace(device=device(type='cuda'), length=20, model_name_or_path='gpt2', model_type='gpt2', n_gpu=1, no_cuda=False, num_samples=1, padding_text='', prompt='', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, top_k=0, top_p=0.9, xlm_lang='') 0%| | 0/20 [00:00<?, ?it/s] Traceback (most recent call last): File "run_generation.py", line 264, in <module> main() File "run_generation.py", line 249, in main device=args.device, File "run_generation.py", line 142, in sample_sequence outputs = model(**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states) File "/home/gstein/myPython/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/gstein/Executables/transformers/transformers/modeling_gpt2.py", line 533, in forward head_mask=head_mask) File "/home/gstein/myPython/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/gstein/Executables/transformers/transformers/modeling_gpt2.py", line 373, in forward input_ids = input_ids.view(-1, input_shape[-1]) RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0]_ ## Environment * OS: Ubuntu 16.04.3 * Python version: 3.5.2 * Torch version 1.0 * Cloned from Git on 4 November 2019 * Using GPU: Yes/No ## Thoughts Does anyone else encounter this behavior? Or even better: does somebody have a solution for it?
11-04-2019 11:23:25
11-04-2019 11:23:25
### Description Running `python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2` in my environment works as expected. My suggestions: - update PyTorch to latest version with `pip install --upgrade torch` - try to install Transformers' library from branch master, and not use the one installed with PyPi - try to use the CPU for inference (typically, we'll use GPU/TPU only in training a model, whereas in inference mode we'll use CPU) --> add **--no_cuda** when calling `run_generation.py` script ### My environment - __Python__ '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \n[GCC 7.3.0]' - __PyTorch__ v1.3.0 - __HuggingFace's Transformers__ v2.1.1 (installed **not** from PyPi, but with **git clone from master branch**) - __O.S.__ 'Linux-4.15.0-66-generic-x86_64-with-debian-buster-sid' - __Device__ tried with CPU and GPU (both work as expected) ### Output Example ```2019-11-04 14:16:59.671926: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2019-11-04 14:16:59.684347: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-11-04 14:16:59.684998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce GTX 980 Ti major: 5 minor: 2 memoryClockRate(GHz): 1.076 pciBusID: 0000:01:00.0 2019-11-04 14:16:59.685168: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0 2019-11-04 14:16:59.686072: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 2019-11-04 14:16:59.686812: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0 2019-11-04 14:16:59.686987: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0 2019-11-04 14:16:59.687996: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0 2019-11-04 14:16:59.688758: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0 2019-11-04 14:16:59.688840: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudnn.so.7'; dlerror: libcudnn.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64 2019-11-04 14:16:59.688849: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 2019-11-04 14:16:59.689057: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-11-04 14:16:59.712088: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz 2019-11-04 14:16:59.712609: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561d09196760 executing computations on platform Host. 
Devices: 2019-11-04 14:16:59.712622: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version 2019-11-04 14:16:59.752521: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-11-04 14:16:59.753095: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561d09161190 executing computations on platform CUDA. Devices: 2019-11-04 14:16:59.753111: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 980 Ti, Compute Capability 5.2 2019-11-04 14:16:59.753166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-11-04 14:16:59.753173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 11/04/2019 14:17:00 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /home/vidiemme/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 11/04/2019 14:17:00 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /home/vidiemme/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda 11/04/2019 14:17:01 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/vidiemme/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.085d5f6a8e7812ea05ff0e6ed0645ab2e75d80387ad55c1ad9806ee70d272f80 11/04/2019 14:17:01 - INFO - transformers.configuration_utils - Model config { "attn_pdrop": 0.1, "embd_pdrop": 0.1, "finetuning_task": null, "initializer_range": 0.02, "is_decoder": false, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12, "n_positions": 1024, "num_labels": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pruned_heads": {}, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "torchscript": false, "use_bfloat16": false, "vocab_size": 50257 } 11/04/2019 14:17:01 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin from cache at /home/vidiemme/.cache/torch/transformers/4295d67f022061768f4adc386234dbdb781c814c39662dd1662221c309962c55.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1 11/04/2019 14:17:12 - INFO - __main__ - Namespace(device=device(type='cuda'), length=20, model_name_or_path='gpt2', model_type='gpt2', n_gpu=1, no_cuda=False, num_samples=1, padding_text='', prompt='', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, top_k=0, top_p=0.9, xlm_lang='') Model prompt >>> Hi. I'm Edward and I'm a journalist. 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 27.07it/s] This writer lost her health and is now ill. 
But I want to show you something. That's Model prompt >>> ```<|||||>@Yondijr could you try to update your Torch version to a more recent version than 1.0 and let us know if it fixes the issue?<|||||>@LysandreJik Yes! However it **ONLY** works with torch 1.3 with GPU support. Interestingly even with the 1.3-cpu version I got the same error. Thanks for the tip :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
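For reference, a minimal sketch to check the installed versions before re-running the script (the exact minimum versions required are an assumption; the thread above only confirms that a recent PyTorch fixed the error):

```python
# Quick sanity check of the environment before running run_generation.py.
import torch
import transformers

print("torch:", torch.__version__)            # the thread suggests a recent 1.x release
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```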
transformers
1,719
closed
Can we fine tune GPT2 using multiple inputs?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hello. I don't think this is possible looking at the code, but just to make sure: is it possible to use multiple input texts to fine-tune GPT-2? For example, I have 5 news articles. Do I submit them all in one document, or should I separate them out? I feel like the latter makes more sense, especially if the articles are all written differently. Thanks!
11-04-2019 09:33:50
11-04-2019 09:33:50
You can merge them into the same document, but add tokens that mark the beginning and end of each article. This lets the model learn that a document is a delimited group of text and that your training file contains many such documents. **Example (assuming one line is one document):**
```
<sod> Today was a good day! <eod>
<sod> Tomorrow is going to be even better <eod>
```
Then at the text generation step (e.g. using ```run_generation.py```), you can specify a ```stop_token``` argument if you want to generate one document at a time.<|||||>Fantastic! Thank you for sharing @enzoampil 👍
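As a rough sketch of that preprocessing step (the `<sod>`/`<eod>` marker names are only the ones used above, not tokens GPT-2 defines itself), you could register them as additional special tokens and write all articles into a single training file:

```python
# Sketch: merge several articles into one training file with document markers.
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["<sod>", "<eod>"]})

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))  # make room for the new tokens

articles = ["Today was a good day!", "Tomorrow is going to be even better"]
with open("train.txt", "w", encoding="utf-8") as f:
    for article in articles:
        f.write("<sod> " + article + " <eod>\n")
# train.txt can then be used as the training file for an LM fine-tuning script
# (e.g. run_lm_finetuning.py with --train_data_file=train.txt).
```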
transformers
1,718
closed
Hello, how to upload a .ckpt file in TFBertForSequenceClassification?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I find that TFBertForSequenceClassification can only load .h5 files
11-04-2019 08:11:44
11-04-2019 08:11:44
Which model does this .ckpt file refer to? If the model is BERT or ALBERT, you can: - use the `convert_X_original_tf_checkpoint_to_pytorch --tf_checkpoint_path=dir/model.ckpt-xxx` script, where _X_ is albert or bert - load the converted PyTorch model into Transformers as usual > ## Questions & Help > I find that TFBertForSequenceClassification can only load .h5 files<|||||>@TheEdoardo93 if I load the .pt model into Transformers as usual, I think it will be a PyTorch model. But I want to use TFBertForSequenceClassification so I can train with model.fit and with multiple GPUs via a tf.distribute Strategy. What can I do?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
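A rough sketch of that two-step route for a BERT checkpoint (paths, hyperparameters and the `dir/` layout are placeholders; the directory needs a `config.json`, e.g. the original `bert_config.json` renamed):

```python
# Step 1 (shell): convert the original TF checkpoint to a PyTorch state dict:
#   python convert_bert_original_tf_checkpoint_to_pytorch.py \
#       --tf_checkpoint_path dir/model.ckpt-xxx \
#       --bert_config_file dir/bert_config.json \
#       --pytorch_dump_path dir/pytorch_model.bin
#
# Step 2: load the converted PyTorch weights into the TF2 class, which can then
# be trained with model.fit inside a tf.distribute strategy scope.
import tensorflow as tf
from transformers import TFBertForSequenceClassification

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = TFBertForSequenceClassification.from_pretrained(
        "dir/",          # directory containing pytorch_model.bin and config.json
        from_pt=True,    # tell Transformers these are PyTorch weights
        num_labels=2,
    )
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
# model.fit(train_dataset, epochs=3) can then be called as usual.
```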
transformers
1,717
closed
Retaining unknown token behavior consistency in tokenizer for BERT and XLNET
I found that the BERT tokenizer and the XLNet tokenizer behave differently. For example, with `"His name is 燚"`, BERT tokenizes it as `['his', 'name', 'is', '[UNK]']`, but XLNet tokenizes it as `['▁His', '▁name', '▁is', '▁', '燚']`. The difference may hurt a uniform training framework for BERT and XLNet, and complicates length-based processing.
11-04-2019 08:05:13
11-04-2019 08:05:13
Hi, thanks for this, but this is the expected behavior for the BERT tokenizer.
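For anyone needing to reconcile the two at the id level, a small sketch comparing both tokenizers on the same text (the example string and expected pieces are the ones from the issue; exact ids depend on the pretrained vocabularies):

```python
# Compare how BERT and XLNet handle a character that is not in their vocabularies.
from transformers import BertTokenizer, XLNetTokenizer

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
xlnet_tok = XLNetTokenizer.from_pretrained("xlnet-base-cased")

text = "His name is 燚"
bert_pieces = bert_tok.tokenize(text)    # ['his', 'name', 'is', '[UNK]']
xlnet_pieces = xlnet_tok.tokenize(text)  # ['▁His', '▁name', '▁is', '▁', '燚']

# Printing the pieces next to their ids shows where each tokenizer introduces
# its unknown handling, which is what matters for length-based alignment.
print(bert_pieces, bert_tok.convert_tokens_to_ids(bert_pieces))
print(xlnet_pieces, xlnet_tok.convert_tokens_to_ids(xlnet_pieces))
```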
transformers
1,716
closed
add qa and result
11-04-2019 05:04:28
11-04-2019 05:04:28
this is a template for albert qa(squad). all modification is on the directory "transformers/new_template ", "examples/qa_albert/template" the true tree structure you can refer to https://github.com/pohanchi/huggingface_albert, test squad 1.1 on albert_base and albert_xlarge is very close to the original paper <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=h1) Report > Merging [#1716](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a62835577a2a93642546858b21372e43c1a1ff8?src=pr&el=desc) will **decrease** coverage by `5.09%`. > The diff coverage is `0%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1716/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1716 +/- ## ========================================= - Coverage 85.14% 80.04% -5.1% ========================================= Files 94 99 +5 Lines 13920 14806 +886 ========================================= Hits 11852 11852 - Misses 2068 2954 +886 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/new\_template/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `0% <0%> (ø)` | | | [transformers/new\_template/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS9tb2RlbGluZ19hbGJlcnQucHk=) | `0% <0%> (ø)` | | | [transformers/new\_template/optimization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS9vcHRpbWl6YXRpb25fYWxiZXJ0LnB5) | `0% <0%> (ø)` | | | [transformers/new\_template/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `0% <0%> (ø)` | | | [transformers/new\_template/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS9fX2luaXRfXy5weQ==) | `0% <0%> (ø)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=footer). Last update [8a62835...38bba06](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @pohanchi, great work on your ALBERT implementation! We're in the process of merging #1683 which already implements ALBERT, and would love your opinion on the implementation.<|||||>Because I don’t know how to upload the file to your team server( https://amazonS3....), so something roughly method I use is that using drive to store some big file (pytorch model state dict)and let people download by themselves from drive. I just want to know that whether to do that. By the way, I am a beginner of coderev .what is the converge mean ? 
And is a bigger number good, or the other way around? XD<|||||>My repository uses the LAMB optimizer to train the ALBERT model. If you use the LAMB optimizer, make sure adam_epsilon is set to 1e-12 and weight decay is set to 0.1; this will help the model stay stable during training.<|||||>Also, the learning rate needs to be smaller than 5e-5; if it is larger, performance will sometimes drop.<|||||>That's nice to know, thanks @pohanchi <|||||>Closing this since ALBERT is now in the library.
transformers
1,715
closed
Retaining unknown token behavior consistency in tokenizer for BERT and XLNET
I found that the BERT tokenizer and the XLNet tokenizer behave differently. For example, with `"His name is 燚"`, BERT tokenizes it as `['his', 'name', 'is', '[UNK]']`, but XLNet tokenizes it as `['▁His', '▁name', '▁is', '▁', '燚']`. The difference may hurt a uniform training framework for BERT and XLNet, and complicates length-based processing.
11-04-2019 04:22:37
11-04-2019 04:22:37
transformers
1,714
closed
How to train from scratch
I would like to train the model from scratch. How can I drop the pretrained weights while using the same architecture as GPT-2?
11-03-2019 21:03:54
11-03-2019 21:03:54
If you want to randomly initialize a model, simply initialize it via its constructor rather than from the `from_pretrained` method: ```py from transformers import GPT2Config, GPT2Model config = GPT2Config() # define your configuration here model = GPT2Model(config) # Initialize your model from your config ```<|||||>@LysandreJik Thanks for the input. I did something like this ``` config = GPT2Config(vocab_size) model = GPT2Model(config) ``` Apart from the vocab size, I'm keeping everything else at the default value. How do I make sure that it doesn't have any pre-trained values?<|||||>The values are only loaded if you instantiate the model by calling `GPT2Model.from_pretrained`, so you're fine 🙂<|||||>@rlouf Thanks
transformers
1,713
closed
How to mask lm_labels and compute loss? --- Fine-tuning GPT-2: masking the lm_labels with '-1' and padding increases the perplexity a lot!
Hi, I wanted to finetune gpt2 in a seq2seq format. For that I followed the same approach used for convAI2 explained [here](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313). I followed two steps and each step highly increase the ppl. 1) first I masked part of the lm_labels (with '-1') which I didn't wanted to calculate loss for. Doing that I realized the ppl increased but then I also try second step below 2) I padded my input to a maximum length 256 and masked the padded indices as well. Doing that increases the valid ppl (while training) starting from ```perplexity = tensor(6.8035e+10)``` rather than ```perplexity = tensor(17.67)``` when I don't mask lm_labels for padded indices and parts of input sequence that I don't want to compute loss for. This ppl difference is huge and I am not sure if I have to taken care of sth else beside just masking lm_labels with '-1'? Here is how I modify ```run_lm_finetuning.py``` script to pad my samples to a same length: ``` def pad(x, padding, padding_length=128): return x + [padding] * (padding_length - len(x)) class TextDataset(Dataset): def __init__(self, tokenizer, file_path='train', block_size=512): assert os.path.isfile(file_path) directory, filename = os.path.split(file_path) cached_features_file = os.path.join(directory, 'cached_lm_padded' + str(block_size) + '_' + filename) if os.path.exists(cached_features_file): logger.info("Loading features from cached file %s", cached_features_file) with open(cached_features_file, 'rb') as handle: self.examples = pickle.load(handle) else: logger.info("Creating features from dataset file at %s", directory) self.examples = [] with open(file_path, encoding="utf-8") as f: text = f.readlines() text = [i.strip() for i in text] for sent in text: tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sent)) if len(tokenized_text) > block_size: # Truncate in block of block_size self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[:block_size])) else: self.examples.append(pad(tokenized_text, tokenizer.convert_tokens_to_ids('<pad>'), block_size)) logger.info("Saving features into cached file %s", cached_features_file) with open(cached_features_file, 'wb') as handle: pickle.dump(self.examples, handle, protocol=pickle.HIGHEST_PROTOCOL) def __len__(self): return len(self.examples) def __getitem__(self, item): return torch.tensor(self.examples[item]) ``` And this is how I mask the lm_labels, given labels =input_ids and start and end being my desired indices to mask all indices before start and after end (exclusive): ``` def mask_intervals(labels, start, end): x = labels.clone() for i in range(x.shape[0]): start_index = (labels[i, :] == start).nonzero() end_index = (labels[i, :] == end).nonzero() x[i, :start_index + 1] = -1 x[i, end_index:] = -1 return x ``` I want to know if masking with '-1' suffices or should I change some other part of the code regarding loss computation?
11-03-2019 20:52:53
11-03-2019 20:52:53
Can anyone help with this question? I would really appreciate it. Is there any example of fine-tuning while masking the context and padded indices? I am following [this](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313), but I get a huge perplexity compared to when I don't mask.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
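One thing worth double-checking, as a sketch under the assumption that the model's loss is a `CrossEntropyLoss` with an ignore index: the value used to mask `lm_labels` must match that ignore index, otherwise masked positions are scored as an invalid class and the perplexity explodes. Computing the loss by hand makes the two values agree explicitly (`MASK_VALUE = -1` here only because the thread above uses -1; newer library versions expect -100 instead):

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

MASK_VALUE = -1

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = torch.tensor([tokenizer.encode("question : answer")])
labels = input_ids.clone()
labels[:, :2] = MASK_VALUE            # ignore the "context" positions

logits = model(input_ids)[0]          # (batch, seq_len, vocab)
# Standard causal shift: predict token t+1 from position t.
shift_logits = logits[:, :-1, :].contiguous()
shift_labels = labels[:, 1:].contiguous()
loss = F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)),
                       shift_labels.view(-1),
                       ignore_index=MASK_VALUE)
print(loss, torch.exp(loss))          # perplexity over the unmasked positions only
```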
transformers
1,712
closed
Scheduler documentation blocks subtly wrong
## Summary Hi, I noticed that in `optimization.py` the way many of the schedulers describe the learning rate is slightly wrong. For example, `WarmupLinearSchedule` says ``` Linearly increases learning rate from 0 to 1 over `warmup_steps` training steps. Linearly decreases learning rate from 1. to 0. over remaining `t_total - warmup_steps` steps. ``` It actually multiplies the learning rate (set in the optimizer) by this amount. I understand that this is probably implied, but I think the documentation could be a bit clearer on this aspect! Many thanks, Dom
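To make the point concrete, a small sketch using the class names from the current `optimization.py` (the base learning rate and step counts are illustrative): what the schedule produces is a multiplier in [0, 1] applied to the optimizer's base learning rate, not the learning rate itself.

```python
import torch
from transformers import AdamW, WarmupLinearSchedule

model = torch.nn.Linear(10, 2)                   # dummy parameters
optimizer = AdamW(model.parameters(), lr=5e-5)   # base learning rate
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=10, t_total=100)

for step in range(5):
    optimizer.step()
    scheduler.step()
    # effective lr = base lr * schedule factor, so it warms up towards 5e-5
    print(step, scheduler.get_lr()[0])
```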
11-03-2019 20:45:11
11-03-2019 20:45:11
Indeed, happy to welcome a PR on that if you want to fix this<|||||>Closed by https://github.com/huggingface/transformers/pull/1737
transformers
1,711
closed
transformers module doesn't work with torch compiled on Cuda 10.0?
Hi, I have been using this repo for a long time and it was working okay. Very recently it gave me a ```ModuleNotFoundError: No module named 'transformers'``` error and I had to reinstall it from source. It seems like during installation my torch version (or the CUDA version it is compiled against) got updated. I reinstalled my PyTorch with cudatoolkit=10.0 since I have CUDA 10.0 and cannot update it now. But I realized that after doing that, I am not able to import transformers again. Any idea what I should do?
11-03-2019 19:58:42
11-03-2019 19:58:42
Solved! closing.
transformers
1,710
closed
CTRL does not react to the "seed" argument
Using latest master branch and run_generation.py. When passing different seeds to GPT-2 it produces different results, as expected, however CTRL does not. **GPT-2** python run_generation.py **--seed=42** --model_type=gpt2 --length=10 --model_name_or_path=gpt2 --prompt="Transformers movie is" **>>> a great example of bringing early, action-packed** python run_generation.py **--seed=22** --model_type=gpt2 --length=10 --model_name_or_path=gpt2 --prompt="Transformers movie is" **>>> a lot of fun. It's a decent one** **CTRL** python run_generation.py -**-seed=42** --model_type=ctrl --length=10 --model_name_or_path=ctrl --temperature=0 --repetition_penalty=1.2 --prompt="Links Transformers movie is" **>>> a big deal for the company, and it has been** python run_generation.py **--seed=22** --model_type=ctrl --length=10 --model_name_or_path=ctrl --temperature=0 --repetition_penalty=1.2 --prompt="Links Transformers movie is" **>>> a big deal for the company, and it has been**
11-03-2019 17:06:29
11-03-2019 17:06:29
Noticed the same thing on the Salesforce version as well. <|||||>Did you test with [this url](https://github.com/salesforce/ctrl/blob/master/generation.py)? @tanselmi If not, please specify which script you tried out. N.B. if you look at that code, lines 40-41-42-43 assign the seed passed as an argument (default value is 1337) for NumPy, PyTorch and CUDA. It's strange that the internal seed doesn't modify the CTRL model's output.<|||||>> Noticed the same thing on the Salesforce version as well. In general, is it a bug in the code implementation of the CTRL model or a "model property" (i.e. the CTRL model is not affected by a random seed set at the start of the process)?<|||||>Hi, this is not due to a bug, this is due to your argument `temperature=0`, which implies greedy sampling via argmax, rather than multinomial sampling as is usually done. You can try it out with a temperature of 0.2: With seed 42 - Links Transformers movie is **a huge hit in the box office and has been** With seed 22 - Links Transformers movie is **a big deal for the studio. The first film was**<|||||>Thanks for the update and explanation. I will give it a go.<|||||>@LysandreJik I confirm setting temperature to non-zero makes the seed argument work fine. Thank you!
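A rough sketch of the distinction with toy values (not the actual CTRL logits): with temperature 0 the division is skipped and generation falls back to a deterministic argmax, so the seed has no effect; any non-zero temperature keeps the multinomial draw, which depends on the RNG state and therefore on the seed.

```python
import torch

logits = torch.tensor([2.0, 1.5, 0.5])   # toy next-token scores

def next_token(logits, temperature):
    if temperature == 0:                  # greedy decoding, ignores the seed
        return torch.argmax(logits).item()
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

for seed in (42, 22):
    torch.manual_seed(seed)
    print(seed, next_token(logits, temperature=0), next_token(logits, temperature=0.7))
```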
transformers
1,709
closed
Fixing mode in evaluate during training
This fixes the error of not passing the mode when using the evaluate_during_training option.
11-03-2019 10:46:43
11-03-2019 10:46:43
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1709?src=pr&el=h1) Report > Merging [#1709](https://codecov.io/gh/huggingface/transformers/pull/1709?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a62835577a2a93642546858b21372e43c1a1ff8?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1709/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1709?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1709 +/- ## ======================================= Coverage 85.14% 85.14% ======================================= Files 94 94 Lines 13920 13920 ======================================= Hits 11852 11852 Misses 2068 2068 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1709?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1709?src=pr&el=footer). Last update [8a62835...e5b1048](https://codecov.io/gh/huggingface/transformers/pull/1709?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok thanks!
transformers
1,708
closed
The id of the word obtained by tokenizer.encode does not correspond to the id of the word in vocab.txt
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hello, I encountered a problem; the code is shown below. tokenizer=BertTokenizer.from_pretrained('bert-base-uncased') input_ids=torch.LongTensor(tokenizer.encode("[CLS] the 2018 boss nationals [SEP]")) print(input_ids) The result is tensor([ 101, 1996, 2760, 5795, 10342, 102]), but in the vocab.txt, the id of the word "the" is 8174, not 1996. Why is this? Is the BertTokenizer token dictionary wrong?
11-03-2019 09:31:11
11-03-2019 09:31:11
Hi, in the `bert-base-uncased` vocab.txt file, the id of the token "the" is 1996. You can see it [inside the file, where "the" is on the row 1996](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt). You can also check it out directly in the tokenizer with the following command:
```py
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.vocab.get("the")  # 1996
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
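To double-check the mapping yourself, a small round-trip sketch (the ids in the comment are the ones printed above):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer.encode("[CLS] the 2018 boss nationals [SEP]", add_special_tokens=False)
print(ids)                                   # [101, 1996, 2760, 5795, 10342, 102]
print(tokenizer.convert_ids_to_tokens(ids))  # maps each id back to its token, e.g. 1996 -> 'the'
```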
transformers
1,707
closed
bugs with run_summarization script
Hi, the evaluation part of this script is missing. I tried to complete it myself, but the code does not generate proper strings during the decoding part for me and does not converge, and the encoder and decoder weights are not tied as in the paper. Could you please add the generated text as the output of the evaluation code, and also complete the evaluation code? Maybe you could reuse some code from run_generation.py for this. Thanks.
11-03-2019 09:08:05
11-03-2019 09:08:05
The summarization/generation scripts are still work in progress. They should be included in the next release.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,706
closed
Regression Loss
I want `BertForSequenceClassification` to compute regression loss. I load the pre-trained model through ``` config = BertConfig.from_pretrained('bert-base-uncased') config.num_labels = 1 # Is it proper to set the num_label in this way? model = BertForSequenceClassification(config) ``` How shall I change the num_labels properly? Thank you!
11-03-2019 03:17:08
11-03-2019 03:17:08
That is a good way to load the labels! Please be aware, however, that by loading the configuration this way you will not load the weights for your model associated with the `bert-base-uncased` checkpoint as you would have done had you used the `BertForSequenceClassification.from_pretrained(...)` method.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
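A minimal sketch of that route (the float target value is made up): passing `num_labels=1` to `from_pretrained` keeps the pretrained weights and gives a single-output head, which is scored with an MSE (regression) loss when float labels are provided.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

inputs = torch.tensor([tokenizer.encode("a sentence to score", add_special_tokens=True)])
labels = torch.tensor([0.7])                 # float target -> regression (MSE) loss path
loss, logits = model(inputs, labels=labels)[:2]
print(loss.item(), logits.shape)             # scalar loss, predictions of shape (1, 1)
```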
transformers
1,705
closed
unable to import from utils_squad
Hi, I am using BertForQuestionAnswering on colab and I have installed ` !pip install transformers !pip install pytorch-transformers ` when I import ` from utils_squad import (read_squad_examples, convert_examples_to_features) ` I get the following error: ModuleNotFoundError: No module named 'utils_squad' any solution to this? Greetings!
11-02-2019 22:40:52
11-02-2019 22:40:52
try this ``` %%bash rm -r hugging-face-squad mkdir hugging-face-squad echo > hugging-face-squad/__init__.py cd hugging-face-squad wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/run_squad.py' wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad.py' wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad_evaluate.py' sed -i 's/utils_squad_evaluate/.utils_squad_evaluate/g' utils_squad.py sed -i 's/utils_squad/.utils_squad/g' run_squad.py import importlib hfs = importlib.import_module('.run_squad', package='hugging-face-squad') examples = hfs.read_squad_examples(input_file="train-v2.0.json", is_training=True, version_2_with_negative=True) ``` <|||||>For me, just adding ``` !wget 'https://raw.githubusercontent.com/huggingface/transformers/master/examples/run_squad.py' !wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad.py' !wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad_evaluate.py' ``` In the same directory as my script worked<|||||>> try this > > ``` > %%bash > rm -r hugging-face-squad > mkdir hugging-face-squad > echo > hugging-face-squad/__init__.py > cd hugging-face-squad > wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/run_squad.py' > wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad.py' > wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad_evaluate.py' > sed -i 's/utils_squad_evaluate/.utils_squad_evaluate/g' utils_squad.py > sed -i 's/utils_squad/.utils_squad/g' run_squad.py > > import importlib > hfs = importlib.import_module('.run_squad', package='hugging-face-squad') > > examples = hfs.read_squad_examples(input_file="train-v2.0.json", > is_training=True, > version_2_with_negative=True) > @manishiitg ...For this also getting errors....utils_squad.py, & utils_squad_evaluate.py, was not found. <|||||>Hi, when I was trying to wget the scripts, it says 404 not found... Probably it is been removed somewhere? `!wget 'https://raw.githubusercontent.com/huggingface/transformers/master/examples/run_squad.py'` `Proxy request sent, awaiting response... 404 Not Found 2020-05-21 20:18:30 ERROR 404: Not Found. `<|||||>I believe the code has been refactored a while ago, maybe try the Dec version of transformers, or update your code to get the location of the new data processors<|||||>Hi, did you solve the problem ? For me it is the same issue and I just cant find a solution how to get the proper python scripts ? It would be super kind if you could help me :)<|||||>I was facing this issue and finally, after some thinking, I tried finding utils_squad.py and utils_squad_evaluate.py from this [repo](https://github.com/nlpyang/pytorch-transformers) and copied the link of the raw version of those two files. So these commands won't give a 404 error and import then worked for me:) ` !wget 'https://raw.githubusercontent.com/nlpyang/pytorch-transformers/master/examples/utils_squad.py' ` ` !wget 'https://raw.githubusercontent.com/nlpyang/pytorch-transformers/master/examples/utils_squad_evaluate.py' `<|||||>Thanks @lazyCodes7!<|||||>> Thanks @lazyCodes7! cool!
transformers
1,704
closed
loss is nan, for training on MNLI dataset
## ❓ Questions & Help Recently I read a tutorial https://medium.com/tensorflow/using-tensorflow-2-for-state-of-the-art-natural-language-processing-102445cda54a which you can also see in this notebook https://colab.research.google.com/drive/16ClJxutkdOqXjBm_PKq6LuuAuLsInGC- In this tutorial, MRPC dataset is used. I changed the dataset from MRPC to MNLI and you can check the changes in this notebook, in the corresponding code cells. https://colab.research.google.com/drive/1mzYkrAW5XUwey4FJIU9SN0Q0RSn0tiDb with MNLI dataset, I see the following issue (\*) : print("Fine-tuning BERT on MRPC") bert_history = bert_model.fit(bert_train_dataset, epochs=3, validation_data=bert_validation_dataset) Fine-tuning BERT on MNLI Epoch 1/3 352/Unknown - 511s 1s/step - loss: nan - accuracy: 0.3416 (\*) You can see here that the loss is nan with MRPC the corresponding output is: print("Fine-tuning BERT on MRPC") bert_history = bert_model.fit(bert_train_dataset, epochs=3, validation_data=bert_validation_dataset) Fine-tuning BERT on MRPC Epoch 1/3 15/Unknown - 44s 3s/step - loss: 0.6623 - accuracy: 0.6183 ``` The only differences that I see in those two tutorials is the following: For MNLI there is a match and mismatch validation datasets, and I provide the label_list=['0', '1', '2'] in ``` glue_convert_examples_to_features ``` Could someone help me why this issue occurs? Thanks
11-02-2019 18:58:34
11-02-2019 18:58:34
extra details: example = list(bert_validation_matched_dataset.__iter__())[0] example ``` {'hypothesis': <tf.Tensor: id=4914985, shape=(64,), dtype=string, numpy= array([b'The St. Louis Cardinals have always won.', b'The fortress was built a number of years after the caravanserai.', b'Mastihohoria is a collection of twenty mastic villages built be the genoese.', b'Reggae is the most popular music style in Jamaica.', b'I am able to receive mail at my workplace.', b'Men have lower levels of masculinity than in the decade before now.', b'Clinton has several similarities to Whitewater or Flytrap.', b'A search has been conducted for an AIDS vaccine.', b'We can acknowledge there is fallout from globalization around the world.', b'My feelings towards pigeons are filled with animosity.', b'She could see through the ghosts with ease.', b'Leading organizations want to be sure their processes are successful.', b'The Postal Service spends considerable sums on cost analysis.', b'Indeed we got away from the original subject.', b'Economic growth continued apace, with many people employed by the railroad repair shop.', b'Neither side is actually interested in a settlement at this time.', b'The rooms are opulent, and used for formal, elegant events.', b"The East side of the square is where the Old King's House stands.", b'The islands are part of France now instead of just colonies.', b'A stoichiometry of 1.03 is typical when the FGD process is not producing gypsum by-product', b'You can hire the equipment needed for windsurfing at Bat Galim Beach. ', b"There isn't enough room for an airport on the island.", b'The setup rewards good business practices.', b'People sacrifice their lives for farmers and slaves.', b"She doesn't like people like me. ", b"It's nothing like a drug hangover.", b'The philosophy was to seize opportunities when the economy is doing poorly.', b"Bill Clinton isn't a rapist", b'Various episodes depict that he is a member.', b"Bellagio's water display was born from this well received show.", b'Fannie Mae had terrible public-relations.', b"Gododdin's accomplishments have been recorded in a Welsh manuscript.", b'I can imagine how you are troubled by insects up there', b'Howard Berman is a Democrat of the House.', b'Gore dodged the draft.', b"Jon was glad that she wasn't. ", b'Section 414 helps balance allowance allocations for units.', b'Reducing HIV is important, but there are also other worthy causes.', b'I think there are some small colleges that are having trouble.', b'The best hotels in the region are in Hassan. ', b'She mentioned approaching the law with a holistic approach/', b"Select this option for Finkelstein's understanding of why this logic is expedient.", b"It's impossible to have a plate hand-painted to your own design in Hong Kong.", b"We could annex Cuba, but they wouldn't like that.", b'She really needs to mention it', b"The basics don't need to be right first.", b'Standard Costing was applied to the ledger.', b'The exhibition was too bare and too boring. ', b'The uncle had no match in administration; certainly not in his inefficient and careless nephew, Charles Brooke.', b'The Legacy Golf Club is just inside city limits.', b'They do not give money to legal services.', b'Do you want some coffee?', b'In 1917, the Brittish General Allenby surrendered the city using a bed-sheet.', b'Daniel explained what was happening.', b'That never happened to me.', b'You would have a prescription.', b"It's lovely speaking with you. 
", b'Each Pokemon card pack is filled with every rare card a kid could want.', b'He generally reports very well on all kinds of things.', b'The final rule was declared not to be an economically significant regulator action.', b'Dana, this conversation bored me.', b'Andratx is on the northwest coast and the Cape of Formentor is further east.', b'Sunblock is an unnecessary precaution if you are in the water.', b'U.S. consumers and factories in East Asia benefit from imports.'], dtype=object)>, 'idx': <tf.Tensor: id=4914986, shape=(64,), dtype=int32, numpy= array([3344, 3852, 5009, 5398, 2335, 647, 7823, 8927, 2302, 4800, 8628, 637, 7756, 2189, 3146, 8990, 4759, 2592, 96, 5144, 2373, 7698, 2862, 1558, 7639, 3860, 416, 5768, 9299, 3149, 2927, 5914, 4960, 2880, 8203, 7787, 7556, 6465, 9781, 4053, 1217, 7178, 39, 8885, 6666, 8157, 3995, 1758, 5552, 4476, 3325, 7537, 7940, 8409, 7899, 4104, 2874, 4845, 3934, 5351, 2982, 5235, 2614, 6318], dtype=int32)>, 'label': <tf.Tensor: id=4914987, shape=(64,), dtype=int64, numpy= array([2, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 2, 0, 2, 0, 1, 2, 1, 1, 2, 2, 1, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 2, 0, 2, 2, 1, 2, 2, 2, 2, 2, 2, 0, 2, 0, 0, 2, 1, 2, 2, 1, 2, 0])>, 'premise': <tf.Tensor: id=4914988, shape=(64,), dtype=string, numpy= array([b"yeah well losing is i mean i'm i'm originally from Saint Louis and Saint Louis Cardinals when they were there were uh a mostly a losing team but", b'Beside the fortress lies an 18th-century caravanserai, or inn, which has been converted into a hotel, and now hosts regular folklore evenings of Turkish dance and music.', b'The twenty mastic villages known collectively as mastihohoria were built by the Genoese in the 14 15th centuries.', b'Jamaican music ska and, especially, reggae has since the 1970s been exported and enjoyed around the world.', b'for me now the address is the same you know my my office address', b'[W]omen mocking men by calling into question their masculinity is also classified as sexual harassment, the paper added.', b"Watergate remains for many an unhealed wound, and Clinton's critics delight in needling him with Watergate comparisons--whether to Whitewater or Flytrap.", b'The search for an AIDS vaccine currently needs serious help, with the U.S. 
government, the biggest investor in the effort, spending less than 10 percent of its AIDS-research budget on the problem.', b'First, we can acknowledge, and maybe even do something about, some of the disaffecting fallout from globalization, such as pollution and cultural dislocation.', b'I hate pigeons.', b'From that spot she could see all of them and, should she need to, she could see through them as well.', b'We also have found that leading organizations strive to ensure that their core processes efficiently and effectively support mission-related outcomes.', b'Also, considerable sums are spent by the Postal Service analyzing the costs associated with worksharing, and mailers/competitors incur considerable expense litigating their positions on worksharing before the Postal Rate Commission.', b'yeah well we veered from the subject', b'Growth continued for ten years, and by 1915 the town had telephones, round-the-clock electricity, and a growing population many of whom worked in the railroad repair shop.', b"And if, as ultimately happened, no settlement resulted, we could shrug our shoulders, say, 'Hey, we tried,' and act like unsuccessful brokers to an honorable peace.", b'Lavishly furnished and decorated, with much original period furniture, the rooms are used for ceremonial events, visits from foreign dignitaries, and EU meetings.', b"On the west side of the square is Old King's House (built in 1762), which was the official residence of the British governor; it was here that the proclamation of emancipation was issued in 1838.", b'All of the islands are now officially and proudly part of France, not colonies as they were for some three centuries.', b'8 A stoichiometry of 1.03 is typical when the FGD process is producing gypsum by-product, while a stoichiometry of 1.05 is needed to produce waste suitable for a landfill.', b' The equipment you need for windsurfing can be hired from the beaches at Tel Aviv (marina), Netanya, Haifa (at Bat Galim beach), Tiberias, and Eilat.', b'Since there is no airport on the island, all visitors must arrive at the port, Skala, where most of the hotels are located and all commercial activity is carried out.', b'The entire setup has an anti-competitive, anti-entrepreneurial flavor that rewards political lobbying rather than good business practices.', b'Why bother to sacrifice your lives for dirt farmers and slavers?', b'She hates me."', b'and the same is true of the drug hangover you know if you', b'In the meantime, the philosophy is to seize present-day opportunities in the thriving economy.', b'Most of the Clinton women were in their 20s at the time of their Clinton encounter', b'On various episodes he is a member, along with Bluebeard and the Grim Reaper, of the Jury of the Damned; he takes part in a snake-bludgeoning (in a scandal exposed by a Bob Woodward book); his enemies list is used for dastardly purposes; even his dog Checkers is said to be bound for hell.', b'This popular show spawned the aquatic show at the Bellagio.', b"Not surprisingly, then, Fannie Mae's public-relations operation is unparalleled in Washington.", b'Little is recorded about this group, but they were probably the ancestors of the Gododdin, whose feats are told in a seventh-century Old Welsh manuscript.', b'i understand i can imagine you all have much trouble up there with insects or', b'Howard Berman of California, an influential Democrat on the House International Relations Committee.', b'An article explains that Al Gore enlisted for the Vietnam War out of fealty to his father 
and distaste for draft Gore deplored the inequity of the rich not having to serve.', b"I am glad she wasn't, said Jon.", b'If necessary to meeting the restrictions imposed in the preceding sentence, the Administrator shall reduce, pro rata, the basic Phase II allowance allocations for each unit subject to the requirements of section 414.', b"Second, reducing the rate of HIV transmission is in any event not the only social goal worth If it were, we'd outlaw sex entirely.", b"yes well yeah i am um actually actually i think that i at the higher level education i don't think there's so much of a problem there it's pretty much funded well there are small colleges that i'm sure are struggling", b'The most comfortable way to see these important Hoysala temples is to visit them on either side of an overnight stay at Hassan, 120 km (75 miles) northwest of Mysore.', b'We saw a whole new model develop - a holistic approach to lawyering, one-stop shopping, she said. ', b"Click here for Finkelstein's explanation of why this logic is expedient.", b'In Hong Kong you can have a plate, or even a whole dinner service, hand-painted to your own design.', b"of course you could annex Cuba but they wouldn't like that a bit", b'She hardly needs to mention it--the media bring it up anyway--but she invokes it subtly, alluding (as she did on two Sunday talk shows) to women who drive their daughters halfway across the state to shake my hand, a woman they dare to believe in.', b'First, get the basics right, that is, the blocking and tackling of financial reporting.', b'STANDARD COSTING - A costing method that attaches costs to cost objects based on reasonable estimates or cost studies and by means of budgeted rates rather than according to actual costs incurred.', b'NEH-supported exhibitions were distinguished by their elaborate wall panels--educational maps, photomurals, stenciled treatises--which competed with the objects themselves for space and attention.', b'More reserved and remote but a better administrator and financier than his uncle, Charles Brooke imposed on his men his own austere, efficient style of life.', b'Also beyond city limits is the Legacy Golf Club in the nearby suburb of Henderson.', b'year, they gave morethan a half million dollars to Western Michigan Legal Services.', b"'Would you like some tea?'", b'On a December day in 1917, British General Allenby rode up to Jaffa Gate and dismounted from his horse because he would not ride where Jesus walked; he then accepted the surrender of the city after the Ottoman Turks had fled (the flag of surrender was a bed-sheet from the American Colony Hotel).', b'Daniel took it upon himself to explain a few things.', b'yep same here', b"because then they'll or you have a prescription", b"well it's a pleasure talking with you", b'By seeding packs with a few high-value cards, the manufacturer is encouraging kids to buy Pokemon cards like lottery tickets.', b"He reported masterfully on the '72 campaign and the Hell's Angels.", b'The final rule was determined to be an economically significant regulatory action by the Office of Management and Budget and was approved by OMB as complying with the requirements of the Order on March 26, 1998.', b"well Dana it's been really interesting and i appreciate talking with you", b'The dramatic cliffs of the Serra de Tramuntana mountain range hug the coastline of the entire northwest and north, from Andratx all the way to the Cape of Formentor.', b'Keep young skins safe by covering them with sunblock or a T-shirt, even when in the 
water.', b'In the short term, U.S. consumers will benefit from cheap imports (as will U.S. multinationals that use parts made in East Asian factories).'], dtype=object)>} ``` and example1 = list(bert_train_dataset.__iter__())[0] example1 ``` ({'attention_mask': <tf.Tensor: id=4915606, shape=(32, 128), dtype=int32, numpy= array([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], dtype=int32)>, 'input_ids': <tf.Tensor: id=4915607, shape=(32, 128), dtype=int32, numpy= array([[ 101, 1105, 1128, ..., 0, 0, 0], [ 101, 1448, 2265, ..., 0, 0, 0], [ 101, 17037, 20564, ..., 0, 0, 0], ..., [ 101, 178, 1274, ..., 0, 0, 0], [ 101, 6249, 1107, ..., 0, 0, 0], [ 101, 146, 1354, ..., 0, 0, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: id=4915608, shape=(32, 128), dtype=int32, numpy= array([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], dtype=int32)>}, <tf.Tensor: id=4915609, shape=(32,), dtype=int64, numpy= array([1, 0, 1, 1, 0, 1, 0, 0, 2, 0, 0, 0, 2, 2, 2, 1, 0, 1, 2, 0, 2, 2, 0, 1, 0, 1, 1, 2, 1, 1, 2, 2])>) ```<|||||>In the above seems that ```bert_validation_matched_dataset``` 's format is wrong. I would expect to be similar to ```bert_train_dataset```. ``` bert_validation_matched_dataset``` is produced with the following code: bert_validation_matched_dataset = glue_convert_examples_to_features(validation_matched_dataset, bert_tokenizer, 128, 'mnli', label_list=['0', '1', '2']) bert_validation_matched_dataset = validation_matched_dataset.batch(64) Any idea why that didn't work?<|||||>OK, I found out. bert_validation_matched_dataset = glue_convert_examples_to_features(validation_matched_dataset, bert_tokenizer, 128, 'mnli', label_list=['0', '1', '2']) bert_validation_matched_dataset = **validation_matched_dataset**.batch(64) I have to write bert_validation_matched_dataset there
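In other words, a sketch of the corrected preprocessing (the checkpoint name mirrors the notebook and is an assumption here): convert the raw split to features first, then batch the converted dataset rather than the raw one.

```python
import tensorflow_datasets as tfds
from transformers import BertTokenizer, glue_convert_examples_to_features

bert_tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
data = tfds.load("glue/mnli")
validation_matched_dataset = data["validation_matched"]

# Convert to features first...
bert_validation_matched_dataset = glue_convert_examples_to_features(
    validation_matched_dataset, bert_tokenizer, 128, "mnli",
    label_list=["0", "1", "2"])
# ...then batch the *converted* dataset (batching the raw split was the bug).
bert_validation_matched_dataset = bert_validation_matched_dataset.batch(64)
```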
transformers
1,703
closed
Add `model.train()` line to ReadMe training example
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I added `model.train()` line to ReadMe training example, at this section https://github.com/huggingface/transformers#optimizers-bertadam--openaiadam-are-now-adamw-schedules-are-standard-pytorch-schedules ``` for batch in train_data: model.train() loss = model(batch) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Gradient clipping is not in AdamW anymore (so you can use amp without issue) optimizer.step() scheduler.step() optimizer.zero_grad() ``` In the API [ https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained ] it says >The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated) To train the model, you should first set it back in training mode with model.train() And I see this is the case for every example in the 'examples' folder, such as this one https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L227 ``` for step, batch in enumerate(epoch_iterator): inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch) inputs = inputs.to(args.device) labels = labels.to(args.device) model.train() outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) loss = outputs[0] # model outputs are always tuple in transformers (see doc) ``` There is a `model.train()` in each training loop, before the loss/output line. Some may develop their training loops based on the readme example, inadvertently deactivating that dropout layers during training.
11-02-2019 18:31:35
11-02-2019 18:31:35
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1703?src=pr&el=h1) Report > Merging [#1703](https://codecov.io/gh/huggingface/transformers/pull/1703?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a62835577a2a93642546858b21372e43c1a1ff8?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1703/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1703?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1703 +/- ## ======================================= Coverage 85.14% 85.14% ======================================= Files 94 94 Lines 13920 13920 ======================================= Hits 11852 11852 Misses 2068 2068 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1703?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1703?src=pr&el=footer). Last update [8a62835...7a8f3ca](https://codecov.io/gh/huggingface/transformers/pull/1703?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Instead of merging 5 commits I pushed a commit on master 68f7064 of which you are an author.
transformers
1,702
closed
Interpretation of output from fine-tuned BERT for Masked Language Modeling.
## ❓ Questions & Help Hi, I need some help with interpretation of the output from a fine-tuned bert model. Here is what I have done so far: I used `run_lm_finetuning.py` to fine-tune 'bert-base-uncased' model on my domain specific text data. I fine-tuned the model for 3 epochs, and I get the following files after fine-tuning the process: - config.json - pytorch_model.bin - training_args.bin along with some other files such as : - special_tokens_map.json - tokenizer_config.json - vocab.txt I am loading this fine-tuned model using the following code: ```` config_class, model_class, tokenizer_class = BertConfig, BertForMaskedLM, BertTokenizer config = config_class.from_pretrained('path/to/config/dir/config.json') self.text_feature_extractor = model_class.from_pretrained('path/to/fine-tuned-model/pytorch_model.bin', from_tf=bool('.ckpt' in 'path/to/fine-tuned-model/pytorch_model.bin'), config=config) ```` And get output from the model like this: `raw_out_q = self.text_feature_extractor(q) ` where q is a padded batch of shape B, SEQ_L tokenized by pre-trained BertTokenizer with the following line of code: `sentence_indices = self.tokenizer.encode(sentence, add_special_tokens=True)` The output from the fine-tuned model is a tuple of 2 objects: First one is of shape `(B, SEQ_L, 30522)` The second object is however another tuple of 13 tensors: Each tensor is of shape` (B, SEQ_L, 768) ` Now here are my questions: How should I interpret this output coming from the model? For the first tensor, I know vocab_size for bert_base_uncased was 30522, but what does it represent here? For the second tuple, do these 13 tensors represent output from each bert layer? For each sentence, I want a sentence-level feature and word-level features. I am aware that sentence-level feature is usually the first position in the sequence (against the "[CLS]" token), but any clarification will be super helpful and much appreciated. Thanks! <!-- A clear and concise description of the question. -->
11-02-2019 04:11:46
11-02-2019 04:11:46
My bad! I should have read the documentation for the BertForMaskedLM code: ``` Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs: **masked_lm_loss**: (`optional`, returned when ``masked_lm_labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``: Masked language modeling loss. **ltr_lm_loss**: (`optional`, returned when ``lm_labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``: Next token prediction loss. **prediction_scores**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, config.vocab_size)`` Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``) list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings) of shape ``(batch_size, sequence_length, hidden_size)``: Hidden-states of the model at the output of each layer plus the initial embedding outputs. **attentions**: (`optional`, returned when ``config.output_attentions=True``) list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``: Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. ``` The second tuple of tensors is the hidden_states from each layer plus the initial embedding layer (num_hidden_layers=12 in the config file). Can someone please confirm the layer order? Should I extract output from the top or the bottom? I have seen code examples using features from the last 4 layers via indices like -1, -2, -3, -4, but I just want to be sure that this is what I should be doing as well. Thanks!<|||||>Okay. I understand the output now and am selecting only the last hidden representation, but my GPU gets filled if I set config.output_hidden_states=True for the fine-tuned LM model (as it returns the output of all hidden layers). How can I make it return output from selected layers only?<|||||>Hi, thank you for sharing. I just want to confirm that the **last hidden state** can be obtained with index -1? I am confused that there are 13 hidden states, and I do not know whether the first or the last one is the final hidden state. Hoping for your reply. Thanks!<|||||>Yeah, I have the same confusion. But considering the convention used in other examples and issues, I assume index = -1 corresponds to the last hidden layer. Anyone is welcome to correct me if I am wrong.<|||||>@aurooj, @mingbocui you're both right: the last hidden state corresponds to the index -1. The index 0 corresponds to the output of the embedding layer, the index 1 corresponds to the output of the first layer, etc.<|||||>@LysandreJik Thanks for your reply. Can you please help me with this? > Okay. I understand the output now and am selecting only the last hidden representation, but my GPU gets filled if I set config.output_hidden_states=True for the fine-tuned LM model (as it returns the output of all hidden layers). How can I make it return output from selected layers only? <|||||>We currently have no way of returning the output of the selected layers only. You would have to manually re-implement the forward method of BERT and output only the layers you're interested in. 
If you're looking to fine-tune BERT with the `bert-base-uncased` checkpoint and are concerned about the memory usage, maybe you could use DistilBERT with the `distilbert-base-uncased` checkpoint, which is based on BERT but has a much lower memory usage.<|||||>Oh okay. Good to have some pointers to get what I want. Thanks!<|||||>@LysandreJik using the same approach as @aurooj mentioned, I'm getting a tuple of length 1, specifically this one > First one is of shape (B, SEQ_L, 30522) Also, there are so many checkpoint-### folders created (starting from 500 to 8500). Is this the expected behavior?<|||||>I'm not sure I understand your first question. You can manage when the checkpoints are saved using the `--save_steps` argument. You can check the full list of arguments in the script (now renamed `run_language_modeling`) available [here](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L478-L613).<|||||>Looking at the code was extremely helpful: I was able to continue fine-tuning from the saved checkpoints and understand the arguments, though complete documentation would be ideal. About my first question, I did the same things @aurooj mentioned, but instead of a 2-element tuple, I got a 1-element tuple from `BertForMaskedLM` of shape `(B, SEQ_L, 30522)`<|||||>You can show the arguments of that script alongside their documentation by running `python run_language_modeling.py --help`. Concerning what was returned by `BertForMaskedLM`, that seems normal as this is what is mentioned in the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm). The output tuple can contain up to 5 items, 4 of which are optional.<|||||>My bad, even after reading the documentation; somehow I got it into my mind that 3 were optional, not 4. Thank you. Note to self, > Reading documentation is paramount too.<|||||>Hey @aurooj @nauman-chaudhary - I am very interested to see how you have done the fine-tuning for the masked LM task. I would really appreciate it if you could share some insights. Thank you very much in advance!
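For readers landing on this thread, here is a minimal sketch (not taken from the issue) of pulling the last hidden state out of `BertForMaskedLM`, following the maintainer's clarification that index 0 is the embedding output and index -1 is the last layer. The checkpoint name and example sentence are placeholders, and the indexing assumes the tuple-style outputs used throughout this thread.

```python
import torch
from transformers import BertConfig, BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
model = BertForMaskedLM.from_pretrained('bert-base-uncased', config=config)
model.eval()

input_ids = torch.tensor([tokenizer.encode("an example sentence", add_special_tokens=True)])
with torch.no_grad():
    outputs = model(input_ids)

prediction_scores = outputs[0]               # (batch, seq_len, vocab_size)
hidden_states = outputs[1]                   # tuple of 13 tensors: embeddings + 12 layers
last_hidden_state = hidden_states[-1]        # (batch, seq_len, hidden_size) -> word-level features
sentence_feature = last_hidden_state[:, 0]   # the [CLS] position as a sentence-level feature
```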
transformers
1,701
closed
When I used gpt2, I got an error
## ❓ Questions & Help <!-- A clear and concise description of the question. --> ` File "test.py", line 92, in <module> tokenizer = GPT2Tokenizer.from_pretrained('gpt2') File "/home/ruiy/anaconda3/envs/pytorch101/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 282, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/home/ruiy/anaconda3/envs/pytorch101/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 411, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/ruiy/anaconda3/envs/pytorch101/lib/python3.6/site-packages/transformers/tokenization_gpt2.py", line 122, in __init__ self.encoder = json.load(open(vocab_file, encoding="utf-8")) File "/home/ruiy/anaconda3/envs/pytorch101/lib/python3.6/json/__init__.py", line 299, in load parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File "/home/ruiy/anaconda3/envs/pytorch101/lib/python3.6/json/__init__.py", line 354, in loads return _default_decoder.decode(s) File "/home/ruiy/anaconda3/envs/pytorch101/lib/python3.6/json/decoder.py", line 339, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/home/ruiy/anaconda3/envs/pytorch101/lib/python3.6/json/decoder.py", line 355, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 295508 (char 295507)`
11-02-2019 03:05:05
11-02-2019 03:05:05
We had a temporary issue with our `gpt2` model, should be fixed now.<|||||>I reinstalled `transformers` (`pip install transformers`), and the error remains.<|||||>> We had a temporary issue with our `gpt2` model, should be fixed now. I reinstalled transformers (pip install transformers), and the error remains. <|||||>> We had a temporary issue with our `gpt2` model, should be fixed now. ??<|||||>Use the `force_download` option (see the sketch below).
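For anyone hitting the same JSON error, the suggestion above most likely refers to the `force_download` flag of `from_pretrained`, which re-downloads the tokenizer files instead of reusing a possibly corrupted cache entry. A hedged sketch:

```python
from transformers import GPT2Tokenizer

# Re-download the gpt2 tokenizer files, bypassing the (possibly corrupted) local cache
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', force_download=True)
```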
transformers
1,700
closed
Best practices for Bert passage similarity. Perhaps further processing Bert vectors; Bert model inside another model
## ❓ Questions & Help I'm trying to develop BERT passage similarity, specifically question/answer retrieval. The architecture pools BERT contextualized embeddings for passages of text and then computes cosine similarity. Basically the architecture is as described here https://github.com/re-search/DocProduct#architecture Unless I skipped over something, as far as I can tell, there isn't an established way to do this in the transformers library. It looks like I have to take the BERT output and do further processing on it. So in this case, BERT would be a component of a larger model. This is the best prototype code I could come up with. ``` !pip install transformers import torch from transformers import BertModel, BertTokenizer from transformers import AdamW import torch.nn as nn import torch.nn.functional as F # Overall model, with Bert model inside it class OverallModel(nn.Module): def __init__(self): super(OverallModel, self).__init__() self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') self.bertModel = BertModel.from_pretrained('bert-base-uncased') self.lossFunction = nn.BCEWithLogitsLoss( reduction = 'none' ) def forward(self, queries, docs, targets ): #Set bert model to train mode self.bertModel.train() #tokenize queries and pad to maximum length queriesT = [self.tokenizer.encode(piece, add_special_tokens=True) for piece in queries] widthQ = max(len(d) for d in queriesT) queriesP = [d + [0] * (widthQ - len(d)) for d in queriesT] queriesTen= torch.tensor(queriesP) #tokenize answers and pad to maximum length docsT = [ self.tokenizer.encode(piece, add_special_tokens=True) for piece in docs] widthD = max(len(d) for d in docsT) docsP = [d + [0] * (widthD - len(d)) for d in docsT] docsTen= torch.tensor(docsP) #Get contextualized Bert embeddings bertEmbedsQ = self.bertModel(queriesTen)[0] bertEmbedsA = self.bertModel(docsTen)[0] #Mean pool embeddings for each text, and normalize to unit vectors bertEmbedsQPooled = torch.mean(bertEmbedsQ, 1) bertEmbedsAPooled = torch.mean(bertEmbedsA, 1) bertEmbedsQPN = F.normalize(bertEmbedsQPooled, p=2, dim=1) bertEmbedsAPN = F.normalize(bertEmbedsAPooled, p=2, dim=1) #Take dot products of query and answer embeddings, calculate loss with target. dotProds = torch.bmm(bertEmbedsQPN.unsqueeze(1), bertEmbedsAPN.unsqueeze(2) ).squeeze() indLoss = self.lossFunction( dotProds, targets ) finalLoss = torch.mean(indLoss) return finalLoss QAmodel = OverallModel() optimizer = AdamW(QAmodel.parameters()) optimizer.zero_grad() #Prepare inputs and target queriesI = ['What do cats eat', 'where do apples come from', 'Who is Mr Rogers', 'What are kites'] docsI = ['they eat catfood', 'from trees' , 'President of Iceland', 'used for diving'] targets = torch.tensor( [1, 1, 0 ,0] , dtype = torch.float32 ) # Calculate loss, gradients, and update weights loss = QAmodel.forward(queriesI, docsI, targets) loss.backward() optimizer.step() ``` This approach seems to be working well. The gradients are flowing all the way through, the weights are updating, etc. Is this code the best way to train BERT passage similarity?
11-02-2019 02:54:49
11-02-2019 02:54:49
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
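As a complement to the training prototype in the issue above, here is a hedged, self-contained sketch (not from the issue) of scoring query/passage similarity at inference time with the same mean-pool-then-normalize recipe; the checkpoint name and example strings are placeholders. Note that, like the original prototype, it averages over padding positions; masking the mean with the attention mask would be a natural refinement.

```python
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

def embed(texts):
    # Tokenize, pad to the longest sequence, mean-pool the token embeddings,
    # and normalize to unit length so dot products are cosine similarities.
    ids = [tokenizer.encode(t, add_special_tokens=True) for t in texts]
    width = max(len(x) for x in ids)
    padded = torch.tensor([x + [0] * (width - len(x)) for x in ids])
    with torch.no_grad():
        token_embeddings = model(padded)[0]   # (batch, seq_len, hidden)
    pooled = token_embeddings.mean(dim=1)
    return F.normalize(pooled, p=2, dim=1)

queries = embed(['What do cats eat'])
passages = embed(['they eat catfood', 'used for diving'])
print(queries @ passages.t())                 # cosine similarity scores
```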
transformers
1,699
closed
Why do I get 13 hidden layers?
## ❓ Questions & Help Hi, when I run ``` model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', output_hidden_states=True) hidden_layers = model([input]) ``` the length of hidden_layers is 13. But the base model should have 12 layers, right? And if I want to get the last hidden layer, should I use "hidden_layers[-1]"? Thanks in advance!
11-02-2019 02:47:08
11-02-2019 02:47:08
The first element is the embedding output. Yes, use `hidden_layers[-1]`.
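A short hedged sketch of what the answer above looks like in code (tuple-style outputs, `roberta-base` as a placeholder checkpoint): with `output_hidden_states=True` the returned tuple contains the hidden states after the logits, and the 13 entries are the embedding output followed by the 12 layer outputs.

```python
import tensorflow as tf
from transformers import RobertaTokenizer, TFRobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base',
                                                           output_hidden_states=True)

input_ids = tf.constant([tokenizer.encode("an example sentence", add_special_tokens=True)])
outputs = model(input_ids)
hidden_states = outputs[1]             # tuple of 13 tensors: embeddings + 12 layers
last_hidden_state = hidden_states[-1]  # (batch, seq_len, hidden_size)
```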
transformers
1,698
closed
add_tokens() leading to wrong behavior
Hey folks, I am using GPT2Tokenizer and tried add_tokens() below, but it gives unintended behavior, and this is because of #612 in tokenization_utils.py (`sub_text = sub_text.strip()`). As you can see, "dollars" is split because of the space removal. Could someone look into it, please? ``` t1 = GPT2Tokenizer.from_pretrained('gpt2') t1.tokenize('my value in dignums dollars') ``` Output - ['my', 'Ġvalue', 'Ġin', 'Ġdign', 'ums', '**Ġdollars**'] ``` t2 = GPT2Tokenizer.from_pretrained('gpt2') t2.add_tokens(['dignums']) t2.tokenize('my value in dignums dollars') ``` Output - ['my', 'Ġvalue', 'Ġin', 'dignums', **'d', 'oll', 'ars'**]
11-02-2019 01:11:33
11-02-2019 01:11:33
I guess that's why add_prefix_space=True would be needed here.
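To make the suggestion above concrete, here is a hedged sketch; whether `add_prefix_space` is passed to `tokenize()`/`encode()` or set on the tokenizer depends on the transformers version, and note that it also prepends a space marker to the first word.

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained('gpt2')
tok.add_tokens(['dignums'])
# With GPT-2's byte-level BPE the leading space is part of the token ('Ġdollars'),
# so stripping it around an added token changes how the following word is split.
# add_prefix_space puts the space back before each sub-text.
print(tok.tokenize('my value in dignums dollars', add_prefix_space=True))
```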
transformers
1,697
closed
PPLM (squashed)
Update: not the case anymore as #1695 was merged. ~~This also contains https://github.com/huggingface/transformers/pull/1695 to make it easy to test in a stand-alone way, but those changes won't be in the final commit~~
11-01-2019 20:36:31
11-01-2019 20:36:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1697?src=pr&el=h1) Report > Merging [#1697](https://codecov.io/gh/huggingface/transformers/pull/1697?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e85855f2c408f65a4aaf5d15baab6ca90fd26050?src=pr&el=desc) will **increase** coverage by `0.03%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1697/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1697?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1697 +/- ## ========================================= + Coverage 83.97% 84% +0.03% ========================================= Files 105 97 -8 Lines 15570 14340 -1230 ========================================= - Hits 13075 12047 -1028 + Misses 2495 2293 -202 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1697?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (-2.19%)` | :arrow_down: | | [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.64% <0%> (-1.22%)` | :arrow_down: | | [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `59.45% <0%> (-0.55%)` | :arrow_down: | | [transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `96.08% <0%> (-0.24%)` | :arrow_down: | | [transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `92.1% <0%> (-0.11%)` | :arrow_down: | | [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.68% <0%> (-0.05%)` | :arrow_down: | | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.46% <0%> (-0.02%)` | :arrow_down: | | [transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <0%> (ø)` | :arrow_up: | | [transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (ø)` | :arrow_up: | | [transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <0%> (ø)` | :arrow_up: | | ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/1697/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1697?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1697?src=pr&el=footer). Last update [e85855f...226d137](https://codecov.io/gh/huggingface/transformers/pull/1697?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
1,696
closed
Invalid argument error with TFRoberta on GLUE
## 🐛 Bug I am unable to get TFRoberta working on the GLUE benchmark. Model I am using (Bert, XLNet....): TFRoberta Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [x] an official GLUE/SQUaD task: *MRPC* * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. Copy the following script into a file name `run_glue_task_tf.py`: ```python import tensorflow as tf import tensorflow.keras as keras import tensorflow_datasets.public_api as tfds import transformers from absl import app from absl import flags from absl import logging FLAGS = flags.FLAGS flags.DEFINE_multi_string("tasks", None, "One or more tasks to be used for pretraining") flags.DEFINE_integer('num_epochs', 3, 'Number of epochs to train') flags.DEFINE_integer('batch_size', 32, 'Batch size to use for training') flags.DEFINE_integer('eval_batch_size', 64, 'Batch size to use when evaluating validation/test sets') flags.DEFINE_boolean('use_xla', False, 'Enable XLA optimization') flags.DEFINE_boolean('use_amp', False, 'Enable AMP optimization') flags.DEFINE_integer('max_seq_len', 128, 'Maximum sequence length') flags.DEFINE_string('model_name', 'bert-base-cased', 'Name of pretrained transformer model to load') def configure_tf(): logging.info(('Enabling' if FLAGS.use_xla else 'Disabling') + ' XLA optimization') tf.config.optimizer.set_jit(FLAGS.use_xla) logging.info(('Enabling' if FLAGS.use_xla else 'Disabling') + ' auto mixed precision (AMP)') tf.config.optimizer.set_experimental_options({'auto_mixed_precision': FLAGS.use_amp}) def get_model(): logging.info('Loading pre-trained TF model from %s', FLAGS.model_name) model = transformers.TFAutoModelForSequenceClassification.from_pretrained(FLAGS.model_name) opt = keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08) if FLAGS.use_amp: logging.debug('Enabling loss scaling') opt = keras.mixed_precision.experimental.LossScaleOptimizer(opt, 'dynamic') loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=opt, loss=loss, metrics=[metric]) return model def run_task(task: str, model: keras.Model, tokenizer): data, info = tfds.load(task, with_info=True) if task.startswith('glue'): glue_task = task[len('glue/'):] def load_features(split: tfds.core.splits.NamedSplit): logging.debug('Converting %s.%s to features', task, split) is_xlnet: bool = 'xlnet' in model.name.lower() return transformers.glue_convert_examples_to_features(examples=data[split], tokenizer=tokenizer, max_length=FLAGS.max_seq_len, output_mode=transformers.glue_output_modes[glue_task], pad_on_left=is_xlnet, # Pad on the left for XLNet pad_token=tokenizer.convert_tokens_to_ids( [tokenizer.pad_token])[0], pad_token_segment_id=4 if is_xlnet else 0, task=glue_task) train = load_features(tfds.Split.TRAIN) valid = load_features(tfds.Split.VALIDATION) else: raise ValueError('Unsupported task: %s' % task) train = train.shuffle(128).batch(FLAGS.batch_size).repeat(FLAGS.num_epochs) valid = valid.batch(FLAGS.eval_batch_size).take(FLAGS.eval_batch_size) logging.info('Training %s on %s...', model.name, task) history = model.fit(x=train, validation_data=valid) print(task, 'Performance:') print(history) def main(argv): del argv # Unused. 
logging.debug('Loading tokenizer from %s', FLAGS.model_name) tokenizer = transformers.AutoTokenizer.from_pretrained(FLAGS.model_name) model = get_model() for task in FLAGS.tasks: print('-' * 20, task, '-' * 20) run_task(task, model, tokenizer) if __name__ == '__main__': app.run(main) ``` 2. Run the following command: ```bash $ python run_glue_task_tf.py --tasks glue/mrpc --model_name roberta-base ``` 3. Experience the following error: ``` I1101 14:04:14.134916 46912496418432 run_pretraining.py:76] Training tf_roberta_for_sequence_classification on glue/mrpc... /data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. " WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss. W1101 14:04:25.996359 46912496418432 optimizer_v2.py:1029] Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss. /data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. " WARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss. W1101 14:04:36.157998 46912496418432 optimizer_v2.py:1029] Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss. 
2019-11-01 14:04:45.689286: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: indices[28,20] = 1 is not in [0, 1) [[{{node tf_roberta_for_sequence_classification/roberta/embeddings/token_type_embeddings/embedding_lookup}}]] 2019-11-01 14:04:45.900194: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: indices[28,20] = 1 is not in [0, 1) [[{{node tf_roberta_for_sequence_classification/roberta/embeddings/token_type_embeddings/embedding_lookup}}]] [[Shape/_8]] 1/Unknown - 28s 28s/stepTraceback (most recent call last): File "run_pretraining.py", line 96, in <module> app.run(main) File "/data/conda/envs/transformers/lib/python3.6/site-packages/absl/app.py", line 299, in run _run_main(main, args) File "/data/conda/envs/transformers/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main sys.exit(main(argv)) File "run_pretraining.py", line 92, in main run_task(task, model, tokenizer) File "run_pretraining.py", line 77, in run_task history = model.fit(x=train, validation_data=valid) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit use_multiprocessing=use_multiprocessing) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 324, in fit total_epochs=epochs) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch batch_outs = execution_function(iterator) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function distributed_function(input_fn)) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__ result = self._call(*args, **kwds) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 520, in _call return self._stateless_fn(*args, **kwds) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1823, in __call__ return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1141, in _filtered_call self.captured_inputs) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat ctx, args, cancellation_manager=cancellation_manager) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 511, in call ctx=ctx) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute six.raise_from(core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[28,20] = 1 is not in [0, 1) [[node tf_roberta_for_sequence_classification/roberta/embeddings/token_type_embeddings/embedding_lookup (defined at /data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_distributed_function_58202] Function call stack: distributed_function 2019-11-01 14:04:47.519198: W 
tensorflow/core/kernels/data/generator_dataset_op.cc:102] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated. [[{{node PyFunc}}]] ``` ## Expected behavior The script should train on GLUE when using TFRoberta just as it trains when using TFBert. ## Environment OS: CentOS-7 Python version: 3.6.9 PyTorch version: 1.2.0 PyTorch Transformers version (or branch): 2.1.1 (master from git) Using GPU ? Yes Distributed of parallel setup ? No Any other relevant information: ## Additional context
11-01-2019 18:08:12
11-01-2019 18:08:12
I have encountered this embedding error and I found that wrapping the model in a tf.distribute strategy fixes it (extremely odd...). You can wrap the model in `tf.distribute.OneDeviceStrategy` and it will work (see the sketch below).<|||||>Did you ever figure it out?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
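A hedged sketch of the workaround suggested in the first comment, building and compiling the model inside a `tf.distribute.OneDeviceStrategy` scope before calling `fit()`. The device string and checkpoint are placeholders, and the fix is only the commenter's observation, not a documented requirement.

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

strategy = tf.distribute.OneDeviceStrategy(device='/gpu:0')
with strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained('roberta-base')
    opt = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08)
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
    model.compile(optimizer=opt, loss=loss, metrics=[metric])
# model.fit(train_dataset, validation_data=valid_dataset) as in the script above
```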
transformers
1,695
closed
model forwards can take an inputs_embeds param
11-01-2019 16:29:21
11-01-2019 16:29:21
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1695?src=pr&el=h1) Report > Merging [#1695](https://codecov.io/gh/huggingface/transformers/pull/1695?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68f7064a3ea979cdbdadfed62ad655eac4c53463?src=pr&el=desc) will **increase** coverage by `0.07%`. > The diff coverage is `98.17%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1695/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1695?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1695 +/- ## ========================================== + Coverage 83.95% 84.03% +0.07% ========================================== Files 94 94 Lines 13951 14021 +70 ========================================== + Hits 11713 11782 +69 - Misses 2238 2239 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1695?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnQucHk=) | `98.59% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2dwdDIucHk=) | `94.79% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `96.6% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX29wZW5haS5weQ==) | `96.04% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.4% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.75% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.39% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <ø> (ø)` | :arrow_up: | | ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/1695/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1695?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1695?src=pr&el=footer). Last update [68f7064...00337e9](https://codecov.io/gh/huggingface/transformers/pull/1695?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM. Feel free to add the TF version or merge if you don't want to add them now.
transformers
1,694
closed
solves several bugs in the summarization code
Hi, this pull request solves several small bugs in the summarization script: - call the evaluation code - fix iterating over the batch in the eval part - create the folders before saving the encoder/decoder to avoid a crash - first create output_dir, then use it when saving the model - tokenizer.add_special_tokens_single_sequence does not exist anymore; changed to tokenizer.build_inputs_with_special_tokens (see the sketch below) - fix the empty checkpoints in the evaluation - add evaluate=False in the load_and_cache_examples calls to fix the issue for now, but the dataset still needs to be split into train/validation - add the missing per_gpu_eval_batch_size argument The code still needs more features added and bugs resolved; this just solves the obvious existing bugs. Please see the bug report #1674 I opened yesterday on the remaining issues.
11-01-2019 16:16:30
11-01-2019 16:16:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1694?src=pr&el=h1) Report > Merging [#1694](https://codecov.io/gh/huggingface/transformers/pull/1694?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/93d2fff0716d83df168ca0686d16bc4cd7ccb366?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1694/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1694?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1694 +/- ## ======================================= Coverage 85.14% 85.14% ======================================= Files 94 94 Lines 13920 13920 ======================================= Hits 11852 11852 Misses 2068 2068 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1694?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1694?src=pr&el=footer). Last update [93d2fff...5b50eec](https://codecov.io/gh/huggingface/transformers/pull/1694?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thank you for your PR. As I said in the issue, we’re still working on the summarization; you can follow the changes in the ˋexample-summarization` branch.
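One of the fixes listed in the PR description replaces the removed `tokenizer.add_special_tokens_single_sequence` with `tokenizer.build_inputs_with_special_tokens`. A hedged sketch of the newer call (checkpoint and sentence are placeholders; the exact usage inside the summarization script may differ):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("an example sentence"))
# Adds the model-specific special tokens; for BERT this yields [CLS] + token_ids + [SEP]
with_special_tokens = tokenizer.build_inputs_with_special_tokens(token_ids)
```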
transformers
1,693
closed
TFXLNet Incompatible shapes in relative attention
## 🐛 Bug TFXLNet fails to run due to incompatible shapes when computing relative attention. Model I am using (Bert, XLNet....): TFXLNet Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [x] an official GLUE/SQUaD task: *MRPC* * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. Fix line 542 in `transformers/modeling_tf_xlnet.py` to address #1692: ```python input_mask = 1.0 - tf.cast(attention_mask, dtype=dtype_float) ``` 2. Run `run_tf_glue_xlnet.py` script: ```python import os import tensorflow as tf import tensorflow_datasets from transformers import XLNetForSequenceClassification, TFXLNetForSequenceClassification, glue_convert_examples_to_features, XLNetTokenizer # script parameters BATCH_SIZE = 32 EVAL_BATCH_SIZE = BATCH_SIZE * 2 USE_XLA = False USE_AMP = False # tf.config.optimizer.set_jit(USE_XLA) # tf.config.optimizer.set_experimental_options({"auto_mixed_precision": USE_AMP}) # Load tokenizer and model from pretrained model/vocabulary tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased') model = TFXLNetForSequenceClassification.from_pretrained('xlnet-base-cased') # Load dataset via TensorFlow Datasets data, info = tensorflow_datasets.load('glue/mrpc', with_info=True) train_examples = info.splits['train'].num_examples valid_examples = info.splits['validation'].num_examples # Prepare dataset for GLUE as a tf.data.Dataset instance train_dataset = glue_convert_examples_to_features( data['train'], tokenizer, max_length=512, output_mode="classification", task='mrpc', pad_on_left=True, # pad on the left for xlnet pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0], pad_token_segment_id=4 ) valid_dataset = glue_convert_examples_to_features( data['validation'], tokenizer, max_length=512, output_mode="classification", task='mrpc', pad_on_left=True, # pad on the left for xlnet pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0], pad_token_segment_id=4 ) train_dataset = train_dataset.shuffle(128).batch(BATCH_SIZE).repeat(-1) valid_dataset = valid_dataset.batch(EVAL_BATCH_SIZE) # Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule opt = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08) if USE_AMP: # loss scaling is currently required when using mixed precision opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, 'dynamic') loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=opt, loss=loss, metrics=[metric]) # Train and evaluate using tf.keras.Model.fit() train_steps = train_examples//BATCH_SIZE valid_steps = valid_examples//EVAL_BATCH_SIZE history = model.fit(train_dataset, epochs=2, steps_per_epoch=train_steps, validation_data=valid_dataset, validation_steps=valid_steps) # Save TF2 model os.makedirs('./save/', exist_ok=True) model.save_pretrained('./save/') # Load the TensorFlow model in PyTorch for inspection pytorch_model = XLNetForSequenceClassification.from_pretrained('./save/', from_tf=True) # Quickly test a few predictions - MRPC is a paraphrasing task, let's see if our model learned the task sentence_0 = 'This research was consistent with his findings.' sentence_1 = 'His findings were compatible with this research.' 
sentence_2 = 'His findings were not compatible with this research.' inputs_1 = tokenizer.encode_plus(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt') inputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt') pred_1 = pytorch_model(**inputs_1)[0].argmax().item() pred_2 = pytorch_model(**inputs_2)[0].argmax().item() print('sentence_1 is', 'a paraphrase' if pred_1 else 'not a paraphrase', 'of sentence_0') print('sentence_2 is', 'a paraphrase' if pred_2 else 'not a paraphrase', 'of sentence_0') ``` 3. See the following error message: ``` 2019-11-01 11:57:45.714479: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: Incompatible shapes: [512,512,32,12] vs. [512,1023,32,12] [[{{node tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/add_3}}]] 1/114 [..............................] - ETA: 1:02:58Traceback (most recent call last): File "run_tf_glue_xlnet.py", line 63, in <module> validation_data=valid_dataset, validation_steps=valid_steps) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit use_multiprocessing=use_multiprocessing) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 324, in fit total_epochs=epochs) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch batch_outs = execution_function(iterator) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function distributed_function(input_fn)) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__ result = self._call(*args, **kwds) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 520, in _call return self._stateless_fn(*args, **kwds) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1823, in __call__ return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1141, in _filtered_call self.captured_inputs) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat ctx, args, cancellation_manager=cancellation_manager) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 511, in call ctx=ctx) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute six.raise_from(core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [512,512,32,12] vs. 
[512,1023,32,12] [[node tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/add_3 (defined at /data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_distributed_function_67728] ``` ## Expected behavior The script should run just as `run_tf_glue.py` ## Environment OS: CentOS-7 Python version: 3.6.9 PyTorch version: 1.2.0 PyTorch Transformers version (or branch): 2.1.1 (master from git) Using GPU ? Yes Distributed of parallel setup ? No Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
11-01-2019 16:10:45
11-01-2019 16:10:45
I have a fix for this in #1763 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,692
closed
TFXLNet int32 to float promotion error
## 🐛 Bug Using TFXLNet on GLUE datasets results in a TypeError when computing the input_mask because the attention_mask is represented as an int32 and is not automatically cast or promoted to a float. Model I am using (Bert, XLNet....): TFXLNet Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. Use the attached run_tf_glue_xlnet.py script ``` Traceback (most recent call last): File "run_tf_glue_xlnet.py", line 63, in <module> validation_data=valid_dataset, validation_steps=valid_steps) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit use_multiprocessing=use_multiprocessing) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 224, in fit distribution_strategy=strategy) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 547, in _process_training_inputs use_multiprocessing=use_multiprocessing) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 594, in _process_inputs steps=steps) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2419, in _standardize_user_data all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2622, in _build_model_with_inputs self._set_inputs(cast_inputs) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2709, in _set_inputs outputs = self(inputs, **kwargs) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 842, in __call__ outputs = call_fn(cast_inputs, *args, **kwargs) File "/data/conda/envs/transformers/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper raise e.ag_error_metadata.to_exception(e) TypeError: in converted code: relative to /data/conda/envs/transformers/lib/python3.6/site-packages: transformers/modeling_tf_xlnet.py:907 call * transformer_outputs = self.transformer(inputs, **kwargs) tensorflow_core/python/keras/engine/base_layer.py:842 __call__ outputs = call_fn(cast_inputs, *args, **kwargs) transformers/modeling_tf_xlnet.py:542 call * input_mask = 1.0 - attention_mask tensorflow_core/python/ops/math_ops.py:924 r_binary_op_wrapper x = ops.convert_to_tensor(x, dtype=y.dtype.base_dtype, name="x") tensorflow_core/python/framework/ops.py:1184 convert_to_tensor return convert_to_tensor_v2(value, dtype, preferred_dtype, name) tensorflow_core/python/framework/ops.py:1242 convert_to_tensor_v2 as_ref=False) tensorflow_core/python/framework/ops.py:1296 internal_convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) tensorflow_core/python/framework/tensor_conversion_registry.py:52 _default_conversion_function return constant_op.constant(value, dtype, name=name) tensorflow_core/python/framework/constant_op.py:227 constant allow_broadcast=True) 
tensorflow_core/python/framework/constant_op.py:265 _constant_impl allow_broadcast=allow_broadcast)) tensorflow_core/python/framework/tensor_util.py:449 make_tensor_proto _AssertCompatible(values, dtype) tensorflow_core/python/framework/tensor_util.py:331 _AssertCompatible (dtype.name, repr(mismatch), type(mismatch).__name__)) TypeError: Expected int32, got 1.0 of type 'float' instead. ``` ## Expected behavior The script should run the same as tf_run_glue.py ## Environment * OS: CentOS-7 * Python version: 3.6.9 * PyTorch version: 1.2.0 * PyTorch Transformers version (or branch): 2.1.1 (master from git) * Using GPU ? Yes * Distributed of parallel setup ? No * Any other relevant information: ## Additional context run_tf_glue_xlnet.py: ```python import os import tensorflow as tf import tensorflow_datasets from transformers import XLNetForSequenceClassification, TFXLNetForSequenceClassification, glue_convert_examples_to_features, XLNetTokenizer # script parameters BATCH_SIZE = 32 EVAL_BATCH_SIZE = BATCH_SIZE * 2 USE_XLA = False USE_AMP = False # tf.config.optimizer.set_jit(USE_XLA) # tf.config.optimizer.set_experimental_options({"auto_mixed_precision": USE_AMP}) # Load tokenizer and model from pretrained model/vocabulary tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased') model = TFXLNetForSequenceClassification.from_pretrained('xlnet-base-cased') # Load dataset via TensorFlow Datasets data, info = tensorflow_datasets.load('glue/mrpc', with_info=True) train_examples = info.splits['train'].num_examples valid_examples = info.splits['validation'].num_examples # Prepare dataset for GLUE as a tf.data.Dataset instance train_dataset = glue_convert_examples_to_features( data['train'], tokenizer, max_length=512, output_mode="classification", task='mrpc', pad_on_left=True, # pad on the left for xlnet pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0], pad_token_segment_id=4 ) valid_dataset = glue_convert_examples_to_features( data['validation'], tokenizer, max_length=512, output_mode="classification", task='mrpc', pad_on_left=True, # pad on the left for xlnet pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0], pad_token_segment_id=4 ) train_dataset = train_dataset.shuffle(128).batch(BATCH_SIZE).repeat(-1) valid_dataset = valid_dataset.batch(EVAL_BATCH_SIZE) # Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule opt = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08) if USE_AMP: # loss scaling is currently required when using mixed precision opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, 'dynamic') loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=opt, loss=loss, metrics=[metric]) # Train and evaluate using tf.keras.Model.fit() train_steps = train_examples//BATCH_SIZE valid_steps = valid_examples//EVAL_BATCH_SIZE history = model.fit(train_dataset, epochs=2, steps_per_epoch=train_steps, validation_data=valid_dataset, validation_steps=valid_steps) # Save TF2 model os.makedirs('./save/', exist_ok=True) model.save_pretrained('./save/') # Load the TensorFlow model in PyTorch for inspection pytorch_model = XLNetForSequenceClassification.from_pretrained('./save/', from_tf=True) # Quickly test a few predictions - MRPC is a paraphrasing task, let's see if our model learned the task sentence_0 = 'This research was consistent with his findings.' 
sentence_1 = 'His findings were compatible with this research.' sentence_2 = 'His findings were not compatible with this research.' inputs_1 = tokenizer.encode_plus(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt') inputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt') pred_1 = pytorch_model(**inputs_1)[0].argmax().item() pred_2 = pytorch_model(**inputs_2)[0].argmax().item() print('sentence_1 is', 'a paraphrase' if pred_1 else 'not a paraphrase', 'of sentence_0') print('sentence_2 is', 'a paraphrase' if pred_2 else 'not a paraphrase', 'of sentence_0') ``` [run_tf_glue_xlnet.zip](https://github.com/huggingface/transformers/files/3798525/run_tf_glue_xlnet.zip)
11-01-2019 15:37:32
11-01-2019 15:37:32
Problem can be addressed by updating line 542 in transformers/modeling_tf_xlnet.py to : ```python input_mask = 1.0 - tf.cast(attention_mask, dtype=dtype_float) ```<|||||>The above solution is not present in the current version. Why ? It is still showing the same error as a result.<|||||>@PradyumnaGupta the last version of transformers (v2.3.0) was released the 20th of december. The above solution was merged the 21st of december, and is therefore not in the latest pypi release. Until we release a new version on pypi, feel free to install from source to obtain the most up-to-date bugfixes: ```py pip install git+https://github.com/huggingface/transformers ```<|||||>@LysandreJik i tried !pip install git+https://github.com/huggingface/transformers.git i was trying xlnet,here is the code used for modeling : `def create_model(): q_id = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32) #, dtype=tf.int32 a_id = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32) q_mask = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.float32) a_mask = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.float32) #, dtype=tf.float32 q_atn = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32) a_atn = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32) from_tf=True #config = BertConfig() # print(config) to see settings config = XLNetConfig() config.d_inner = 3072 config.n_head = 12 config.d_model = 768 config.n_layer = 12 config.output_hidden_states = False # Set to True to obtain hidden states # caution: when using e.g. XLNet, XLNetConfig() will automatically use xlnet-large config # normally ".from_pretrained('bert-base-uncased')", but because of no internet, the # pretrained model has been downloaded manually and uploaded to kaggle. #bert_model = TFBertModel.from_pretrained(BERT_PATH+'bert-base-uncased-tf_model.h5', config=config) #bert_model = TFBertModel.from_pretrained('xlnet-base-cased') #bert_model = XLNetModel(config) bert_model = TFXLNetModel.from_pretrained('xlnet-base-cased',config = config) #bert_model = XLNetForMultipleChoice.from_pretrained('xlnet-base-cased') # if config.output_hidden_states = True, obtain hidden states via bert_model(...)[-1] q_embedding = bert_model(q_id, attention_mask=q_mask, token_type_ids=q_atn)[0] a_embedding = bert_model(a_id, attention_mask=a_mask, token_type_ids=a_atn)[0] q = tf.keras.layers.GlobalAveragePooling1D()(q_embedding) a = tf.keras.layers.GlobalAveragePooling1D()(a_embedding) x = tf.keras.layers.Concatenate()([q, a]) x = tf.keras.layers.Dropout(0.2)(x) x = tf.keras.layers.Dense(30, activation='sigmoid')(x) model = tf.keras.models.Model(inputs=[q_id, q_mask, q_atn, a_id, a_mask, a_atn,], outputs=x) return model` i was doing group k fold,so after 1st fold training i get TypeError: Expected int32, got 1.0 of type 'float' instead. how can i solve this issue?
transformers
1,691
closed
ALBERT model implementation is finished on the SQuAD QA task, but some formatting differs from huggingface (specifically for the ALBERT QA task)
# 🌟New model addition ## Model description Hey, it seems your community has already created a branch for ALBERT. I also reimplemented the code and converted the weights from TF Hub to PyTorch by referencing your previous convert_tf_xxx.py scripts, and I reproduced the author's performance on SQuAD 1.1 for the albert-base model. The reproduced results are shown below: SQuAD 1.1 EM: 79.98, F1 score: 87.98. The paper's performance: SQuAD 1.1 EM: 82.3, F1 score: 89.30. ## Open Source status * [x] the model implementation is available: (give details) * [x] the model weights are available: (give details) ## Additional context The code is here: https://github.com/pohanchi/huggingface_albert. Since I referenced your code modules from other older models, I just want to ask: what should I do to make my repo public safely?
11-01-2019 15:21:03
11-01-2019 15:21:03
transformers
1,690
closed
T5
# 🌟New model addition [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) ## Open Source status Code and model are open sourced
11-01-2019 15:14:12
11-01-2019 15:14:12
duplicate of #1617