| column | dtype | values |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
2,391
closed
What dataset was used for the NER results reported in the docs for bert/roberta-large-cased and distilbert-base-uncased models?
## ❓ Questions & Help Regarding [this section in the docs](https://huggingface.co/transformers/examples.html#comparing-bert-large-cased-roberta-large-cased-and-distilbert-base-uncased) and the NER results using bert-large-cased, roberta-large-cased, and distilbert-base-uncased ... **What dataset was used?** When I try them with the GermanEval2014 dataset, the results are inferior to those of the multi-lingual models (which makes sense) ... so my question is: on what dataset(s) were these models fine-tuned to get the excellent F1 scores reported in the docs?
01-03-2020 01:11:23
01-03-2020 01:11:23
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,390
closed
Pipelines support
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): I'm using roberta-base model for feature extraction through pipeline functionality. Language I am using the model on English texts. The problem arise when using: * [x ] my own modified scripts: (give details) ``` from transformers import pipeline import torch #torch.set_default_tensor_type('torch.cuda.FloatTensor') nlp = pipeline('feature-extraction', model='roberta-base', tokenizer='roberta-base', device=0) def encode(input): with nlp.device_placement(): return np.array(nlp(input)).mean(axis=1) results = encode(['cybersecurity', 'cyber security', 'agriculture', 'data']) ``` ## To Reproduce Steps to reproduce the behavior: 1. Just run code above. Error details ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-6-f0b27f7cd838> in <module> ----> 1 encode(['cybersecurity', 'cyber security', 'agriculture', 'data']).shape <ipython-input-5-a0628a1cb908> in encode(input) 12 def encode(input): 13 with nlp.device_placement(): ---> 14 return np.array(nlp(input)).mean(axis=1) ~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs) 442 443 def __call__(self, *args, **kwargs): --> 444 return super().__call__(*args, **kwargs).tolist() 445 446 ~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs) 402 # Filter out features not available on specific models 403 inputs = self.inputs_for_model(inputs) --> 404 return self._forward(inputs) 405 406 def _forward(self, inputs): ~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in _forward(self, inputs) 417 else: 418 with torch.no_grad(): --> 419 predictions = self.model(**inputs)[0].cpu() 420 421 return predictions.numpy() ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask) 733 head_mask = [None] * self.config.num_hidden_layers 734 --> 735 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds) 736 encoder_outputs = self.encoder(embedding_output, 737 attention_mask=extended_attention_mask, ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 68 token_type_ids=token_type_ids, 69 position_ids=position_ids, ---> 70 inputs_embeds=inputs_embeds) 71 72 ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 184 185 if inputs_embeds is None: --> 186 inputs_embeds = self.word_embeddings(input_ids) 187 position_embeddings = self.position_embeddings(position_ids) 188 token_type_embeddings 
= self.token_type_embeddings(token_type_ids) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input) 112 return F.embedding( 113 input, self.weight, self.padding_idx, self.max_norm, --> 114 self.norm_type, self.scale_grad_by_freq, self.sparse) 115 116 def extra_repr(self): ~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1482 # remove once script supports set_grad_enabled 1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1485 1486 RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select ``` ## Expected behavior Sentences encoded properly. ## Environment * OS: Ubuntu 18.01 * Python version: Python 3.6.5 :: Anaconda, Inc. * PyTorch version: 1.3.1 * PyTorch Transformers version (or branch): 2.3.0 and master * Using GPU ? yes * Distributed of parallel setup ? No * Any other relevant information: ## Additional context `torch.set_default_tensor_type('torch.cuda.FloatTensor')` Uncommenting such line solves issue partially. Issue with CUDA tensor disappears but those sentences could not be encoded properly ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-10-f0b27f7cd838> in <module> ----> 1 encode(['cybersecurity', 'cyber security', 'agriculture', 'data']).shape <ipython-input-9-138f4526e218> in encode(input) 12 def encode(input): 13 with nlp.device_placement(): ---> 14 return np.array(nlp(input)).mean(axis=1) ~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs) 442 443 def __call__(self, *args, **kwargs): --> 444 return super().__call__(*args, **kwargs).tolist() 445 446 ~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs) 402 # Filter out features not available on specific models 403 inputs = self.inputs_for_model(inputs) --> 404 return self._forward(inputs) 405 406 def _forward(self, inputs): ~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in _forward(self, inputs) 417 else: 418 with torch.no_grad(): --> 419 predictions = self.model(**inputs)[0].cpu() 420 421 return predictions.numpy() ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask) 733 head_mask = [None] * self.config.num_hidden_layers 734 --> 735 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds) 736 encoder_outputs = self.encoder(embedding_output, 737 
attention_mask=extended_attention_mask, ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 68 token_type_ids=token_type_ids, 69 position_ids=position_ids, ---> 70 inputs_embeds=inputs_embeds) 71 72 ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 188 token_type_embeddings = self.token_type_embeddings(token_type_ids) 189 --> 190 embeddings = inputs_embeds + position_embeddings + token_type_embeddings 191 embeddings = self.LayerNorm(embeddings) 192 embeddings = self.dropout(embeddings) RuntimeError: CUDA error: device-side assert triggered ``` even with CUDA_LAUNCH_BLOCKING=1 If we try to encode sentence by sentence everything works.
01-02-2020 14:22:35
01-02-2020 14:22:35
Hi @AlexanderKUA, thanks for reporting this issue. Can you give a try to the following commit 088daf78d45bed144fe2af84b538f573573bd01d and let us know if it solves your issue ? ```python from transformers import pipeline nlp = pipeline('feature-extraction', model='distilbert-base-uncased', device=0) print(nlp(['cybersecurity', 'cyber security', 'agriculture', 'data'])) ``` Thanks, Morgan<|||||>Hi @mfuntowicz I checked your commit. Yes, it solves the issue. Thanks a lot.
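For readers who hit this before upgrading, the same feature extraction can be reproduced without the pipeline by keeping the encoded inputs on the model's device. This is only a sketch of the manual workaround, written against a recent transformers API (which differs from the 2.3.0 version in the traceback above), not the fix that was merged:

```python
import torch
from transformers import AutoModel, AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base").to(device)
model.eval()

def encode(sentences):
    # Tokenize as a padded batch and move every tensor to the model's device.
    inputs = tokenizer(sentences, padding=True, return_tensors="pt").to(device)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_dim)
    # Mean-pool over the sequence dimension, as in the original snippet.
    return hidden.mean(dim=1).cpu().numpy()

print(encode(["cybersecurity", "cyber security", "agriculture", "data"]).shape)  # (4, 768)
```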
transformers
2,389
closed
Update config.is_decoder=True before initializing the decoder
Currently the `PreTrainedEncoderDecoder` class fails to initialize the "cross-attention layer" since it updates `decoder.config.is_decoder = True` after decoder initialization.
01-02-2020 13:38:11
01-02-2020 13:38:11
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=h1) Report > Merging [#2389](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9261c7f771fccfa2a2cb78ae544adef2f6eb402b?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2389/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2389 +/- ## ======================================= Coverage 73.24% 73.24% ======================================= Files 87 87 Lines 15001 15001 ======================================= Hits 10988 10988 Misses 4013 4013 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=footer). Last update [9261c7f...9261c7f](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>But these are not decoders, they're encoders with an additional language modeling head?<|||||>> But these are not decoders, they're encoders with an additional language modeling head? Oh, thanks to point out my mistake, I should actually modify the `modeling_encoder_decoder.py` file. I accidentally closed this pull request and made a new one #2435 .
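For context, the idea behind this fix, expressed with the present-day API rather than the `PreTrainedEncoderDecoder` code path touched by this PR, is to flip the decoder flags on the config before the decoder weights are instantiated, so that the cross-attention layers actually get built. A hedged sketch:

```python
# Illustrative only: mark the decoder config as a decoder with cross-attention
# *before* loading the weights, so the cross-attention layers are created.
from transformers import AutoConfig, AutoModelForCausalLM

decoder_config = AutoConfig.from_pretrained("bert-base-uncased")
decoder_config.is_decoder = True            # flip the flag before initialization ...
decoder_config.add_cross_attention = True   # ... so cross-attention layers exist
decoder = AutoModelForCausalLM.from_pretrained("bert-base-uncased", config=decoder_config)
```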
transformers
2,388
closed
Can't load finetuned model properly.
I am making a model for joint bert. After trained my model, i try to eval before saving and it gives with %95 accuracy. But the problem is when i save this trained model and load it, i get the awful result. I hope you can help me about finding why i cant load properly. Here is some part of my code ``` class JointBertClassification(BertPreTrainedModel): def __init__(self, model_name, config, num_intent_labels, num_slot_labels, args): super(JointBertClassification, self).__init__(config) self.num_intent_labels = num_intent_labels self.num_slot_labels = num_slot_labels dropout_rate = args[ "dropout_rate" ] self.bert = BertModel.from_pretrained( model_name, config=self.bert_config ) # Load pretrained bert self.intent_classifier = IntentClassifier( config.hidden_size, num_intent_labels, dropout_rate ) self.slot_classifier = SlotClassifier( config.hidden_size, num_slot_labels, dropout_rate ) # self.init_weights() def forward( self, input_ids, attention_mask, token_type_ids, intent_label_ids, slot_label_ids, ): ... ``` ``` class JointModel: def __init__( self, model_type, model_name, intents=None, slots=None, args=None, use_cuda=None ): """ Initializes a Joint Model Args: model_type: The type of model model_name: Default Transformer model name or path to directory containing Transformer model file intents (optional): A list of all Intent labels. If not given ATIS intents will set as default. slots (optional): A list of all Slot labels. If not given ATIS slots will set as default. args (optional): Default args will be used if thi parameter is not provided. If provided, it should be a dict containing the args that should be changed in the default args. use_cuda (optional): Use GPU if available. Setting to False will force model to use CPU only. """ MODEL_CLASSES = {"bert": (BertConfig, JointBertClassification, BertTokenizer)} self.config_class, self.model_class, tokenizer_class = MODEL_CLASSES[model_type] if intents: self.intent_labels = intents else: self.intent_labels = pd.read_csv( "jointbert/data/atis/vocab.intent", header=None, index_col=0 ).index.tolist() self.num_intents = len(self.intent_labels) if slots: self.slot_labels = slots else: self.slot_labels = pd.read_csv( "jointbert/data/atis/vocab.slot", header=None, index_col=0 ).index.tolist() self.num_slots = len(self.slot_labels) self.tokenizer = tokenizer_class.from_pretrained(model_name) self.bert_config = self.config_class.from_pretrained(model_name) self.model = self.model_class( model_name, self.bert_config, num_slot_labels=self.num_slots, num_intent_labels=self.num_intents, args={"dropout_rate": 0.2}, ) if use_cuda: if torch.cuda.is_available(): self.device = torch.device("cuda") else: raise ValueError( "'use_cuda' set to True when cuda is unavaiable. 
Make sure CUDA is avaiable or set use_cuda=False" ) else: self.device = "cpu" self.results = {} self.args = { "output_dir": "outputs/", "cache_dir": "cache_dir/", "fp16": False, "fp16_opt_level": "O1", "max_seq_length": 128, "train_batch_size": 8, "gradient_accumulation_steps": 1, "eval_batch_size": 8, "num_train_epochs": 1, "weight_decay": 0, "learning_rate": 4e-5, "adam_epsilon": 1e-8, "warmup_ratio": 0.06, "warmup_steps": 0, "max_grad_norm": 1.0, "logging_steps": 50, "save_steps": 2000, "evaluate_during_training": False, "overwrite_output_dir": False, "reprocess_input_data": False, "process_count": 1, "n_gpu": 1, "silent": False, } if args: self.args.update(args) self.args["model_name"] = model_name self.args["model_type"] = model_type self.pad_token_label_id = CrossEntropyLoss().ignore_index ``` The Saving Part after training ``` def train_model( self, train_data, output_dir=None, show_running_loss=True, args=None, eval_df=None, ): if args: self.args.update(args) if self.args["silent"]: show_running_loss = False if not output_dir: output_dir = self.args["output_dir"] if ( os.path.exists(output_dir) and os.listdir(output_dir) and not self.args["overwrite_output_dir"] ): raise ValueError("--") self._move_model_to_device() train_dataset = self.load_and_cache_examples(train_data) global_set, tr_loss = self.train( train_dataset, output_dir, show_running_loss=show_running_loss, eval_df=eval_df, ) if not os.path.exists(output_dir): os.makedirs(output_dir) model_to_save = ( self.model.module if hasattr(self.model, "module") else self.model ) model_to_save.save_pretrained(output_dir) self.tokenizer.save_pretrained(output_dir) torch.save(self.args, os.path.join(output_dir, "training_args.bin")) print( "Training of {} model complete. Saved to {}. Training Loss : {}".format( self.args["model_type"], output_dir, tr_loss ) ) ```
01-02-2020 12:56:04
01-02-2020 12:56:04
Hi, I met the same problem as you mentioned! Do you fix it? my question is here, https://github.com/huggingface/transformers/issues/2402<|||||>No, not yet :/ @trueto I think it saves somehow wrong model but i am not sure. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
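One likely cause here is that the custom `__init__` takes a model name and calls `BertModel.from_pretrained` itself, so reloading the saved directory rebuilds the encoder from the original pretrained weights instead of restoring the fine-tuned ones. Below is a hedged sketch of a pattern that round-trips through `save_pretrained`/`from_pretrained`; the class name, label count, and output directory are made up for illustration, and it assumes a recent transformers release:

```python
import torch.nn as nn
from transformers import BertModel, BertPreTrainedModel

class JointBert(BertPreTrainedModel):          # hypothetical, simplified head
    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)          # built from the config, not via from_pretrained
        self.intent_classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids, attention_mask=None):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        return self.intent_classifier(pooled)

# Fine-tuning starts from the pretrained encoder and saves everything afterwards.
model = JointBert.from_pretrained("bert-base-uncased", num_labels=7)
model.save_pretrained("outputs/")
# Reloading restores the fine-tuned encoder *and* the classifier head.
reloaded = JointBert.from_pretrained("outputs/")
```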
transformers
2,387
closed
Pre-trained model returns different outputs (random outputs)
## ❓ Questions & Help Hello, I have recently been playing around with the Hugging Face library and wrote a simple script for the question answering task. For that I used the TFXLNetForQuestionAnsweringSimple (pre-trained) model, but I get different outputs for the same inputs and model each time I run the program. Did I miss something? Here is my script:
```python
import tensorflow as tf
from transformers import XLNetTokenizer, TFXLNetForQuestionAnsweringSimple

context = "Jim Henson was a puppeteer"
question = "Who was Jim Henson ?"

# XLNet
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")

en_plus = tokenizer.encode_plus(context, question, add_special_tokens=True)
en = en_plus['input_ids']
token_type_ids = en_plus['token_type_ids']

input_ids = tf.constant([en])
segments_tensors = tf.constant([token_type_ids])

outputs = model(input_ids)
start_scores, end_scores = outputs[:2]
ss = tf.argmax(start_scores.numpy()[0]).numpy()
es = tf.argmax(end_scores.numpy()[0]).numpy()
answer = tokenizer.decode(en[ss: es + 1], clean_up_tokenization_spaces=True)
print(answer)
```
Thank you in advance for your help.
01-02-2020 12:17:56
01-02-2020 12:17:56
I found too. "last_hidden_states" was not fixed when I reload pretrain model. I think we miss something. My question is here <https://github.com/huggingface/transformers/issues/2386>, maybe help you.<|||||>Hi @houdaM97, this is due to the fact that the pretrained archive `xlnet-base-cased` does not contain keys for the question answering head, only for the base transformer model. This means that the question answering head will be randomly initialized and will output different results at each run. In order to see which keys are missing, you can set the flag `output_loading_info` to `True` in the `from_pretrained` method: ```py model, loading_info = TFXLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased", output_loading_info=True) print("Loading info", loading_info) # Loading info {'missing_keys': ['qa_outputs'], 'unexpected_keys': ['lm_loss'], 'error_msgs': []} ``` Here you can see that the `qa_outputs` value is missing and that the `lm_loss` value was present in the checkpoint but not needed for that specific model. In order to use this model for question answering you would first need to fine-tune this `qa_outputs` layers to a question answering task like SQuAD (you can use the [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py) script for this). We have a few models which are already fine-tuned on SQuAD, the list is available [here](https://huggingface.co/transformers/pretrained_models.html) (look for squad). You can also use some community fine-tuned models, which are visible [here](https://huggingface.co/models).<|||||>Hi @LysandreJik , does the tensorflow version of run_squad.py exist?<|||||>Hi @houdaM97, not yet but I'm actively working on it, alongside other projects. I'm aiming at next week for the first working version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
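As a practical follow-up to the explanation above, deterministic answers without fine-tuning can be obtained from a checkpoint that was already fine-tuned on SQuAD. A hedged example, assuming a recent transformers release with the question-answering pipeline:

```python
# Hedged example: load a checkpoint already fine-tuned on SQuAD, so the QA head
# is not randomly initialized and the answer is stable across runs.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(question="Who was Jim Henson?", context="Jim Henson was a puppeteer")
print(result["answer"])  # e.g. "a puppeteer"
```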
transformers
2,386
closed
Different usage between BertModel and AlbertModel
## ❓ Questions & Help Hi~ ``` bert_path = 'D:/pretrain/pytorch/albert_base/' tokenizer = BertTokenizer.from_pretrained(bert_path) BERT = BertModel.from_pretrained(bert_path) ... with torch.no_grad(): last_hidden_states = BERT(input_ids)[0] ``` I found ```last_hidden_states``` was not fixed when I reload ```BertModel.from_pretrained(bert_path)```. ``` bert_path = 'D:/pretrain/pytorch/albert_base/' tokenizer = BertTokenizer.from_pretrained(bert_path) BERT = AlbertModel.from_pretrained(bert_path) ... with torch.no_grad(): last_hidden_states = BERT(input_ids)[0] ``` I found ```last_hidden_states ``` was fixed. But When I tried ``` bert_path = 'D:/pretrain/pytorch/chinese_roberta_wwm_ext/' tokenizer = BertTokenizer.from_pretrained(bert_path) BERT = RobertaModel.from_pretrained(bert_path) ... with torch.no_grad(): last_hidden_states = BERT(input_ids)[0] ``` I found ```last_hidden_states``` was still not fixed. ``` bert_path = 'D:/pretrain/pytorch/chinese_roberta_wwm_ext/' tokenizer = BertTokenizer.from_pretrained(bert_path) BERT = BertModel.from_pretrained(bert_path) ... with torch.no_grad(): last_hidden_states = BERT(input_ids)[0] ``` I found ```last_hidden_states``` was fixed. Is there any difference in their usage between BertModel, AlbertModel and RobertaModel? In my past projects, I used BERT(freeze)+LSTM. This is the first time to use ALBERT. Thanks~ <!-- A clear and concise description of the question. -->
01-02-2020 09:39:05
01-02-2020 09:39:05
Did you do model.eval() to disable dropout and norm before torch.no_grad()? <|||||>Yes. Because they didn‘t’ throw any exception, I'm a little confused about their usage. ``` import torch from transformers import BertTokenizer, BertModel from transformers import AlbertTokenizer, AlbertModel from transformers import RobertaTokenizer, RobertaModel device = 'cuda:0' # https://storage.googleapis.com/albert_models/albert_base_zh.tar.gz bert_path = 'D:/pretrain/pytorch/albert_base/' tokenizer = BertTokenizer.from_pretrained(bert_path) BERT = AlbertModel.from_pretrained(bert_path) # fixed ''' bert_path = 'D:/pretrain/pytorch/albert_base/' tokenizer = BertTokenizer.from_pretrained(bert_path) BERT = BertModel.from_pretrained(bert_path) # random output ''' ''' # https://drive.google.com/open?id=1eHM3l4fMo6DsQYGmey7UZGiTmQquHw25 bert_path = 'D:/pretrain/pytorch/chinese_roberta_wwm_ext/' tokenizer = BertTokenizer.from_pretrained(bert_path) BERT = BertModel.from_pretrained(bert_path) # fixed ''' ''' bert_path = 'D:/pretrain/pytorch/chinese_roberta_wwm_ext/' tokenizer = BertTokenizer.from_pretrained(bert_path) BERT = RobertaModel.from_pretrained(bert_path) # random output ''' BERT.eval() BERT = BERT.to(device) text_seqs = [] segments_ids = [] text_seq = tokenizer.convert_tokens_to_ids(['[CLS]', '我', '爱', '北', '京', '[SEP]', '[PAD]']) text_seqs.append(text_seq) segments_ids.append([0] * 7) text_seqs = torch.LongTensor(text_seqs).to(device) segments_ids = torch.LongTensor(segments_ids).to(device) mask_bert = torch.where(text_seqs == 0, torch.zeros_like(text_seqs), torch.ones_like(text_seqs)) with torch.no_grad(): sentence_features, _ = BERT(text_seqs, token_type_ids=segments_ids, attention_mask=mask_bert) sentence_features = sentence_features[-1] for i in sentence_features: print(i[:4]) ```<|||||>@renjunxiang, you seem to be using the *same pretrained* checkpoint for both BERT and ALBERT. This should crash as these models are not the same. Do you face the same issue when loading from pretrained checkpoints hosted on our S3 (`bert-base-cased` and `albert-base-v2` for example) ?<|||||>@LysandreJik Yes, I used same pretrained Chinese albert model provided by Google(```albert_base_zh.tar```) and I used ```convert_albert_original_tf_checkpoint_to_pytorch.py``` to transform the model. Because ```BertModel``` and ```AlbertModel``` didn‘t’ throw any exception, I thought they are interchangeable. Maybe the reason of random output is the missing key between ```BertModel``` and ```AlbertModel```? like <https://github.com/huggingface/transformers/issues/2387#issuecomment-571586232> ```bert-base-cased``` and ```albert-base-v2``` are constrained to the function(```BertModel``` and ```AlbertModel```), so they are not interchangeable. In my past projects, I used ```BertModel.from_pretrained``` to load pretrained model such as ```bert-base-chinese``` and ```chinese_roberta_wwm_ext```. I found ```RobertaModel``` could load ```chinese_roberta_wwm_ext``` and didn‘t’ throw any exception, but the output was random. So is there some different usage between ```RobertaModel``` and ```BertModel``` if I want to get the ```last_hidden_states```? In my mind Roberta is one of BERT. thanks~ <|||||>It's not really clear what you are trying to say. The models are obviously different, so use the appropriate init for the appropriate model (BERT for BERT weights, RoBERTa for RoBERTa weights). That being said, retrieving the last hidden states should be similar. 
You can compare the docs: - [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html#robertamodel) - [BERT](https://huggingface.co/transformers/model_doc/bert.html#bertmodel)<|||||>Thanks! I'll check it out.
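A related note for later readers: the Auto classes read the architecture from the checkpoint's `config.json`, which avoids silently pairing ALBERT weights with `BertModel` (the mismatched weights are then randomly initialized, which explains the changing outputs). A minimal sketch, assuming a recent transformers release; the local path is simply the one from the report:

```python
# Let the Auto classes pick the architecture recorded in config.json instead of
# guessing BertModel vs. AlbertModel, and inspect which weights failed to load.
from transformers import AutoModel, AutoTokenizer

bert_path = "D:/pretrain/pytorch/albert_base/"  # local checkpoint from the report
model, loading_info = AutoModel.from_pretrained(bert_path, output_loading_info=True)
tokenizer = AutoTokenizer.from_pretrained(bert_path)
print(loading_info["missing_keys"])  # non-empty => those layers are randomly initialized
model.eval()  # disable dropout before comparing outputs across runs
```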
transformers
2,385
closed
The method os.rename() in file_utils.py raises a PermissionError
## ❓ Questions & Help Has anyone else run into this problem? ![image](https://user-images.githubusercontent.com/22883367/71649688-af981c00-2d4b-11ea-811b-7fd9ebef4776.png)
01-02-2020 02:37:06
01-02-2020 02:37:06
I have the same problem when downloading the pre-trained tokenizer. I also need help!<|||||>> I have the same problem when downloading the pre-trained tokenizer. I also need help! Downloading online often runs into problems, so I download it first and use it locally.<|||||>Ok this should be solved on master now that #2384 is merged
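For later readers, a common workaround before the fix landed was to avoid the cache's temp-file rename on the affected machine altogether by saving the model to a plain local directory once and loading from there. A hedged sketch (the directory name is illustrative):

```python
# Hedged workaround sketch (pre-dates the fix in #2384): download once, save to a
# plain directory, and load from that local path afterwards, bypassing the cache.
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("./bert-base-uncased-local")
model.save_pretrained("./bert-base-uncased-local")

# Later runs load from the local copy without touching the download cache.
tokenizer = BertTokenizer.from_pretrained("./bert-base-uncased-local")
model = BertModel.from_pretrained("./bert-base-uncased-local")
```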
transformers
2,384
closed
Releasing file lock
`With` scope creates a file lock, which leads to the following error: INFO:filelock:Lock 1408081097608 released on C:\Users\dimag\.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084.lock Traceback (most recent call last): File "C:\Users\dimag\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_utils.py", line 398, in _from_pretrained resume_download=resume_download, File "C:\Users\dimag\Anaconda3\envs\pytorch\lib\site-packages\transformers\file_utils.py", line 212, in cached_path user_agent=user_agent, File "C:\Users\dimag\Anaconda3\envs\pytorch\lib\site-packages\transformers\file_utils.py", line 392, in get_from_cache os.rename(temp_file.name, cache_path) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\dimag\\.cache\\torch\\transformers\\tmpnhzxze8u' -> 'C:\\Users\\dimag\\.cache\\torch\\transformers\\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084'
01-01-2020 22:51:12
01-01-2020 22:51:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=h1) Report > Merging [#2384](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/80faf22b4ac194061a08fde09ad8b202118c151e?src=pr&el=desc) will **not change** coverage. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2384/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2384 +/- ## ======================================= Coverage 73.24% 73.24% ======================================= Files 87 87 Lines 14989 14989 ======================================= Hits 10979 10979 Misses 4010 4010 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2384/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `70.33% <100%> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=footer). Last update [80faf22...d0e594f](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@aaugustin could you have a quick look at this PR related to the filelock system?<|||||>This isn't directly related to the file lock system. Rather, it's related to moving the file rather than copying it. Given the current implementation, closing the file before moving it (which is all this PR does) looks safe to me. We're still within the lock-protected section. Could you take this opportunity remove the following two lines? ```python # we are copying the file before closing it, so flush to avoid truncation temp_file.flush() ``` Indeed, you're now closing the file before copying it. (To be honest, I should have removed them when I stopped copying the file and started moving it instead.)<|||||>@aaugustin I agree, `.flush()` is unnecessary, thanks for pointing it out. I've made the change.<|||||>Ok great thanks @dimagalat and @aaugustin
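For context, the pattern agreed on here (close the temporary file first, move it afterwards, all while still holding the lock) can be illustrated in isolation. The sketch below is not the transformers implementation; `atomic_download` and `fetch` are hypothetical names used only for illustration:

```python
# Standalone illustration of the close-before-rename pattern discussed in this PR.
# On Windows, renaming a file that is still open raises PermissionError (WinError 32).
import os
import tempfile

from filelock import FileLock  # third-party dependency also used by transformers

def atomic_download(fetch, cache_path):
    lock_path = cache_path + ".lock"
    with FileLock(lock_path):                       # still inside the lock-protected section
        if os.path.exists(cache_path):
            return cache_path
        with tempfile.NamedTemporaryFile(delete=False, dir=os.path.dirname(cache_path)) as tmp:
            fetch(tmp)                              # write the payload into the temp file
        # tmp is closed here, so the rename also succeeds on Windows.
        os.replace(tmp.name, cache_path)
    return cache_path
```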
transformers
2,383
closed
clarification on output
Hi, I am using bert_uncased on the following sentence: `Hello this is my dog`. To get attentions I use: `last_hidden_states, pooler_outputs, hidden_states, attentions = outputs`. `attentions` gives a tuple of 12 tensors, where each tensor is of size [1, 12, 5, 5]. I was wondering whether the 12 tensors in the tuple correspond to the attention heads or to the hidden layers. Thanks!
01-01-2020 22:05:59
01-01-2020 22:05:59
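For later readers: the tuple has one tensor per hidden layer, and the attention heads are the second dimension of each tensor. A minimal sketch with a recent transformers API (the shapes in the comments assume bert-base-uncased):

```python
# `attentions` is a tuple with one entry per *layer*; each entry has shape
# (batch_size, num_heads, seq_len, seq_len).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("Hello this is my dog", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

attentions = outputs.attentions
print(len(attentions))      # 12 -> one tensor per hidden layer
print(attentions[0].shape)  # (batch_size, num_heads, seq_len, seq_len)
```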
transformers
2,382
closed
Proposition to include community models in readme
01-01-2020 19:44:20
01-01-2020 19:44:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=h1) Report > Merging [#2382](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/629b22adcfe340c4e3babac83654da2fbd1bbf89?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2382/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2382 +/- ## ======================================= Coverage 73.24% 73.24% ======================================= Files 87 87 Lines 14989 14989 ======================================= Hits 10979 10979 Misses 4010 4010 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=footer). Last update [629b22a...a229a68](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,381
closed
how to use distilgpt2
Hi, I want to use distilgpt2. I cannot see the config file and modeling files; could you please assist me with how to use it? Thanks
01-01-2020 15:49:02
01-01-2020 15:49:02
Hi, you can use it as such: ```py from transformers import GPT2Model, GPT2Tokenizer model = GPT2Model.from_pretrained("distilgpt2") tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2") ``` You can see the list of available models in the [pretrained section of our documentation](https://huggingface.co/transformers/pretrained_models.html).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
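As a small follow-up to the loading snippet above, here is a hedged usage example that generates text with distilgpt2; it assumes a recent transformers release where `generate` is available on `GPT2LMHeadModel`:

```python
# Generate a short continuation with distilgpt2.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")

input_ids = tokenizer.encode("The Hugging Face library is", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=30, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```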
transformers
2,380
closed
errors encountered with run_lm_finetuning.py
Hi I am using run_lm_finetuning.py, I encountered the following issues: - block_size value is by default = -1, which creates the following error, can be solved by setting the default value to 512: ``` File "run_lm_finetuning.py", line 712, in <module> main() File "run_lm_finetuning.py", line 662, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_finetuning.py", line 198, in train train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer36/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 64, in __init__ "value, but got num_samples={}".format(self.num_samples)) ValueError: num_samples should be a positive integeral value, but got num_samples=0 ``` - global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0]) can crash, let assume the "args.model_name_or_path=gpt2" then the result of the expression is int(""), which will crash, maybe setting it to 0? - when running the script for bert model I got also the following error, I am using pytorch 1.2. ``` (transformer) rkarimi@italix17:/idiap/user/rkarimi/dev/lm_heads$ python run_lm_finetuning.py --output_dir=/idiap/temp/rkarimi/lm_heads/distilbert --model_type=distilbert --model_name_or_path=/idiap/temp/rkarimi/pretrained_transformers/bert_distil --do_train --train_data_file=/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.train.raw --do_eval --eval_data_file=/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.test.raw --mlm --block_size=511 To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html 01/02/2020 16:53:27 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0, distributed training: False, 16-bits training: False 01/02/2020 16:53:27 - INFO - transformers.configuration_utils - loading configuration file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/config.json 01/02/2020 16:53:27 - INFO - transformers.configuration_utils - Model config { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "finetuning_task": null, "hidden_dim": 3072, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "is_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "max_position_embeddings": 512, "n_heads": 12, "n_layers": 6, "num_labels": 2, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pruned_heads": {}, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "torchscript": false, "use_bfloat16": false, "vocab_size": 30522 } 01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - Model name '/idiap/temp/rkarimi/pretrained_transformers/bert_distil' not found in model shortcut name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). Assuming '/idiap/temp/rkarimi/pretrained_transformers/bert_distil' is a path or url to a directory containing tokenizer files. 01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/added_tokens.json. We won't load it. 01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/special_tokens_map.json. We won't load it. 
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/tokenizer_config.json. We won't load it. 01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - loading file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/vocab.txt 01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - loading file None 01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - loading file None 01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - loading file None 01/02/2020 16:53:27 - INFO - transformers.modeling_utils - loading weights file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/pytorch_model.bin 01/02/2020 16:53:28 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=511, cache_dir='', config_name='', device=device(type='cpu'), do_eval=True, do_lower_case=False, do_train=True, eval_all_checkpoints=False, eval_data_file='/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.test.raw', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='/idiap/temp/rkarimi/pretrained_transformers/bert_distil', model_type='distilbert', n_gpu=0, no_cuda=False, num_train_epochs=1.0, output_dir='/idiap/temp/rkarimi/lm_heads/distilbert', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=50, save_total_limit=None, seed=42, server_ip='', server_port='', tokenizer_name='', train_data_file='/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.train.raw', warmup_steps=0, weight_decay=0.0) 01/02/2020 16:53:28 - INFO - __main__ - Creating features from dataset file at /idiap/temp/rkarimi/resources/wikitext-2-raw 01/02/2020 16:53:53 - INFO - __main__ - Saving features into cached file /idiap/temp/rkarimi/pretrained_transformers/bert_distil_cached_lm_511_wiki.train.raw 01/02/2020 16:53:53 - INFO - __main__ - ***** Running training ***** 01/02/2020 16:53:53 - INFO - __main__ - Num examples = 4303 01/02/2020 16:53:53 - INFO - __main__ - Num Epochs = 1 01/02/2020 16:53:53 - INFO - __main__ - Instantaneous batch size per GPU = 4 01/02/2020 16:53:53 - INFO - __main__ - Total train batch size (w. 
parallel, distributed & accumulation) = 4 01/02/2020 16:53:53 - INFO - __main__ - Gradient Accumulation steps = 1 01/02/2020 16:53:53 - INFO - __main__ - Total optimization steps = 1076 01/02/2020 16:53:53 - INFO - __main__ - Continuing training from checkpoint, will skip to saved global_step 01/02/2020 16:53:53 - INFO - __main__ - Continuing training from epoch 0 01/02/2020 16:53:53 - INFO - __main__ - Continuing training from global step 0 01/02/2020 16:53:53 - INFO - __main__ - Will skip the first 0 steps in the first epoch Epoch: 0%| | 0/1 [00:00<?, ?it/sTraceback (most recent call last): | 0/1076 [00:00<?, ?it/s] File "run_lm_finetuning.py", line 738, in <module> main() File "run_lm_finetuning.py", line 688, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_finetuning.py", line 325, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/transformers/modeling_distilbert.py", line 540, in forward inputs_embeds=inputs_embeds) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/transformers/modeling_distilbert.py", line 477, in forward inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/transformers/modeling_distilbert.py", line 96, in forward position_embeddings = self.position_embeddings(position_ids) # (bs, max_seq_length, dim) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/functional.py", line 1467, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. at /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:237 ``` The issue will resolve by setting smaller block_size <= 510, it would be very nice to document this in the codes that one needs to set the block_size <= 510 as a temporary solution. thanks - In mask_tokens function, the following lines needs to be set to -1 not -100 which is the ignore_index used in the "BertForMaskedLM" model: labels[~masked_indices] = -100 => -1 thanks.
01-01-2020 15:29:54
01-01-2020 15:29:54
Hello I also got the same error while running BERT. Traceback (most recent call last): File "code/transformers-2.3.0/examples/run_lm_finetuning.py", line 713, in <module> main() File "code/transformers-2.3.0/examples/run_lm_finetuning.py", line 663, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "code/transformers-2.3.0/examples/run_lm_finetuning.py", line 268, in train global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0]) ValueError: invalid literal for int() with base 10: 'pytorch' Could anyone help?<|||||>@calusbr Hi, for the error you reported if you set global_step = 0 it should work. <|||||>Hi, thank you for raising this issue. Could you please let me know if 27c1b656cca75efa0cc414d3bf4e6aacf24829de fixed this issue by trying the updated script?<|||||>Hello, to solve this problem I added my checkpoint to a folder that has the same Transformer output. **new folder -> chekpoint-0** Folders: | chekpoint-0 | vocab.txt | pytorch_model.bin | config.json global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0]) **Result: global_step = 0** <|||||>@rabeehk hello! I am also faced with the "ValueError: num_samples should be a positive integeral value, but got num_samples=0", Have you fixed this problem? thank you~<|||||>@LysandreJik I tried it 2020-1-9, It seems that this problem "ValueError: num_samples should be a positive integeral value, but got num_samples=0" still exists...<|||||>Hi I tested it, it does fix the first issue, thanks, but as I wrote in the first email, there are a couple of more errors, currently I got this errors, thanks: (transformer) rkarimi@vgnc002:/idiap/user/rkarimi/dev/lm_heads$ python run_lm_original.py --output_dir=/idiap/temp/rkarimi/lm_heads/bert_original --model_type=bert --model_name_or_path=/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/ --do_train --train_data_file=/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.train.raw --do_eval --eval_data_file=/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.test.raw --mlm --block_size 510 --overwrite_output_dir --num_train_epochs 3 --evaluate_during_training 01/09/2020 09:37:59 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False 01/09/2020 09:37:59 - INFO - transformers.configuration_utils - loading configuration file /idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/config.json 01/09/2020 09:37:59 - INFO - transformers.configuration_utils - Model config { "attention_probs_dropout_prob": 0.1, "finetuning_task": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "num_labels": 2, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pruned_heads": {}, "torchscript": false, "type_vocab_size": 2, "use_bfloat16": false, "vocab_size": 30522 } 01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - Model name '/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, 
bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1). Assuming '/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/' is a path or url to a directory containing tokenizer files. 01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/added_tokens.json. We won't load it. 01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/special_tokens_map.json. We won't load it. 01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/tokenizer_config.json. We won't load it. 01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - loading file /idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/vocab.txt 01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - loading file None 01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - loading file None 01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - loading file None 01/09/2020 09:37:59 - INFO - transformers.modeling_utils - loading weights file /idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/pytorch_model.bin 01/09/2020 09:38:04 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias'] 01/09/2020 09:38:09 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=510, cache_dir='', config_name='', device=device(type='cuda'), do_eval=True, do_lower_case=False, do_train=True, eval_all_checkpoints=False, eval_data_file='/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.test.raw', evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/', model_type='bert', n_gpu=1, no_cuda=False, num_train_epochs=3.0, output_dir='/idiap/temp/rkarimi/lm_heads/bert_original', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=50, save_total_limit=None, seed=42, server_ip='', server_port='', tokenizer_name='', train_data_file='/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.train.raw', warmup_steps=0, weight_decay=0.0) 01/09/2020 09:38:09 - INFO - __main__ - Loading features from cached file /idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/_cached_lm_510_wiki.train.raw 01/09/2020 09:38:09 - INFO - __main__ - ***** Running training ***** 01/09/2020 09:38:09 - INFO - __main__ - Num examples = 4312 01/09/2020 09:38:09 - INFO - __main__ - Num Epochs = 3 01/09/2020 09:38:09 - INFO - __main__ - Instantaneous batch size per GPU = 4 01/09/2020 09:38:09 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4 01/09/2020 09:38:09 - INFO - __main__ - Gradient Accumulation steps = 1 01/09/2020 09:38:09 - INFO - __main__ - Total optimization steps = 3234 01/09/2020 09:38:09 - INFO - __main__ - Starting fine-tuning. 
Epoch: 0%| | 0/3 [00:00<?, ?it/s/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed. 
/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed. 
/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed. Traceback (most recent call last): File "run_lm_original.py", line 717, in <module> main() File "run_lm_original.py", line 667, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_original.py", line 316, in train loss.backward() File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: merge_sort: failed to synchronize: device-side assert triggered Epoch: 0%| | 0/3 [00:00<?, ?it/s] Iteration: 0%| Best Rabeeh On Tue, Jan 7, 2020 at 4:19 PM Lysandre Debut <[email protected]> wrote: > Hi, thank you for raising this issue. 
Could you please let me know if > 27c1b65 > <https://github.com/huggingface/transformers/commit/27c1b656cca75efa0cc414d3bf4e6aacf24829de> > fixed this issue by trying the updated script? > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/2380?email_source=notifications&email_token=ABP4ZCFDVP5F63P244QV3EDQ4SMPHA5CNFSM4KB3TOB2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEIJGNHA#issuecomment-571631260>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGGP5IA3UW4OEN6DLQ4SMPHANCNFSM4KB3TOBQ> > . > <|||||>@rabeehk, concerning your first issue: > block_size value is by default = -1, which creates the following error, can be solved by setting the default value to 512 [the very first usage of `args.block_size`](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L639-L642) is to check if it is a negative value (e.g. -1) and to put it to the maximum model length. Is this not working in your case? > The issue will resolve by setting smaller block_size <= 510, it would be very nice to document this in the codes that one needs to set the block_size <= 510 as a temporary solution. thanks This should be solved by the previously mentioned lines as well. > In mask_tokens function, the following lines needs to be set to -1 not -100 which is the ignore_index used in the "BertForMaskedLM" model: labels[~masked_indices] = -100 => -1 This is not the case anymore, as you can see in the [`BertForMaskedLM` source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1001). The examples are maintained to work with the current `master` branch, and not a specific release. If you want to run scripts with a specific version, you can get them from a specific version tag on GitHub, e.g. [version 2.3.0](https://github.com/huggingface/transformers/tree/v2.3.0). Please let me know if you can see why the block size doesn't seem to be set to the maximum value, I'll fix it if it is an issue with the script. Thank you @rabeehk!<|||||>@rabeehk Hi ! May I ask you that you fixed the problem ""ValueError: num_samples should be a positive integeral value, but got num_samples=0" because you set the "global_step = 0" ? like this: `try: # set global_step to gobal_step of last saved checkpoint from model path checkpoint_suffix = args.model_name_or_path.split("-")[-1].split("/")[0] global_step = int(checkpoint_suffix) epochs_trained = global_step // (len(train_dataloader) // args.gradient_accumulation_steps) steps_trained_in_current_epoch = global_step % (len(train_dataloader) // args.gradient_accumulation_steps)` Should I change the "global_step = int(checkpoint_suffix)" to "global_step = 0" ? thanks !<|||||>Hi No. You need to set block-size to a positive number try with 510 maybe. Best Rabeeh On Thu, Jan 9, 2020, 12:14 PM JiangYanting <[email protected]> wrote: > @rabeehk <https://github.com/rabeehk> Hi ! May I ask you that you fixed > the problem ""ValueError: num_samples should be a positive integeral value, > but got num_samples=0" because you set the "global_step = 0" ? 
like this: > > try: # set global_step to gobal_step of last saved checkpoint from model > path checkpoint_suffix = > args.model_name_or_path.split("-")[-1].split("/")[0] global_step = > int(checkpoint_suffix) epochs_trained = global_step // > (len(train_dataloader) // args.gradient_accumulation_steps) > steps_trained_in_current_epoch = global_step % (len(train_dataloader) // > args.gradient_accumulation_steps) > > Should I change the "global_step = int(checkpoint_suffix)" to "global_step > = 0" ? thanks ! > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/2380?email_source=notifications&email_token=ABP4ZCBCKG7SAYK4YPHVPFTQ44BJJA5CNFSM4KB3TOB2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEIP6HDQ#issuecomment-572515214>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCB3F7ZALBP6RG4HI63Q44BJJANCNFSM4KB3TOBQ> > . > <|||||>Changing from 512 to 510 worked for me. I would think that we should be able to use 512, the max size for Bert input? Or there something I'm overlooking? <|||||>Hi, I just encountered the same error finetuning a custom gpt-2 model with `run_language_modeling.py` on Colab. ``` Traceback (most recent call last): File "run_language_modeling.py", line 799, in <module> main() File "run_language_modeling.py", line 749, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_language_modeling.py", line 245, in train train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/sampler.py", line 94, in __init__ "value, but got num_samples={}".format(self.num_samples)) ValueError: num_samples should be a positive integer value, but got num_samples=0 ``` I solved by specifying the `--block_size`, as @rabeehk said. Might be worth mentioning that in [your docs](https://huggingface.co/transformers/examples.html#gpt-2-gpt-and-causal-language-modeling), or have a default setup that works out of the box ? I also had to dig into the code to find the `--should_continue` and `--overwrite_output_dir` flags to continue training, is there a page where that is discussed by any chance? As an aside, I can't seem to find a flag to print the loss during training? I see there is a log/save step every 500 iterations, but it doesn't give any of these stats. Is there something super obvious I am missing? Thanks in any case!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,379
closed
finetune transformer
Hi, I would greatly appreciate guidance on how to pretrain a transformer model like BERT from scratch (pretraining, not fine-tuning). Is there any code in your repo that does this? Thanks a lot for your help.
01-01-2020 10:11:01
01-01-2020 10:11:01
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,378
closed
added pad_to_max_length option to batch_encode_plus
12-31-2019 17:48:44
12-31-2019 17:48:44
Thanks @ameasure, do you think you could run the code quality tool as defined in the contributing guidelines so that the `check_code_quality` test passes?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,377
closed
Text generation on GPU: Moved the encoded_prompt to correct device
Moved the `encoded_prompt` to the correct device so that text generation works when using a GPU. Solves the problem mentioned in #227, #1414 and #2360.
12-31-2019 12:27:11
12-31-2019 12:27:11
Thank you @alberduris !
transformers
2,376
closed
Classification of sentence pair with two different languages
I have been working on multi-lingual sentence similarity (English-Hindi).
### For example:
> Sentence 1 (English)
> Sentence 2 (translation of Sentence 1 into Hindi)
> Sentence similarity score.

Any idea on how to train for sentence similarity using `xlm-mlm-xnli15-1024`?
12-31-2019 12:23:22
12-31-2019 12:23:22
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,375
closed
Is the position of the scheduler.step() correct?
## ❓ Questions & Help In the [lm_finetuning_file](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L285-L319), the scheduler used is "get_linear_schedule_with_warmup" which in turn uses the underlying "LambdaLR" [ref](https://github.com/huggingface/transformers/blob/594ca6deadb6bb79451c3093641e3c9e5dcfa446/src/transformers/optimization.py#L47); On a careful study of the file, the "scheduler" is being called for every batch, but that isn't what's as per the official PyTorch [docs](https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.LambdaLR) to change LR since they are calling it at "epoch-level", ``` >>> lambda1 = lambda epoch: epoch // 30 >>> lambda2 = lambda epoch: 0.95 ** epoch >>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2]) >>> for epoch in range(100): >>> train(...) >>> validate(...) >>> scheduler.step() ``` I meant it makes sense to me to do it at batch level change the lr etc but not sure why the PyTorch has it differently? Thanks.
12-31-2019 08:02:55
12-31-2019 08:02:55
Why was this issue left without an answer? I don't know the definitive answer, but it seems that this scheduler should be stepped at batch level, since there are other examples using batch level instead of epoch level: https://github.com/huggingface/transformers/blob/8e8384663d716d4b5a4f510070ff954fc0ba4a52/examples/research_projects/bert-loses-patience/run_glue_with_pabee.py. Anyway, LambdaLR is a generic LR scheduler which PyTorch provides in order to implement other LR schedulers. It is true that the PyTorch docs talk about epoch level, but there are other LR schedulers, like the [CyclicLR scheduler](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html), whose docs explicitly indicate that it has to be stepped at batch level. Since LambdaLR is a generic LR scheduler, it will need to be stepped at batch level or epoch level depending on the specific LR scheduler implemented with it. For the linear LR scheduler in this issue, I guess the correct approach is to step it at batch level (it could even be adapted to epoch level if you want), but looking at the scripts in the repository, they use batch level. A minimal sketch of this batch-level usage is shown below.
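Here is the minimal sketch referenced above; the model, data and step counts are placeholders, while `AdamW` and `get_linear_schedule_with_warmup` come from `transformers`:
```python
# Batch-level scheduler stepping, mirroring what the example training scripts do.
import torch
from transformers import AdamW, get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in for a transformer model
optimizer = AdamW(model.parameters(), lr=5e-5)
num_epochs, steps_per_epoch = 3, 100
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=num_epochs * steps_per_epoch
)

for epoch in range(num_epochs):
    for step in range(steps_per_epoch):
        inputs, labels = torch.randn(4, 10), torch.randint(0, 2, (4,))
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()       # stepped once per batch, not once per epoch
        optimizer.zero_grad()
```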
transformers
2,374
closed
Fine-tuning BertAbs on new dataset?
## 🚀 Feature Hi, I wonder if there can be a script for fine-tuning BertAbs on new dataset? Or if you have some hint to provide about this task? Not sure how to access loss from ```modeling_bertabs.py```. Thanks
12-31-2019 03:09:31
12-31-2019 03:09:31
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>duplicate of #2597, no update yet sadly.
transformers
2,373
closed
RuntimeError: The size of tensor a (30524) must match the size of tensor b (30522) at non-singleton dimension 2 --- run_lm_finetuning.py
## 🐛 Bug I am using ```run_lm_finetuning.py``` to fine-tune bert-base-uncased on my dataset and I am getting the following error: I also truncated my dataset to have num examples dividable by the batch_size. Note that fine-tuning gpt2 on the same dataset works fine. ``` 12/30/2019 17:23:28 - INFO - __main__ - ***** Running training ***** 12/30/2019 17:23:28 - INFO - __main__ - Num examples = 4048 12/30/2019 17:23:28 - INFO - __main__ - Num Epochs = 3 12/30/2019 17:23:28 - INFO - __main__ - Instantaneous batch size per GPU = 4 12/30/2019 17:23:28 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4 12/30/2019 17:23:28 - INFO - __main__ - Gradient Accumulation steps = 1 12/30/2019 17:23:28 - INFO - __main__ - Total optimization steps = 3036 Epoch: 0%| | 0/3 [00:00<?, ?it/sTraceback (most recent call last): | 0/1012 [00:00<?, ?it/s] File "run_lm_finetuning.py", line 722, in <module> main() File "run_lm_finetuning.py", line 672, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_finetuning.py", line 306, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_bert.py", line 990, in forward prediction_scores = self.cls(sequence_output) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_bert.py", line 496, in forward prediction_scores = self.predictions(sequence_output) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_bert.py", line 486, in forward hidden_states = self.decoder(hidden_states) + self.bias RuntimeError: The size of tensor a (30524) must match the size of tensor b (30522) at non-singleton dimension 2 ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Ubuntu * Python version: python3.6 * PyTorch version: 1.3.0 * PyTorch Transformers version (or branch): * Using GPU ? Yes * Distributed of parallel setup ? No * Any other relevant information:
12-31-2019 01:32:45
12-31-2019 01:32:45
Can you share the full command you are using to run the script?<|||||>> Can you share the full command you are using to run the script?

Hi, sure, this is the command (basically the same as in the documentation):
```
python run_lm_finetuning.py --output_dir=output --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm --output_dir=$OUTPUT_DIR/bert-fine --num_train_epochs 3
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,372
closed
What is the "could not find answer" warning in squad.py
Hello, I am trying to run run_squad.py for BERT (Italian cased) with an Italian version of SQuAD. During the creation of features from the dataset, some answers get skipped, as in the following: <img width="478" alt="Screenshot 2019-12-30 at 23 30 19" src="https://user-images.githubusercontent.com/26765504/71603304-81081e80-2b5c-11ea-8333-73608e3141a7.png"> Can you tell me why this is happening and whether it influences the overall accuracy of the training?
12-30-2019 22:31:58
12-30-2019 22:31:58
This means that the script that converts the examples to features can't find the answers it should be finding. Rather than trying to predict those, it ignores them. This means that these examples won't be used for training, reducing the total number of examples that will be used. If it is a small portion of the total number of examples, it shouldn't impact the resulting accuracy much. If it is a significant portion of the examples then it would be a good idea to look into it to see if there's a quick fix.<|||||>Hi @LysandreJik, thanks for the clarification. I noticed that for some of my data it happens that the the "text" field in "answers" field may differ from the one present in the "context" just because of some upper/lower letters mismatch. Do you think this could be avoided by using an uncased model?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@antocapp I had your same logs with a cased model. Now I'm using an uncased model, putting the flag `--do_lower_case` in [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) I expected not to have those warnings, instead they appeared anyway. I took a look in the doc and I saw that in [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py), examples are passed to the function `squad_convert_examples_to_features` [in this line](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py#L448). ```python features, dataset = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=args.max_seq_length, doc_stride=args.doc_stride, max_query_length=args.max_query_length, is_training=not evaluate, return_dataset="pt", threads=args.threads, ) ``` The tokenizer is created passing the argument `--do_lower_case` so it should tokenize putting the lower case to every token. Anyway the warning you see comes within [squad_convert_example_to_features](https://github.com/huggingface/transformers/blob/930153e7d2d658267b7630a047a4bfc85b86042d/src/transformers/data/processors/squad.py#L91) declaration. ```python def squad_convert_example_to_features( example, max_seq_length, doc_stride, max_query_length, padding_strategy, is_training ): features = [] if is_training and not example.is_impossible: # Get start and end position start_position = example.start_position end_position = example.end_position # If the answer cannot be found in the text, then skip this example. actual_text = " ".join(example.doc_tokens[start_position : (end_position + 1)]) cleaned_answer_text = " ".join(whitespace_tokenize(example.answer_text)) if actual_text.find(cleaned_answer_text) == -1: logger.warning("Could not find answer: '%s' vs. '%s'", actual_text, cleaned_answer_text) return [] tok_to_orig_index = [] orig_to_tok_index = [] all_doc_tokens = [] for (i, token) in enumerate(example.doc_tokens): orig_to_tok_index.append(len(all_doc_tokens)) sub_tokens = tokenizer.tokenize(token) for sub_token in sub_tokens: tok_to_orig_index.append(i) all_doc_tokens.append(sub_token) # code continues... ``` As you can see `actual_text` and `cleaned_answer_text` use `example.doc_tokens` and `example.answer_text` which already contain upper_case! 
`cleaned_answer_text` is searched within `actual_text` treating upper-case letters as different from lower-case letters, so an example like _'Mantenere i precetti' vs 'mantenere i precetti'_ (like the one you mentioned in the issue) would be discarded. Indeed the `tokenizer` hasn't tokenized anything yet at those lines, so even though the features could be created with lower case, that check makes the example get discarded, even if it could have been kept! So what I did is apply `lower()` to every field of each example before passing it to that function, changing [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) in this way:
```python
    # other code...
    else:
        processor = SquadV2Processor() if args.version_2_with_negative else SquadV1Processor()
        if evaluate:
            examples = processor.get_dev_examples(args.data_dir, filename=args.predict_file)
        else:
            examples = processor.get_train_examples(args.data_dir, filename=args.train_file)

        if args.do_lower_case:
            logger.info("Putting lower case to examples...")
            for example in examples:
                example.doc_tokens = [token.lower() for token in example.doc_tokens]
                example.question_text = example.question_text.lower()
                example.context_text = example.context_text.lower()
                if example.answer_text is not None:  # for dev set
                    example.answer_text = example.answer_text.lower()

    features, dataset = squad_convert_examples_to_features(
        examples=examples,
        tokenizer=tokenizer,
        max_seq_length=args.max_seq_length,
        doc_stride=args.doc_stride,
        max_query_length=args.max_query_length,
        is_training=not evaluate,
        return_dataset="pt",
        threads=args.threads,
    )
```
I don't know if this can improve the results, but it avoids some discarded examples for sure 😊 @LysandreJik, is this a bug 🐛? Or maybe there was another trivial way to fix this?
transformers
2,371
closed
Encounter an "index out of range problem"
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi! I'm currently using BERT to obtain sentence embeddings for Chinese text inputs. Things are fine for most cases. However when I am dealing with this text: ```python weibo_content = "貌似还是没有完全修复。 http://As.international.anger.grows.over.reports.of.mass.carnage.at.the.hands.of.the.Syrian.regime.a.U.N.Security.Council.draft.resolution.condemning.Syria.failed.to.be.adopted.Saturday.after.vetowielding.members.Russia.and.China.voted.against.it.Ambassadors.from.the.other.permanent.members.of.the.council..the.United.States.France.and.the.United.Kingdom..said.they.were.furious.at.Russia.and.China.for.failing.to.halt.the.worsening.bloody.violence.that.has.consumed.the.Middle.Eastern.nation.Thirteen.Security.Council.members.voted.in.favor.of.the.resolution.The.vote.was.a.major.diplomatic.setback.for.countries.hoping.to.send.a.unified.message.to.embattled.Syrian.President.Bashar.alAssad.and.also.for.opposition.groups.that.look.toward.the.United.Nations.for.support.Those.that.have.blocked.potentially.the.last.effort.to.resolve.this.peacefully.will.have.any.future.blood.spill.on.their.hands.U.S.Ambassador.Susan.Rice.told.CNN.The.people.of.Syria.have.yet.again.been.abandoned.by.this.Council.and.by.the.international.community.Some.Syrians.have.cried.out.for.international.action.to.stop.attacks.on.civilians.more.so.after.opposition.groups.said.at.least.321.civilians.were.killed.and.hundreds.wounded.in.the.city.of.Homs.in.the.past.two.days.The.opposition.Syrian.National.Council.blamed.government.forces.for.the.attack.in.Homs.calling.it.one.of.the.most.horrific.massacres.since.the.start.of.the.Syrian.uprising.Residential.buildings.and.homes.were.randomly.and.heavily.bombed.the.group.said.The.Local.Coordination.Committees.LCC.a.Syrian.opposition.group.said.90.people.had.been.killed.in.Syria.on.Saturday.including.61.in.Homs.10.in.Idlib.and.19.in.a.Damascus.suburb.In.a.bid.to.pressure.the.government.the.group.called.for.a.twoday.civil.strike.to.start.on.Sunday.Another.opposition.group.the.Syrian.Observatory.for.Human.Rights.reported.that.48.people.were.killed.across.Syria.on.Saturday.including.six.army.defectors.and.18.members.of.the.Syrian.security.forces" ``` something goes wrong... Okay, This text piece contains an url with many English words in it. But, I think using "bert-base-chinese" can still handle this situation. 
So the following code goes like: ```python import torch from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-chinese') model = BertModel.from_pretrained('./bert-base-chinese') model.eval() weibo_content = "[cls]" + weibo_content + "[sep]" # weibo_content is the target text to be extracted embeddings from tokenized_weibo_content = tokenizer.tokenize(weibo_content) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_weibo_content) segments_ids = [1] * len(tokenized_weibo_content) tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) with torch.no_grad(): encoded_layers, _ = model(tokens_tensor, segments_tensors) ``` In the last step of the above code, I meet with the following problem: ```python Traceback (most recent call last): File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-12-c7fe4edd73d7>", line 7, in <module> encoded_layers, _ = model(tokens_tensor, segments_tensors) File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\transformers\modeling_bert.py", line 735, in forward embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds) File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\transformers\modeling_bert.py", line 187, in forward position_embeddings = self.position_embeddings(position_ids) File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\modules\sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\functional.py", line 1484, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. at C:\w\1\s\tmp_conda_3.6_111945\conda\conda-bld\pytorch_1572952852006\work\aten\src\TH/generic/THTensorEvenMoreMath.cpp:418 ``` Does anyone know why this happens?
12-30-2019 12:30:39
12-30-2019 12:30:39
BTW, I manually downloaded the pretrained model, and save it in dir "./bert-base-chinese"<|||||>@Yuejiang-li To me, the error msg indicates that the seq generated after tokenization of "weibo_content" has more than 512 tokens. The 512 is the max num of tokens in one seq allowed for the BERT model (embedding layer input size). You have to separate the "weibo_content" into at least two shorter sentences and feed them separately. You can print "tokenized_weibo_content" to confirm.<|||||>@FacingBugs That's true. I forgot that point... Thank you so much!<|||||>> That's true. I forgot that point... Thank you so much! Hi Yuejiang, Can you please share the way you have separated the content? I'm facing the same problem. Kind regards
transformers
2,370
closed
Pipelines: add PoS support
## 🚀 Feature As `Pipelines` were recently added for many tasks including NER, Sentiment Analysis, it'd be great to also enable Part-of-Speech tagging. ## Motivation PoS tagging is a very useful task, and often used as an evaluating downstream task for new models. ## Additional context Current available tasks for `Pipelines` are described [here](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L831).
12-30-2019 11:08:48
12-30-2019 11:08:48
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>We now have a more general `TokenClassificationPipeline`, @arnaudmiribel (this is just an alias to the `NerPipeline`)
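For reference, a small usage sketch of the pipeline mentioned above. The default checkpoint performs NER; whether a PoS-tagged model is available on the hub to pass via the `model` argument is not guaranteed here:
```python
# Minimal illustration of the token-classification pipeline ("ner" alias).
from transformers import pipeline

tagger = pipeline("ner")  # downloads the default token-classification model
print(tagger("Hugging Face is based in New York City."))
```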
transformers
2,369
closed
few changes due to the torch version inconsistency in summarization example
This small change intends to fix issue #2297. It is essentially a version inconsistency issue. In torch 1.1.0, torch.gt outputs: _torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]])) tensor([[ 0, 1], [ 0, 0]], dtype=torch.uint8)_ while in 1.2.0 it outputs: _torch.ge(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]])) tensor([[True, True], [False, True]])_ Thus, I added a version-checking function and revised the tensor type used with tensor.gt(). A hypothetical sketch of this kind of guard is shown below.
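For illustration, a hypothetical sketch of such a version guard (not the actual diff in this PR; the helper names are made up):
```python
# Comparison ops return uint8 in torch < 1.2 and bool in torch >= 1.2,
# so code relying on the old dtype can cast explicitly.
import torch

def torch_version_at_least(major, minor):
    parts = torch.__version__.split(".")
    return (int(parts[0]), int(parts[1])) >= (major, minor)

def gt_as_uint8(a, b):
    result = torch.gt(a, b)
    if torch_version_at_least(1, 2):
        result = result.to(torch.uint8)  # restore the pre-1.2 dtype
    return result
```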
12-30-2019 10:37:17
12-30-2019 10:37:17
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,368
closed
Clarification regarding past/layer_past in GPT-2
## ❓ Questions & Help I'm hoping to get some clarification regarding how past/layer_past are meant to work in GPT-2. My prior impression was that the query/key/value at every layer (other than the first) should be influenced by all tokens the model is able to see. As such, it shouldn't make much sense to use pre-computed key/value activations from prior steps. Could you clarify this issue? (Hope my question makes sense.)
12-30-2019 07:29:42
12-30-2019 07:29:42
the `past` variable in GTP-2 stores all previously computed key and value vectors. Because GPT-2 uses masked self-attention only the query vectors of previous tokens are updated at every step, but not the key and value vectors. Therefore the `past` variable can be used to speed up decoding. To better understand how GPT-2 works, I highly recommend reading [the Illustrated GPT-2](http://jalammar.github.io/illustrated-gpt2) especially the part: **The Illustrated Masked Self-Attention**<|||||>Thanks for the response! I'm still not sure this is intuitive for me. The linked article mentions: "Now in the next iteration, when the model processes the word robot, it does not need to generate query, key, and value queries for the a token. It just reuses the ones it saved from the first iteration:", which seems to imply Q, K and V are reused, whereas it seems we're only (optionally) reusing K and V. On the other hand, I'm not seeing where in the code that masked self-attention only affected the query vectors. It seems to be that attention masking is applied to the scoring vectors at each layer, and that should affect the generated Q/K/V for all downstream layers. It feels like there may be some key part of the intuition I'm missing, so thanks for the help.<|||||>I read the article @patrickvonplaten pointed. I am still very confused about the usage of attention when there are "past" vectors. The dimension just doesn't match. 19 is the batch size here; I am using gpt-2 small. 40 is the encoding seq len; 23 is the decoding seq len. Input document past[0] size: torch.Size([2, 19, 12, 40, 64]) to-decode-sequence input embedding: torch.Size([19, 23, 768]) mask of the input document: torch.Size([19, 40]) mask of the to-decode-sequence: torch.Size([19, 23]) concat of two masks: torch.Size([19, 63]) // 63=23+40 concat doesn't work // _decoder_output = self.transformer(past=presents,attention_mask=concat_attention_mask, inputs_embeds=gt_input_embedding) fails to-decode-mask doesn't work // _decoder_output = self.transformer(past=presents,attention_mask=partial_oracle_trajectory_mask,inputs_embeds=gt_input_embedding ) fails Error message for concat: ` attention_mask = attention_mask.view(-1, input_shape[-1]) RuntimeError: shape '[-1, 23]' is invalid for input of size 1197` Error message for to-decode-mask only `RuntimeError: The size of tensor a (63) must match the size of tensor b (23) at non-singleton dimension 3 ` @zphang any idea? not sure if this correlates with what you said tho. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi @jiacheng-xu - sorry for answering that late. I think the problem with the attention mask was recently fixed in PR #3033. The issue was also mentioned in #3031 I think. Let me know if you still have problems when using the current version of master.
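To make the caching concrete, here is a minimal greedy-decoding sketch along the lines of the explanation above. It is a hand-written illustration assuming the transformers 2.x tuple outputs and the `past` keyword; the prompt and generation length are arbitrary:
```python
# Reuse `past` so that only the newly generated token is fed at each step.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

generated = tokenizer.encode("The Manhattan Bridge")
context = torch.tensor([generated])
past = None

with torch.no_grad():
    for _ in range(20):
        outputs = model(context, past=past)
        logits, past = outputs[0], outputs[1]  # past caches key/value vectors of all previous tokens
        token = torch.argmax(logits[0, -1, :])
        generated.append(token.item())
        context = token.view(1, 1)             # only the new token goes in next

print(tokenizer.decode(generated))
```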
transformers
2,367
closed
Load the google bert model(ckpt) from TFBertForPreTraining error
I want use the google chinese bert ckpt model in transforms, and env use tf2. the transforms can load the ckpt into pytorch model But I want load the ckpt into tf.keras model, How can I do this ? ![image](https://user-images.githubusercontent.com/12653212/71570221-ce5d9f00-2b0e-11ea-8595-7b8b22db87f2.png) model = BertForPreTraining.from_pretrained(checkpoint_path, from_tf=True, config=config) Success. But: ![image](https://user-images.githubusercontent.com/12653212/71570227-f0572180-2b0e-11ea-9978-f9c679a3e37b.png)
12-30-2019 06:17:59
12-30-2019 06:17:59
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,366
closed
How Can I load the google bert model(ckpt)?
TF2.0 ## ❓ Questions & Help import os pretrained_path = 'chinese_L-12_H-768_A-12' config_path = os.path.join(pretrained_path, 'bert_config.json') checkpoint_path = os.path.join(pretrained_path, 'bert_model.ckpt') vocab_path = os.path.join(pretrained_path, 'vocab.txt') config = BertConfig.from_json_file(config_path) model = TFBertForPreTraining.from_pretrained(checkpoint_path, config=config) ![image](https://user-images.githubusercontent.com/12653212/71568255-d3b3ed00-2b00-11ea-836b-5dc5ad7b9d86.png)  
12-30-2019 04:35:31
12-30-2019 04:35:31
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,365
closed
upgrading new transformer doesn't work
## ❓ Questions & Help Hi, I have pulled the repo again since a lot of stuff changed/added. When I try to use ```pip install --upgrade .``` command, nothing changes and I am stuck in the following step forever: ``` (py36) pytorch-transformers$ pip install --upgrade . Processing /home/pytorch-transformers ``` Plus, since a lot of folders are renamed and scripts moved to ```src``` folder, whenI try to do ```from transformers import BertTokenizer, BertModel, BertForMaskedLM``` I get following error: ``` ImportError Traceback (most recent call last) <ipython-input-1-34ecfe73cb1a> in <module> 1 import torch ----> 2 from transformers import BertTokenizer, BertModel, BertForMaskedLM, BertConfig, BertForPreTraining, BertConfig 3 4 # OPTIONAL: if you want to have more information on what's happening, activate the logger as follows 5 import logging ImportError: cannot import name 'BertTokenizer' from 'transformers' (unknown location) ``` Would you please help with this? What is the reason to move to src?
12-29-2019 21:23:57
12-29-2019 21:23:57
Previously you were relying on `transformers` being implicitly added to `PYTHONPATH` when you were working from the source of the repository. This breaks if you move to another directory, like `examples`. `pip install .` makes `transformers` available in your virtualenv regardless of where you're working. It's unclear to me why the installation is stuck. Could you run the following commands? ``` pip uninstall transformers pip --version pip install --verbose . ``` <|||||>Thanks for replying @aaugustin . my pip version is 19.1.1 However, I ended up cloning the whole repo again and install fresh. I will close it for now.<|||||>OK, that was going to be my next suggestion if the situation didn't improve!
transformers
2,364
closed
How to fine-tune PreTrainedEncoderDecoder on new dataset?
## ❓ Questions & Help Hi, Many thanks for your recent work implementing [this paper](https://arxiv.org/pdf/1907.12461.pdf). I wonder if you have or can provide script and documentation for fine-tuning PreTrainedEncoderDecoder on new dataset? Many thanks
12-29-2019 18:17:14
12-29-2019 18:17:14
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Seconded. Is there a way to fine-tune any seq2seq model in huggingface?<|||||>@Josh-Payne You can have a look at https://github.com/huggingface/transformers/blob/9df74b8bc42eedc496f7148b9370728054ca3b6a/src/transformers/modeling_encoder_decoder.py<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,363
closed
Finetuning on several tasks
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hello, Is it possible to finetune a Transformer on some dataset, and then finetune the model again on another dataset with a different number of output labels ? I tried this and got the following error: ``` RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([3, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([3]). ``` Thanks in advance
12-29-2019 16:47:10
12-29-2019 16:47:10
The `BertForSequenceClassification` model has a classifier head transforming the BERT output into `num_labels` outputs. So if you change the number of output labels, the checkpoint can't be loaded, as you saw. The only hack you could do is to load the fine-tuned model with the previous `num_labels`, then remove the classifier head, replace it with a new (re-initialized) classifier for your new `num_labels`, and fine-tune again. Still, your BERT model was fine-tuned on a first classification task that is not the same as the second classification task. So if the two tasks and datasets are too different, it's not certain it will learn anything... or that your previous fine-tuning will give anything decent after that... A minimal sketch of the head swap is shown below. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
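Here is the sketch referenced above; it is a hand-written illustration assuming the first fine-tuning used 2 labels and the new task needs 3, and the checkpoint path is a placeholder:
```python
# Load the old checkpoint with its original label count, then swap the head.
import torch.nn as nn
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "path/to/finetuned-2-label-checkpoint", num_labels=2
)

# Replace the classification head with a freshly initialized one for 3 labels.
model.num_labels = 3
model.config.num_labels = 3
model.classifier = nn.Linear(model.config.hidden_size, 3)

# ...then fine-tune again on the new dataset as usual.
```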
transformers
2,362
closed
Why does ALBERT have a print statement during forward?
## ❓ Questions & Help both AlbertTransformer and AlbertLayerGroup have print statements in the forward method, which messes up logging / printing during training
12-29-2019 15:10:34
12-29-2019 15:10:34
This was an issue with a previous version of transformers. Please upgrade it to a more recent version for the warning to go away. Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,361
closed
Improve logging message in feature conversion functions
This PR adds the total number of examples to process to the log message produced during the feature conversion step.
12-29-2019 12:36:56
12-29-2019 12:36:56
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=h1) Report > Merging [#2361](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f75bf05ce6a05ef316363de129c29f2e00cacd7b?src=pr&el=desc) will **not change** coverage. > The diff coverage is `0%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2361/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2361 +/- ## ======================================= Coverage 73.23% 73.23% ======================================= Files 87 87 Lines 14985 14985 ======================================= Hits 10975 10975 Misses 4010 4010 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `19.6% <0%> (ø)` | :arrow_up: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/2361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `27.86% <0%> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=footer). Last update [f75bf05...dc69c5c](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great, thanks @simonepri
transformers
2,360
closed
CTRL - RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
## 🐛 Bug Hi, The error `RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'` arise while running CTRL using examples/run_generation.py Model I am using (Bert, XLNet....): **CTRL** Language I am using the model on (English, Chinese....): **English** The problem arise when using: running CTRL using run_generation.py `python examples/run_generation.py --model_type ctrl --model_name ctrl --temperature 0.2 --repetition 1.2` Full trace: ``` Traceback (most recent call last): File "examples/run_generation.py", line 236, in <module> main() File "examples/run_generation.py", line 222, in main repetition_penalty=args.repetition_penalty, File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 43, in decorate_no_grad return func(*args, **kwargs) File "/media/disk1/guytevet/transformers/src/transformers/modeling_utils.py", line 744, in generate effective_batch_size, File "/media/disk1/guytevet/transformers/src/transformers/modeling_utils.py", line 775, in _generate_no_beam_search outputs = self(**model_inputs) File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/media/disk1/guytevet/transformers/src/transformers/modeling_ctrl.py", line 520, in forward inputs_embeds=inputs_embeds, File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/media/disk1/guytevet/transformers/src/transformers/modeling_ctrl.py", line 388, in forward inputs_embeds = self.w(input_ids) File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 118, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/functional.py", line 1454, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index' ``` ## Environment * OS: Ubuntu 18.04.2 * Python version: 3.6 * PyTorch version: 1.0.1.post2 * PyTorch Transformers version (or branch): master, installed from source -e git+https://github.com/huggingface/transformers.git@f75bf05ce6a05ef316363de129c29f2e00cacd7b#egg=transformers * Using GPU ? Yes * Distributed of parallel setup ?
12-29-2019 08:55:27
12-29-2019 08:55:27
Upgrading torch to 1.3.1 solves the issue<|||||>I have the same problem even with torch==1.3.1 I think this should be re-opened<|||||>I have the same issue for generating with gpt2 Here is the error log: ``` File "run_generation.py", line 236, in <module> main() File "run_generation.py", line 222, in main repetition_penalty=args.repetition_penalty, File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad return func(*args, **kwargs) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_utils.py", line 744, in generate effective_batch_size, File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_utils.py", line 775, in _generate_no_beam_search outputs = self(**model_inputs) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 589, in forward inputs_embeds=inputs_embeds, File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 456, in forward inputs_embeds = self.wte(input_ids) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/functional.py", line 1484, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select ``` I have torch 1.3.0 installed.<|||||>@ehsan-soe check my last PR #2377, solves the issue. <|||||>@alberduris Thanks 👍 <|||||>This seems to be an issue with transformers 2.3.0, as I was able to run the generation code successfully by checkout tag v2.2.2<|||||>add device and assign the model to it ``` ... device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) ... ``` assign also the tensor to the device ``` ... sentence = 'Today, scientists confirmed the worst possible outcome: the massive asteroid will collide with Earth' context_tokens = tokenizer.encode(sentence, add_special_tokens=False) context = torch.tensor(context_tokens, dtype=torch.long) context = context.to(device) ... ```
transformers
2,359
closed
Confusion about the target_mapping parameter of the xlnet model
Why is the code at https://github.com/huggingface/transformers/blob/f75bf05ce6a05ef316363de129c29f2e00cacd7b/src/transformers/modeling_xlnet.py#L1029 ` target_mapping[0, 0, -1] = 1.0`? I think it should be ` target_mapping[:, :, -1] = 1.0`. And I'm confused about the `target_mapping` parameter: what is the difference between it and the `perm_mask` parameter?
12-29-2019 08:20:11
12-29-2019 08:20:11
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,358
closed
Quickstart BERT Example: Assertion Error
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Bert Language I am using the model on (English, Chinese....): English The problem arises when: * I run the official BERT Example in my local Jupyter Lab environment: Copy pasted the code and ran it in one cell. ## To Reproduce Steps to reproduce the behavior: 1. Download and install Transformers from source 2. Start Jupyter lab 3. Copy paste and run the Quickstart BERT Example <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior Expected the assertion to pass. ## Environment * OS: Oracle Linux Server 7.7 * Python version: 3.7.5 * PyTorch version: 1.3.1 * PyTorch Transformers version (or branch): master * Using GPU ? No * Distributed of parallel setup ? No ## Additional context <!-- Add any other context about the problem here. --> ![transformers_bug](https://user-images.githubusercontent.com/4649183/71553748-ff6e9e80-2a3e-11ea-813e-51d291d78870.png)
12-29-2019 07:28:52
12-29-2019 07:28:52
Not quite sure what happened. Restarted my kernel after installing packages from ```examples/requirements.txt``` and is fixed. Closing the issue.
transformers
2,357
closed
GLUE benchmark score for XLNet_base_cased?
## ❓ Questions & Help Can someone provide the GLUE benchmark scores for different GLUE tasks. Or a script for preforming predictions on the test files will be really helpful.
12-29-2019 03:32:27
12-29-2019 03:32:27
Have you looked at the [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py) script ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,356
closed
GPT2 should not store/compute cached activations during finetuning
This PR tries to fix the issue with large memory usage from GPT2 during fine-tuning.

## Quick estimations
@LysandreJik compared memory usage with @minimaxir GPT2-simple (https://github.com/minimaxir/gpt-2-simple):

*Small model*, batch size 4, sequence length 512 (roughly similar):
- us => 9.9GB,
- GPT2-simple => 8.5GB

Increasing to a 1024 length:
- us => 20.4GB...,
- GPT2-simple => still 8.5GB

*Medium model*, batch size of 4, sequence length of 512:
- us => 23.36GB. OOM on a titan with a 1024 seq len.
- GPT2-simple throws an error related to layers not contained in the checkpoint

## Possible reason
Investigating our `run_lm_finetuning` script and GPT2 model showed that we are always computing/storing cached hidden-states (which are normally only useful for decoding). This PR attempts to fix this most probable source of large memory usage. It also cleans up the GPT2 codebase a little bit. I haven't tried it yet on a large-scale test. A hypothetical sketch of the idea is shown below.

cc @LysandreJik @arnicas
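For illustration only, a hypothetical sketch of the idea, not the actual diff in this PR: the cached key/value states are only needed for incremental decoding, so a fine-tuning script could turn them off through the `output_past` flag that appears in the GPT-2 configs dumped later in this thread.
```python
# Hypothetical illustration: disable the "presents" key/value cache during
# fine-tuning, since it is only useful for incremental decoding.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config.from_pretrained("gpt2")
config.output_past = False  # do not collect/return the key/value cache
model = GPT2LMHeadModel.from_pretrained("gpt2", config=config)
model.train()
```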
12-28-2019 16:34:21
12-28-2019 16:34:21
Not sure which size of GPT-2 you're testing with, but the 355M version utilizes gradient checkpointing for finetuning in gpt-2-simple, which is not the case with the 124M version w/ Transformers. That might be a useful test case.<|||||>I just tried this with gpt-2-medium on my poetry dataset and have the same memory error as before. Complete info below: ```` python run_lm_finetuning.py --output_dir=output --model_type=gpt2 --model_name_or_path=gpt2-medium --do_train --train_data_file=all_gen_lines.txt --per_gpu_train_batch_size=1 12/29/2019 17:48:47 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: False 12/29/2019 17:48:47 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-config.json from cache at /home/jupyter/.cache/torch/transformers/98aa65385e18b0efd17acd8bf64dcdf21406bb0c99c801c2d3c9f6bfd1f48f29.5f9150c569dadadaa1e66830d29254aa5cf43f8ccd76dc0c81e0102c67032367 12/29/2019 17:48:47 - INFO - transformers.configuration_utils - Model config { "attn_pdrop": 0.1, "embd_pdrop": 0.1, "finetuning_task": null, "initializer_range": 0.02, "is_decoder": false, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_embd": 1024, "n_head": 16, "n_layer": 24, "n_positions": 1024, "n_special": 0, "num_labels": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "predict_special_tokens": true, "pruned_heads": {}, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "torchscript": false, "use_bfloat16": false, "vocab_size": 50257 } 12/29/2019 17:48:48 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-vocab.json from cache at /home/jupyter/.cache/torch/transformers/f20f05d3ae37c4e3cd56764d48e566ea5adeba153dcee6eb82a18822c9c731ec.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 12/29/2019 17:48:48 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-merges.txt from cache at /home/jupyter/.cache/torch/transformers/6d882670c55563617571fe0c97df88626fb5033927b40fc18a8acf98dafd4946.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda 12/29/2019 17:48:48 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-pytorch_model.bin from cache at /home/jupyter/.cache/torch/transformers/4b337a4f3b7d3e1518f799e238af607498c02938a3390152aaec7d4dabca5a02.8769029be4f66a5ae1055eefdd1d11621b901d510654266b8681719fff492d6e 12/29/2019 17:49:02 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=1024, cache_dir='', config_name='', device=device(type='cuda'), do_eval=False, do_lower_case=False, do_train=True, eval_all_checkpoints=False, eval_data_file=None, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_steps=-1, mlm=False, mlm_probability=0.15, model_name_or_path='gpt2-medium', model_type='gpt2', n_gpu=2, no_cuda=False, num_train_epochs=1.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=1, save_steps=50, save_total_limit=None, seed=42, server_ip='', server_port='', tokenizer_name='', 
train_data_file='all_gen_lines.txt', warmup_steps=0, weight_decay=0.0) 12/29/2019 17:49:02 - INFO - __main__ - Loading features from cached file gpt2-medium_cached_lm_1024_all_gen_lines.txt.bin 12/29/2019 17:49:02 - INFO - __main__ - ***** Running training ***** 12/29/2019 17:49:02 - INFO - __main__ - Num examples = 2061 12/29/2019 17:49:02 - INFO - __main__ - Num Epochs = 1 12/29/2019 17:49:02 - INFO - __main__ - Instantaneous batch size per GPU = 1 12/29/2019 17:49:02 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2 12/29/2019 17:49:02 - INFO - __main__ - Gradient Accumulation steps = 1 12/29/2019 17:49:02 - INFO - __main__ - Total optimization steps = 1031 Epoch: 0%| | 0/1 [00:00<?, ?it/s/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' Traceback (most recent call last): | 1/1031 [00:05<1:30:32, 5.27s/it] File "run_lm_finetuning.py", line 717, in <module> main() File "run_lm_finetuning.py", line 667, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_finetuning.py", line 298, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/_utils.py", line 385, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 549, in forward inputs_embeds=inputs_embeds) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 460, in forward head_mask=head_mask[i]) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 236, in forward m = self.mlp(self.ln_2(x)) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 214, in forward h = self.act(self.c_fc(x)) File "/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 100, in gelu return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))) RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 11.17 GiB total capacity; 10.77 GiB already allocated; 14.06 MiB free; 66.92 MiB cached) Epoch: 0%| | 0/1 [00:05<?, ?it/s] Iteration: 0%| | 1/1031 [00:05<1:41:31, 5.91s/it] (base) jupyter@lynn-ukpavilion:~/code/transformers/examples$ nvidia-smi Sun Dec 29 17:49:34 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 | | N/A 49C P0 71W / 149W | 0MiB / 11441MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla K80 Off | 00000000:00:05.0 Off | 0 | | N/A 67C P0 88W / 149W | 0MiB / 11441MiB | 94% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ (base) jupyter@lynn-ukpavilion:~/code/transformers/examples$ (base) jupyter@lynn-ukpavilion:~/code/transformers/examples$ git status On branch fix-gpt2-finetuning-memory ````<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@thomwolf @LysandreJik : Curious about the status of this. It seems like the memory issues still exists with "run_lm_finetuning.py" and GPT-2. 
For instance, even a batch size of 1 doesn't help prevent OOM error when fine-tuning GPT-2 large with a sequence length of 1024 (despite using FP-16). Is there anything we could do here (apart from gradient checkpointing) that would make the memory usage lower as Thomas listed in his first comment above? Thanks.
transformers
2,355
closed
transformers command not found after installing transformers using pip
I wanted to convert TF checkpoints to PyTorch saved files, so I followed the instructions at https://huggingface.co/transformers/converting_tensorflow_models.html. I installed PyTorch, TensorFlow and then transformers, but when I ran the command my system reported `transformers: command not found`. What could the issue be here?
12-28-2019 07:35:02
12-28-2019 07:35:02
I have this problem also. Installed with pip3 (maybe this is the necessary information)<|||||>Hi @ManasRMohanty, @DaniilRoman, In 2.3.0 we introduced some new commands from the cli, which are now provided through **transformers-cli**. Can you please try the following: ```bash transformers-cli convert --model_type <model_type> --tf_checkpoint /path/to/tf_model.ckpt --config /path/to/model.json --pytorch_dump_output /path/to/pytorch_model.bin ``` Let us know :) <|||||>@mfuntowicz Do you want to update the doc? (i can do it too if needed)<|||||>@mfuntowicz @julien-c Yes, the above worked for me in linux. Thank you. Also, I checked in https://huggingface.co/transformers/converting_tensorflow_models.html and I can see that the document is updated, but the convert parameter is missing there, so please update that.<|||||>@ManasRMohanty I've updated the documentation with the missing keyword. Thanks for reporting 👍
transformers
2,354
closed
[debug] Debug Heisenbug, the old school way.
12-28-2019 05:01:15
12-28-2019 05:01:15
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=h1) Report > Merging [#2354](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bfe870be654a1fc54c5479f9ad0875492d9cd959?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2354/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2354 +/- ## ======================================= Coverage 73.32% 73.32% ======================================= Files 87 87 Lines 14964 14964 ======================================= Hits 10972 10972 Misses 3992 3992 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=footer). Last update [bfe870b...c8c4ecd](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,353
closed
[http] Tweak http user-agent
12-28-2019 04:46:07
12-28-2019 04:46:07
transformers
2,352
closed
Cli tweaks
12-28-2019 04:22:14
12-28-2019 04:22:14
Awesome!
transformers
2,351
closed
GLUE Benchmark Hyperparameters
## ❓ Questions & Help In the `run_glue.py` script, are the hyperparameters for running BERT, RoBERTa, ALBERT, etc. exactly the same? The documentation does not seem to outline separate hyperparameters, but the papers of each respective model report different hyperparameter ranges. I'm wondering whether this was taken into account when reporting the benchmark results.
12-28-2019 00:12:42
12-28-2019 00:12:42
Hi, each result available on the [example page](https://huggingface.co/transformers/examples.html) shows the command that was used, displaying the hyper-parameters that are different from the defaults.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,350
closed
Trouble fine tuning BERT language model
## 🐛 Bug Hello, I'm having trouble running **run_lm_finetuning.py** script. I'm using pytorch 1.2, python 3.5, CUDA 9.2, Ubuntu 18.04. When I run ``` $python run_lm_finetuning.py --output_dir= my_output_dir/ --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm ``` I obtain: ``` ***** Running training ***** Num examples = 4517 Num Epochs = 1 Instantaneous batch size per GPU = 4 Total train batch size (w. parallel, distributed & accumulation) = 4 Gradient Accumulation steps = 1 Total optimization steps = 1130 Epoch: 0%| | 0/1 [00:00<?, ?it/s] Traceback (most recent call last): | 0/1130 [00:00<?, ?it/s] File "lm_fine_backup.py", line 712, in <module> main() File "lm_fine_backup.py", line 662, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "lm_fine_backup.py", line 298, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/mbugueno/.local/lib/python3.5/site-packages/transformers/modeling_bert.py", line 899, in forward masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1)) File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 916, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/functional.py", line 2009, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/functional.py", line 1838, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:97 ``` I'm using the official example script in WikiText-2 dataset. Any insights?
12-27-2019 23:16:19
12-27-2019 23:16:19
I have the same question but no answer. In my case, I ran it in Google Colab and used easydict to handle the argument parser. /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 1836 .format(input.size(0), target.size(0))) 1837 if dim == 2: -> 1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 1839 elif dim == 4: 1840 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:97<|||||>I faced this problem when I checked out the latest code, but it worked when I checked out the v2.3.0 version.<|||||>I'm facing the same problem.<|||||>It works fine when I build from source.<|||||>I assume you installed transformers with pip; there is a bug in RoBERTa. You can manually fix it by editing the transformers/modeling_roberta.py file at line 291 - https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_roberta.py#L291 Change: `loss_fct = CrossEntropyLoss(-1)` to `loss_fct = CrossEntropyLoss()`<|||||>Thank you guys! Especially @orena1! Changing the definition of the loss function (CrossEntropyLoss(-1)) worked for me. Sorry for the late reply, but I'm very happy: I can finally fine-tune the model! hahaha
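For what it's worth, a small sketch of why dropping the argument helps, assuming the fine-tuning script marks unmasked positions with -100 (as in recent versions of the example script): an `ignore_index` of -1 no longer matches that fill value, while the default `ignore_index` of -100 does.

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 10)
labels = torch.tensor([1, -100, 3, -100])  # -100 marks tokens that carry no masked-LM label

loss = nn.CrossEntropyLoss()(logits, labels)  # fine: the default ignore_index is -100
try:
    nn.CrossEntropyLoss(ignore_index=-1)(logits, labels)  # reproduces the out-of-range target error
except (RuntimeError, IndexError) as err:
    print("mismatched ignore_index:", err)
```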
transformers
2,349
closed
Enforce target version for black.
This should stabilize formatting. As suggested by @julien-c.
12-27-2019 21:49:14
12-27-2019 21:49:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=h1) Report > Merging [#2349](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bfe870be654a1fc54c5479f9ad0875492d9cd959?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2349/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2349 +/- ## ======================================= Coverage 73.32% 73.32% ======================================= Files 87 87 Lines 14964 14964 ======================================= Hits 10972 10972 Misses 3992 3992 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `89.9% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `93.93% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.34% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `36.76% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.2% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `89.1% <ø> (ø)` | :arrow_up: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.31% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `25.3% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `35.71% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.84% <ø> (ø)` | :arrow_up: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=footer). Last update [bfe870b...238a778](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,348
closed
CamembertForQuestionAnswering
Hi, is it possible to add a _CamembertForQuestionAnswering_ class that extends _RobertaForQuestionAnswering_ to **src/transformers/modeling_camembert.py**, **src/transformers/__init__.py** and **examples/run_squad.py**? I had to patch it in manually in order to run _run_squad.py_ with a CamemBERT-like network. Thanks.
12-27-2019 21:26:47
12-27-2019 21:26:47
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
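For reference, a minimal sketch of the class requested above, mirroring how the other Camembert* classes wrap their RoBERTa counterparts; the archive-map constant name is an assumption.

```python
from transformers.configuration_camembert import CamembertConfig
from transformers.modeling_camembert import CAMEMBERT_PRETRAINED_MODEL_ARCHIVE_MAP  # assumed constant name
from transformers.modeling_roberta import RobertaForQuestionAnswering


class CamembertForQuestionAnswering(RobertaForQuestionAnswering):
    """Span-classification head on top of CamemBERT, reusing the RoBERTa implementation."""

    config_class = CamembertConfig
    pretrained_model_archive_map = CAMEMBERT_PRETRAINED_MODEL_ARCHIVE_MAP
```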
transformers
2,347
closed
revise T5 code to support one step decoding during generation
@thomwolf Hi, I'm new to contributing to this project. I revised the T5 code to support one-step decoding during generation, based on your implementation. Besides adding a `decode_step` function, I also revised some other functions to pass in the `cache` variable. Meanwhile, I added a `bos_token` to tokenizer_t5 so that it can be used as the first token during decoding. Example usage is as follows: ``` Examples: encoder_hidden_states = model.encode(input_ids) cache = model.init_state_from_encoder(encoder_hidden_states) next_token = input_ids.new_full((batch_size, 1), tokenizer.bos_token_id) generated = [next_token] for i in range(100): output, cache = model.decode_step(cache, input_ids=next_token) next_token = torch.argmax(output, dim=-1).unsqueeze(-1) generated += [next_token] generated = torch.cat(generated, dim=1).tolist() ``` Let me know whether it is useful and can be merged.
12-27-2019 19:33:44
12-27-2019 19:33:44
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,346
closed
Why does the BertForQuestionAnswering sample code duplicate the [CLS] token?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> The BertForQuestionAnswering sample code creates duplicate [CLS] tokens. Wondering why: ``` tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad') question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]" input_ids = tokenizer.encode(input_text) token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))] start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids])) all_tokens = tokenizer.convert_ids_to_tokens(input_ids) print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])) # a nice puppet tokenizer.decode(input_ids) #'[CLS] [CLS] who was jim henson? [SEP] jim henson was a nice puppet [SEP] [SEP]' ``` If I remove the extra [CLS], the extraction doesn't work. It's exactly two tokens off: ``` input_ids = tokenizer.encode(input_text, add_special_tokens=False) ...rerun same code as above... print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])) # was a ``` What am I doing wrong? How can I get the extraction working without duplicate [CLS] tokens? (and duplicate final [SEP] tokens BTW). The sample code comes right from the docs: https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering
12-27-2019 15:56:48
12-27-2019 15:56:48
Indeed, this is a mistake, thank you for raising an issue. It should have been fixed with 74755c89b92e0c0c027221c13fd034afed4d2136.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
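For reference, a minimal sketch of the corrected usage of the snippet above, assuming `encode_plus` inserts the special tokens itself and returns the `token_type_ids` (as in recent versions), so nothing needs to be added by hand:

```python
# Let the tokenizer add [CLS]/[SEP] so they are not duplicated.
encoding = tokenizer.encode_plus(question, text)
input_ids, token_type_ids = encoding["input_ids"], encoding["token_type_ids"]
start_scores, end_scores = model(torch.tensor([input_ids]),
                                 token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(" ".join(all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1]))
```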
transformers
2,345
closed
Feature Request: Pipeline for Query/Document relevance
# Pipelines for IR tasks ## Justification In the last few years, a bunch of deep architectures were proposed for ad-hoc retrieval, most with limited success (if any). However, BERT(et al)-based models are finally pushing the state of the art for ad-hoc retrieval. In fact, the last TREC had a [Deep Learning track](https://microsoft.github.io/TREC-2019-Deep-Learning/) where "NNLM" (neural network language models) [dominated](https://twitter.com/UnderdogGeek/status/1206595356017848324/photo/1) both traditional models (mostly BM25 and variations) and other deep models. So, it's a current trend that BERT should be the new baseline for any proposed model for IR. ## Description There should be a [pipeline-like](https://github.com/huggingface/transformers#quick-tour-of-pipelines) feature that is able to score pairs of documents and user queries. It would probably be pre-trained on a dataset like the [MSMarco dataset for TREC'19](https://microsoft.github.io/TREC-2019-Deep-Learning/). Ideally, this would also support a list of documents to rank and return scores. In real-life applications, one would probably want to combine BERT scores with traditional baseline scores (like QL or BM25). So the score is needed (or, even better, combine something like [pyserini](https://github.com/castorini/pyserini) in the backend?). ## Usage ``` from transformers import pipeline # Allocate a pipeline for document relevancy nlp = pipeline('document-relevancy') nlp({ 'query': 'can hives be a sign of pregnancy', 'context': '<document content>' }) >>> {'score': 0.28756016668193496} ``` I have already used DistilBERT in a paper to appear at ECIR 2020 (Diagnosing BERT with Retrieval Heuristics), and would be able to contribute the model for this (even for bert-base). I would also love to help implement this, but will probably need some guidance, if anyone is willing to help. Thanks!
12-27-2019 15:44:38
12-27-2019 15:44:38
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>unstale because this is very interesting<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,344
closed
How to run bert without checkpoints
I would like to train BERT from scratch, with no checkpoints, for my language (PT-BR) and compare it with the multilingual model. I am currently running the native BERT code provided by Google to get checkpoints from scratch and then converting them to PyTorch, but it is a time-consuming process. Can anybody help me?
12-27-2019 14:35:26
12-27-2019 14:35:26
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,343
closed
How to finetune PreTrainedEncoderDecoder
## ❓ Questions & Help <!-- A clear and concise description of the question. --> PreTrainedEncoderDecoder is great. Now I have the following questions: (1) How do I use my own data to fine-tune the PreTrainedEncoderDecoder? (2) If I want to use pretrained RoBERTa as both encoder and decoder, what should I do?
12-27-2019 12:34:52
12-27-2019 12:34:52
Hi, Thanks for the nice work. I have the same question. Would appreciate your reply. <|||||>Yes, an example would be nice.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,342
closed
Tokenizers as optional dependency
- `tokenizers` as an optional dependency (`pip install -e .[fast]`) - code formatting with `make style` and `make quality`
12-27-2019 12:15:53
12-27-2019 12:15:53
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,341
closed
"Reformer: The Efficient Transformer" looks awesome. I'd love to see it in the library.
# 🌟New model addition ## Model description Efficient Transformer with locality-sensitive hashing and reversible layers https://openreview.net/forum?id=rkgNKkHtvB <!-- Important information --> ## Open Source status * [. ] the model implementation is available: (give details) There is an implementation from google https://github.com/google/trax/blob/master/trax/models/research/reformer.py * [ ] the model weights are available: (give details) * [. ] who are the authors: (mention them) Nikita Kitaev, Lukasz Kaiser, Anselm Levskaya ## Additional context <!-- Add any other context about the problem here. -->
12-27-2019 06:29:22
12-27-2019 06:29:22
I have started to refactor the original source code in Pytorch if you'd like to help I'd greatly appreciate it! [https://github.com/zbloss/reformer](https://github.com/zbloss/reformer)<|||||>I have a working implementation at https://github.com/lucidrains/reformer-pytorch !<|||||>Any update on adding this to the library?<|||||>They published in their blog about it https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html?m=1<|||||>Hence my interest in a huggingface implementation :) <|||||>Looking forward to see this model in the transformers lib :)<|||||>Don't think we should rush this one. The reformer paper is pretty tricky to implement in a clean way, plus there aren't any pre-trained models that use it yet. Just one person's opinion, though.<|||||>The implementation by @lucidrains seems to work https://github.com/lucidrains/reformer-pytorch ; it'd be cool if it was included in the transformers library. It seems strange to me that no pretrained Reformer has been uploaded since the paper was released, any ideas why? is it possible that it doesn't work in practice as stated by the authors in the paper? Anyone who has trained a Reformer on their own and have tried it to solve a real problem? Thank you very much in advance<|||||>Same here, curious to know why. Thank you!<|||||>https://github.com/google/trax/blob/master/trax/models/reformer/machine_translation.ipynb There should be an pretrained model now. Would be very happy to see Reformer model in this project.<|||||>+1<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Closed by @patrickvonplaten <|||||>has this been done ?
transformers
2,340
closed
Bert cross attention
## ❓ Questions & Help In the standard Transformer/Bert architecture, what is the intuition behind cross attention doing a weighted average over the encoder_hidden_states? What happens if we set the value layer to decoder hidden_state instead? See the cross attention value layer being set to the encoder_hidden_states at [modeling_bert.py#L238](https://github.com/huggingface/transformers/blob/8c67b529f615cc24c46864b8323d2d47a15ccd58/src/transformers/modeling_bert.py#L238), and the weighted average being taken at [modeling_bert.py#L266](https://github.com/huggingface/transformers/blob/8c67b529f615cc24c46864b8323d2d47a15ccd58/src/transformers/modeling_bert.py#L266)
12-27-2019 05:37:53
12-27-2019 05:37:53
I would imagine the idea is to incorporate a strong presence of the encoder hidden states; otherwise the conditioning on the encoder might be weak. We already perform an attention step without the encoder hidden states (self-attention) before the cross attention anyway.
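As a side note on the question above: in the standard formulation, the decoder states only supply the queries, while both keys and values are projections of the encoder states, so the output is a weighted average of projected encoder states (notation is the usual one, not taken from the thread):

```latex
Q = H_{\mathrm{dec}} W_Q, \quad K = H_{\mathrm{enc}} W_K, \quad V = H_{\mathrm{enc}} W_V
\mathrm{CrossAttn}(H_{\mathrm{dec}}, H_{\mathrm{enc}}) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

If V were taken from the decoder instead, the block would only re-weight the decoder's own states; the encoder could steer the attention weights but contribute no content, so source-side information would stop flowing into the decoder representation.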
transformers
2,339
closed
read each lines, require less memory
The original code reads the whole dataset at once, so it requires a lot of memory to handle a huge corpus. This change: * reads the corpus line by line * flattens the 2-dimensional array with itertools.chain, which requires less memory and is fast
12-27-2019 04:51:17
12-27-2019 04:51:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=h1) Report > Merging [#2339](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/537a1de53d824b5851bce32cb5eafaef3f9ce5ef?src=pr&el=desc) will **increase** coverage by `1.11%`. > The diff coverage is `75.31%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2339/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2339 +/- ## ========================================= + Coverage 73.49% 74.6% +1.11% ========================================= Files 87 87 Lines 14793 14802 +9 ========================================= + Hits 10872 11043 +171 + Misses 3921 3759 -162 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_bert\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `67.46% <ø> (ø)` | :arrow_up: | | [src/transformers/configuration\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21tYnQucHk=) | `55.55% <ø> (ø)` | :arrow_up: | | [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0% <ø> (ø)` | :arrow_up: | | [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `0% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `36.76% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `17.6% <0%> (ø)` | :arrow_up: | | [src/transformers/tokenization\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `35.71% <0%> (ø)` | :arrow_up: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `27.86% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `25.3% <0%> (ø)` | :arrow_up: | | ... and [72 more](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=footer). Last update [537a1de...9166b24](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>We recently merged a `LineByLineTextDataset` that should be equivalent: https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L124 Feedback welcome.
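As an aside, a rough sketch of the line-by-line pattern the PR above describes (this is not the actual diff; `tokenizer` and the file path are placeholders):

```python
import itertools


def load_tokenized(path, tokenizer):
    # Tokenize one line at a time instead of reading the whole file into memory.
    with open(path, encoding="utf-8") as f:
        per_line = (tokenizer.tokenize(line) for line in f)
        # Flatten the stream of token lists lazily with itertools.chain.
        tokens = list(itertools.chain.from_iterable(per_line))
    return tokenizer.convert_tokens_to_ids(tokens)
```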
transformers
2,338
closed
Summarization ROUGE scores don't match those of the paper ...
## ❓ Questions & Help Just ran the `run_summarization.py` script, with the parameters specified [here](https://github.com/huggingface/transformers/tree/master/examples/summarization), and the ROUGE scores are far off from what is reported in the related paper. The ROUGE scores reported in the [PreSumm paper](https://github.com/nlpyang/PreSumm) (R1, R2, RL): > BertSumExtAbs | 42.13 | 19.60 | 39.18 The ROUGE scores after running the HF script: > ROUGE 1: > F1 = .275 > Precision = .299 > Recall = .260 > > ROUGE 2: > F1 = .161 > Precision = .184 > Recall = .149 > > ROUGE L: > F1 = .305 > Precision = .326 > Recall = .290 The README file seems to suggest that running the script as is, with all the stories in a single directory, will give you ROUGE scores similar to those of the paper. That doesn't seem to be the case. ***Any ideas why? Or what I may be doing wrong here?*** Thanks much! FYI ... ran the script as in the README: ``` python run_summarization.py \ --documents_dir $STORIES_DIR --summaries_output_dir $OUTPUT_SUM_DIR --no_cuda false \ --batch_size 4 \ --min_length 50 \ --max_length 200 \ --beam_size 5 \ --alpha 0.95 \ --block_trigram true \ --compute_rouge true ```
12-26-2019 23:46:57
12-26-2019 23:46:57
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,337
closed
Dropout rates to be updated in all ALBERT v2 configs
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): ALBERT v2 Language I am using the model on (English, Chinese....): English The problem arises when using: * [x] the official example scripts: run_squad.py * [ ] my own modified scripts: The task I am working on is: * [x] an official GLUE/SQUaD task: SQuAD v2.0 * [ ] my own task or dataset: ## To Reproduce Steps to reproduce the behavior: 1. Just choose one of the following default ALBERT v2 configs: base, large, xlarge <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior As stated in the updated ALBERT TensorFlow repo from Google Research, model v2 introduces no dropout at all for downstream tasks, like SQuAD, MRPC or CoLA. Following the discussion in this [issue](https://github.com/google-research/ALBERT/issues/23), the model configurations on TF-Hub were wrong, and so were the ones used by transformers (hosted on Amazon S3). <s>While configs on TF-Hub will be updated in the near future</s> As the configs on TF-Hub have already been updated, transformers' ones should be updated too: the parameters `attention_probs_dropout_prob` and `hidden_dropout_prob` should both be 0 for all v2 configs. ## Environment * OS: Platform Linux-4.14.152-98.182.amzn1.x86_64-x86_64-with-glibc2.9 * Python version: Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) * PyTorch version: PyTorch 1.3.1 * PyTorch Transformers version (or branch): * Using GPU? YES, 4x NVIDIA V100 16GB * Distributed or parallel setup? No * Any other relevant information: Using Amazon AWS Deep Learning Linux AMI 26.0 ## Additional context Hope this will be useful!
12-26-2019 21:04:17
12-26-2019 21:04:17
I found it has already been updated: https://s3.amazonaws.com/models.huggingface.co/bert/albert-xxlarge-v2-config.json Did I miss something?<|||||>The base, large and xlarge v2 configs have to be updated too, as confirmed by this [issue](https://github.com/google-research/ALBERT/issues/23) in the official Google Research repository. <|||||>Thanks. Will pay attention to this one. Hope it will be fixed soon.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
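In the meantime, the v2 behaviour can be forced locally for the issue above; a sketch, assuming config attributes can be overridden through keyword arguments to `from_pretrained`:

```python
from transformers import AlbertConfig, AlbertForQuestionAnswering

# Zero out the dropout values that the hosted v2 configs currently set incorrectly.
config = AlbertConfig.from_pretrained(
    "albert-xlarge-v2",
    attention_probs_dropout_prob=0.0,
    hidden_dropout_prob=0.0,
)
model = AlbertForQuestionAnswering.from_pretrained("albert-xlarge-v2", config=config)
```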
transformers
2,336
closed
TypeError: Expected Operation, Variable, or Tensor, got None while saving tensorflow model
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): TFAlbert Language I am using the model on (English, Chinese....): English The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name): GLUE * [ ] my own task or dataset: (give details) ## Environment * OS: Linux * Python version: Python 3.7.5 / * Tensorflow version: 2.0.0 * Using GPU ? GPU **System information** - Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux - TensorFlow installed from (source or binary): Source - TensorFlow version: '2.0.0' - Python version: Python 3.7.5 /Conda - CUDA/cuDNN version: cuda10.0_0/cudnn-7.6.5 - GPU model and memory: Tesla V100-PCIE / 32 GB memory **Describe the current behavior** **I am getting TypeError: Expected Operation, Variable, or Tensor, got None while saving the model using model.save('../output/my_model')** --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-49-5ab71d0ebc23> in <module> ----> 1 model.save('../output/my_model') /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options) 973 """ 974 saving.save_model(self, filepath, overwrite, include_optimizer, save_format, --> 975 signatures, options) 976 977 def save_weights(self, filepath, overwrite=True, save_format=None): /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options) 113 else: 114 saved_model_save.save(model, filepath, overwrite, include_optimizer, --> 115 signatures, options) 116 117 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options) 72 # default learning phase placeholder. 73 with K.learning_phase_scope(0): ---> 74 save_lib.save(model, filepath, signatures, options) 75 76 if not include_optimizer: /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/saved_model/save.py in save(obj, export_dir, signatures, options) 868 if signatures is None: 869 signatures = signature_serialization.find_function_to_export( --> 870 checkpoint_graph_view) 871 872 signatures = signature_serialization.canonicalize_signatures(signatures) /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/saved_model/signature_serialization.py in find_function_to_export(saveable_view) 62 # If the user did not specify signatures, check the root object for a function 63 # that can be made into a signature. 
---> 64 functions = saveable_view.list_functions(saveable_view.root) 65 signature = functions.get(DEFAULT_SIGNATURE_ATTR, None) 66 if signature is not None: /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/saved_model/save.py in list_functions(self, obj) 139 if obj_functions is None: 140 obj_functions = obj._list_functions_for_serialization( # pylint: disable=protected-access --> 141 self._serialization_cache) 142 self._functions[obj] = obj_functions 143 return obj_functions /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in _list_functions_for_serialization(self, serialization_cache) 2420 def _list_functions_for_serialization(self, serialization_cache): 2421 return (self._trackable_saved_model_saver -> 2422 .list_functions_for_serialization(serialization_cache)) 2423 2424 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/base_serialization.py in list_functions_for_serialization(self, serialization_cache) 89 `ConcreteFunction`. 90 """ ---> 91 fns = self.functions_to_serialize(serialization_cache) 92 93 # The parent AutoTrackable class saves all user-defined tf.functions, and /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py in functions_to_serialize(self, serialization_cache) 77 def functions_to_serialize(self, serialization_cache): 78 return (self._get_serialized_attributes( ---> 79 serialization_cache).functions_to_serialize) 80 81 def _get_serialized_attributes(self, serialization_cache): /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes(self, serialization_cache) 92 93 object_dict, function_dict = self._get_serialized_attributes_internal( ---> 94 serialization_cache) 95 96 serialized_attr.set_and_validate_objects(object_dict) /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/model_serialization.py in _get_serialized_attributes_internal(self, serialization_cache) 45 # cache (i.e. this is the root level object). 
46 if len(serialization_cache[constants.KERAS_CACHE_KEY]) == 1: ---> 47 default_signature = save_impl.default_save_signature(self.obj) 48 49 # Other than the default signature function, all other attributes match with /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py in default_save_signature(layer) 204 original_losses = _reset_layer_losses(layer) 205 fn = saving_utils.trace_model_call(layer) --> 206 fn.get_concrete_function() 207 _restore_layer_losses(original_losses) 208 return fn /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs) 774 if self._stateful_fn is None: 775 initializer_map = object_identity.ObjectIdentityDictionary() --> 776 self._initialize(args, kwargs, add_initializers_to=initializer_map) 777 self._initialize_uninitialized_variables(initializer_map) 778 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 406 self._concrete_stateful_fn = ( 407 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 408 *args, **kwds)) 409 410 def invalid_creator_scope(*unused_args, **unused_kwds): /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 1846 if self.input_signature: 1847 args, kwargs = None, None -> 1848 graph_function, _, _ = self._maybe_define_function(args, kwargs) 1849 return graph_function 1850 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs) 2148 graph_function = self._function_cache.primary.get(cache_key, None) 2149 if graph_function is None: -> 2150 graph_function = self._create_graph_function(args, kwargs) 2151 self._function_cache.primary[cache_key] = graph_function 2152 return graph_function, args, kwargs /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 2039 arg_names=arg_names, 2040 override_flat_arg_shapes=override_flat_arg_shapes, -> 2041 capture_by_value=self._capture_by_value), 2042 self._function_attributes, 2043 # Tell the ConcreteFunction to clean up its graph once it goes out of /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 913 converted_func) 914 --> 915 func_outputs = python_func(*func_args, **func_kwargs) 916 917 # invariant: `func_outputs` contains only Tensors, CompositeTensors, /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds) 356 # __wrapped__ allows AutoGraph to swap in a converted function. We give 357 # the function a weak reference to itself to avoid a reference cycle. 
--> 358 return weak_wrapped_fn().__wrapped__(*args, **kwds) 359 weak_wrapped_fn = weakref.ref(wrapped_fn) 360 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saving_utils.py in _wrapped_model(*args) 141 with base_layer_utils.call_context().enter( 142 model, inputs=inputs, build_graph=False, training=False, saving=True): --> 143 outputs_list = nest.flatten(model(inputs=inputs, training=False)) 144 145 try: /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs) 845 outputs = base_layer_utils.mark_as_return(outputs, acd) 846 else: --> 847 outputs = call_fn(cast_inputs, *args, **kwargs) 848 849 except errors.OperatorNotAllowedInGraphError as e: /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs) 290 def wrapper(*args, **kwargs): 291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED): --> 292 return func(*args, **kwargs) 293 294 if inspect.isfunction(func) or inspect.ismethod(func): /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/transformers/modeling_tf_albert.py in call(self, inputs, **kwargs) 783 784 def call(self, inputs, **kwargs): --> 785 outputs = self.albert(inputs, **kwargs) 786 787 pooled_output = outputs[1] /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs) 845 outputs = base_layer_utils.mark_as_return(outputs, acd) 846 else: --> 847 outputs = call_fn(cast_inputs, *args, **kwargs) 848 849 except errors.OperatorNotAllowedInGraphError as e: /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs) 290 def wrapper(*args, **kwargs): 291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED): --> 292 return func(*args, **kwargs) 293 294 if inspect.isfunction(func) or inspect.ismethod(func): /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/transformers/modeling_tf_albert.py in call(self, inputs, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, training) 680 681 embedding_output = self.embeddings( --> 682 [input_ids, position_ids, token_type_ids, inputs_embeds], training=training) 683 encoder_outputs = self.encoder( 684 [embedding_output, extended_attention_mask, head_mask], training=training) /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs) 889 with base_layer_utils.autocast_context_manager( 890 self._compute_dtype): --> 891 outputs = self.call(cast_inputs, *args, **kwargs) 892 self._handle_activity_regularization(inputs, outputs) 893 self._set_mask_metadata(inputs, outputs, input_masks) /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in return_outputs_and_add_losses(*args, **kwargs) 55 inputs = args[inputs_arg_index] 56 args = args[inputs_arg_index + 1:] ---> 57 outputs, losses = fn(inputs, *args, **kwargs) 58 layer.add_loss(losses, inputs) 59 return outputs /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs) 109 training, 110 lambda: replace_training_and_call(True), --> 111 lambda: replace_training_and_call(False)) 112 113 # 
Create arg spec for decorated function. If 'training' is not defined in the /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/tf_utils.py in smart_cond(pred, true_fn, false_fn, name) 57 pred, true_fn=true_fn, false_fn=false_fn, name=name) 58 return smart_module.smart_cond( ---> 59 pred, true_fn=true_fn, false_fn=false_fn, name=name) 60 61 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name) 54 return true_fn() 55 else: ---> 56 return false_fn() 57 else: 58 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn, /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in <lambda>() 109 training, 110 lambda: replace_training_and_call(True), --> 111 lambda: replace_training_and_call(False)) 112 113 # Create arg spec for decorated function. If 'training' is not defined in the /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in replace_training_and_call(training) 104 def replace_training_and_call(training): 105 set_training_arg(training, training_arg_index, args, kwargs) --> 106 return wrapped_call(*args, **kwargs) 107 108 return tf_utils.smart_cond( /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs) 531 if not self.call_collection.tracing: 532 self.call_collection.add_trace(*args, **kwargs) --> 533 return super(LayerCall, self).__call__(*args, **kwargs) 534 535 def get_concrete_function(self, *args, **kwargs): /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds) 455 456 tracing_count = self._get_tracing_count() --> 457 result = self._call(*args, **kwds) 458 if tracing_count == self._get_tracing_count(): 459 self._call_counter.called_without_tracing() /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds) 492 # In this case we have not created variables on the first call. So we can 493 # run the first trace but we should fail if variables are created. 
--> 494 results = self._stateful_fn(*args, **kwds) 495 if self._created_variables: 496 raise ValueError("Creating variables on a non-first call to a function" /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs) 1820 def __call__(self, *args, **kwargs): 1821 """Calls a graph function specialized to the inputs.""" -> 1822 graph_function, args, kwargs = self._maybe_define_function(args, kwargs) 1823 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access 1824 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs) 2148 graph_function = self._function_cache.primary.get(cache_key, None) 2149 if graph_function is None: -> 2150 graph_function = self._create_graph_function(args, kwargs) 2151 self._function_cache.primary[cache_key] = graph_function 2152 return graph_function, args, kwargs /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 2039 arg_names=arg_names, 2040 override_flat_arg_shapes=override_flat_arg_shapes, -> 2041 capture_by_value=self._capture_by_value), 2042 self._function_attributes, 2043 # Tell the ConcreteFunction to clean up its graph once it goes out of /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 913 converted_func) 914 --> 915 func_outputs = python_func(*func_args, **func_kwargs) 916 917 # invariant: `func_outputs` contains only Tensors, CompositeTensors, /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds) 356 # __wrapped__ allows AutoGraph to swap in a converted function. We give 357 # the function a weak reference to itself to avoid a reference cycle. --> 358 return weak_wrapped_fn().__wrapped__(*args, **kwds) 359 weak_wrapped_fn = weakref.ref(wrapped_fn) 360 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs) 513 layer, inputs=inputs, build_graph=False, training=training, 514 saving=True): --> 515 ret = method(*args, **kwargs) 516 _restore_layer_losses(original_losses) 517 return ret /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs) 109 training, 110 lambda: replace_training_and_call(True), --> 111 lambda: replace_training_and_call(False)) 112 113 # Create arg spec for decorated function. 
If 'training' is not defined in the /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/tf_utils.py in smart_cond(pred, true_fn, false_fn, name) 57 pred, true_fn=true_fn, false_fn=false_fn, name=name) 58 return smart_module.smart_cond( ---> 59 pred, true_fn=true_fn, false_fn=false_fn, name=name) 60 61 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name) 54 return true_fn() 55 else: ---> 56 return false_fn() 57 else: 58 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn, /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in <lambda>() 109 training, 110 lambda: replace_training_and_call(True), --> 111 lambda: replace_training_and_call(False)) 112 113 # Create arg spec for decorated function. If 'training' is not defined in the /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in replace_training_and_call(training) 104 def replace_training_and_call(training): 105 set_training_arg(training, training_arg_index, args, kwargs) --> 106 return wrapped_call(*args, **kwargs) 107 108 return tf_utils.smart_cond( /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(inputs, *args, **kwargs) 555 layer_call = _get_layer_call_method(layer) 556 def call_and_return_conditional_losses(inputs, *args, **kwargs): --> 557 return layer_call(inputs, *args, **kwargs), layer.get_losses_for(inputs) 558 return _create_call_fn_decorator(layer, call_and_return_conditional_losses) 559 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in get_losses_for(self, inputs) 1382 losses = [l for l in self.losses if not l._unconditional_loss] 1383 inputs = nest.flatten(inputs) -> 1384 reachable = tf_utils.get_reachable_from_inputs(inputs, losses) 1385 return [l for l in losses if l in reachable] 1386 /app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/tf_utils.py in get_reachable_from_inputs(inputs, targets) 132 outputs = x.consumers() 133 else: --> 134 raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x)) 135 136 for y in outputs: TypeError: Expected Operation, Variable, or Tensor, got None **Describe the expected behavior** **Code to reproduce the issue** `import tensorflow as tf` `import pandas as pd` `from sklearn.model_selection import train_test_split` `import transformers` `from transformers import AlbertConfig` `from transformers import AlbertTokenizer` `from transformers import TFAlbertForSequenceClassification` `from transformers import glue_convert_examples_to_features` `data_df = pd.read_excel("../input/test.xlsx")` `model_dir = '../input/albert_xxlarge_v2/'` `EPOCHS = 3` `MAX_SEQ_LENGTH = 256` `label_list = [0,1]` `config = AlbertConfig.from_pretrained('albert-xxlarge-v2')` `tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2', cache_dir=model_dir)` `model = TFAlbertForSequenceClassification.from_pretrained('albert-xxlarge-v2', ``cache_dir=model_dir, config=config)` `train_df, test_df = train_test_split(data_df[['id','text1', 'text2', 'LABEL']], random_state=42, shuffle=True, test_size=0.20, stratify=data_df['LABEL'])` `train_InputExamples = train_df.apply(lambda x: 
InputExample(guid=x['id'], text_a=x['text1'], text_b=x['text2'], label=x['LABEL']), axis=1)` `train_dataset = glue_convert_examples_to_features(examples=train_InputExamples, tokenizer=tokenizer, max_length=MAX_SEQ_LENGTH, label_list = label_list, output_mode="classification")` `optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08)` `loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)` `metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')` `input_ids_train = []` `attention_mask_train = []` `token_type_ids_train = []` `output_label_train = []` `for f in train_dataset:` `input_ids_train.append(f.input_ids)` `attention_mask_train.append(f.attention_mask)` `token_type_ids_train.append(f.token_type_ids)` `output_label_train.append(f.label)` `model.compile(optimizer=optimizer, loss=loss, metrics=[metric])` `input_ids_train = np.array(input_ids_train)` `attention_mask_train = np.array(attention_mask_train)` `token_type_ids_train = np.array(token_type_ids_train)` `output_label_train = np.array(output_label_train)` `model.fit([input_ids_train,attention_mask_train, token_type_ids_train], y=output_label_train, epochs = EPOCHS, batch_size=4)` `model.save('../output/my_model')`
12-26-2019 18:56:32
12-26-2019 18:56:32
The training of the model is successful, but getting errors only while saving the model<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
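Not a resolution of the SavedModel export error above, but a sketch of a way to persist the fine-tuned weights in the meantime, assuming a `save_pretrained`/`from_pretrained` round trip is sufficient for the use case:

```python
# Writes tf_model.h5 plus config.json instead of going through the Keras SavedModel exporter.
model.save_pretrained("../output/my_model")

# Reload later:
from transformers import TFAlbertForSequenceClassification
model = TFAlbertForSequenceClassification.from_pretrained("../output/my_model")
```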
transformers
2,335
closed
XLNet and RoBERTa embeddings
Referring to Jay Alammar's awesome blog post in which he showed how to create sentence embeddings from BERT (and DistilBERT), can we use the same approach for the XLNet and RoBERTa models as well? http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/ I was mainly thinking of keeping everything the same as what he did, assuming the `<s>` token in RoBERTa contains the classification output, and changing the following line for XLNet, since `<cls>` is at the end of the sequence rather than at the beginning as in BERT: `features = last_hidden_states[0][:,-1,:].numpy()` Any idea if my assumptions are correct?
12-26-2019 18:31:55
12-26-2019 18:31:55
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
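A small sketch of what the positional change in the question above amounts to, following the pooling pattern from the post (the model names and the pooling choice are illustrative assumptions):

```python
import torch
from transformers import RobertaModel, RobertaTokenizer, XLNetModel, XLNetTokenizer

# RoBERTa: the <s> token is first, so pool position 0.
rob_tok = RobertaTokenizer.from_pretrained("roberta-base")
rob = RobertaModel.from_pretrained("roberta-base")
ids = torch.tensor([rob_tok.encode("a nice puppet")])
with torch.no_grad():
    roberta_features = rob(ids)[0][:, 0, :].numpy()

# XLNet: the <cls> token is appended at the end, so pool the last position instead.
xl_tok = XLNetTokenizer.from_pretrained("xlnet-base-cased")
xl = XLNetModel.from_pretrained("xlnet-base-cased")
ids = torch.tensor([xl_tok.encode("a nice puppet")])
with torch.no_grad():
    xlnet_features = xl(ids)[0][:, -1, :].numpy()
```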
transformers
2,334
closed
relativeattentionbias.weight in block 0 EncDecAttention of T5 Model not in original tf model. Where do we get it from?
## ❓ Questions & Help Hi, I was comparing the weights in the original TF model and the PyTorch T5 model, and it looks like there is an extra embedding in the EncDecAttention layer (layer_1) in block_0 (relative_attention_bias.weight). I could find and compare the other embedding weights in the model, but not this particular one. Was this new parameter randomly initialized using the initializer and stored as is for the pre-trained model, or was it fine-tuned somehow and stored?
12-26-2019 17:45:11
12-26-2019 17:45:11
It should also be in the TF version, this is the shared relative attention bias (shared among layers). Do you want to give more details on how you compared both lists of weights and what make you think it's missing?<|||||>Sure. By the way, when we say the TF version, I mean the weights released by Google. So for the TF weights, here's what I do for `T5-Small`: TF: ``` import tensorflow as tf import pprint # to prettify prints var_list = tf.train.list_variables("/path/to/stored/T5/weights") # basically replicated directory from google cloud pprint.pprint(var_list) ``` Pytorch: ``` import transformers import pprint from transformers import T5Model model = T5Model.from_pretrained('t5-small') pytorch_var_list = [x[0] for x in model.named_parameters()] # get names cause we only use them pprint.pprint(pytorch_var_list) ``` TF output for the `small` version looks something like: ``` [('decoder/block_000/layer_000/SelfAttention/k', [512, 512]), ('decoder/block_000/layer_000/SelfAttention/k_slot_vc', [512]), ('decoder/block_000/layer_000/SelfAttention/k_slot_vr', [512]), ('decoder/block_000/layer_000/SelfAttention/o', [512, 512]), ('decoder/block_000/layer_000/SelfAttention/o_slot_vc', [512]), ('decoder/block_000/layer_000/SelfAttention/o_slot_vr', [512]), ('decoder/block_000/layer_000/SelfAttention/q', [512, 512]), ('decoder/block_000/layer_000/SelfAttention/q_slot_vc', [512]), ('decoder/block_000/layer_000/SelfAttention/q_slot_vr', [512]), ('decoder/block_000/layer_000/SelfAttention/relative_attention_bias', [8, 32]), ('decoder/block_000/layer_000/SelfAttention/relative_attention_bias_slot_v', [8, 32]), ('decoder/block_000/layer_000/SelfAttention/v', [512, 512]), ('decoder/block_000/layer_000/SelfAttention/v_slot_vc', [512]), ('decoder/block_000/layer_000/SelfAttention/v_slot_vr', [512]), ('decoder/block_000/layer_000/layer_norm/scale', [512]), ('decoder/block_000/layer_000/layer_norm/scale_slot_v', [512]), ('decoder/block_000/layer_001/EncDecAttention/k', [512, 512]), ('decoder/block_000/layer_001/EncDecAttention/k_slot_vc', [512]), ('decoder/block_000/layer_001/EncDecAttention/k_slot_vr', [512]), ('decoder/block_000/layer_001/EncDecAttention/o', [512, 512]), ('decoder/block_000/layer_001/EncDecAttention/o_slot_vc', [512]), ('decoder/block_000/layer_001/EncDecAttention/o_slot_vr', [512]), ('decoder/block_000/layer_001/EncDecAttention/q', [512, 512]), ('decoder/block_000/layer_001/EncDecAttention/q_slot_vc', [512]), ('decoder/block_000/layer_001/EncDecAttention/q_slot_vr', [512]), # --------------------------------- Note: No relative_attention_bias in layer_001 ('decoder/block_000/layer_001/EncDecAttention/v', [512, 512]), ('decoder/block_000/layer_001/EncDecAttention/v_slot_vc', [512]), ('decoder/block_000/layer_001/EncDecAttention/v_slot_vr', [512]), ('decoder/block_000/layer_001/layer_norm/scale', [512]), ('decoder/block_000/layer_001/layer_norm/scale_slot_v', [512]), ('decoder/block_000/layer_002/DenseReluDense/wi/kernel', [512, 2048]), ('decoder/block_000/layer_002/DenseReluDense/wi/kernel_slot_vc', [2048]), ('decoder/block_000/layer_002/DenseReluDense/wi/kernel_slot_vr', [512]), ('decoder/block_000/layer_002/DenseReluDense/wo/kernel', [2048, 512]), ('decoder/block_000/layer_002/DenseReluDense/wo/kernel_slot_vc', [2048]), ('decoder/block_000/layer_002/DenseReluDense/wo/kernel_slot_vr', [512]), ('decoder/block_000/layer_002/layer_norm/scale', [512]), ('decoder/block_000/layer_002/layer_norm/scale_slot_v', [512]), ('decoder/block_001/layer_000/SelfAttention/k', [512, 512]), ... ... 
# Similar weights for all the other decoder blocks ... ('decoder/block_005/layer_002/layer_norm/scale_slot_v', [512]), ('decoder/final_layer_norm/scale', [512]), ('decoder/final_layer_norm/scale_slot_v', [512]), ('encoder/block_000/layer_000/SelfAttention/k', [512, 512]), ('encoder/block_000/layer_000/SelfAttention/k_slot_vc', [512]), ('encoder/block_000/layer_000/SelfAttention/k_slot_vr', [512]), ('encoder/block_000/layer_000/SelfAttention/o', [512, 512]), ('encoder/block_000/layer_000/SelfAttention/o_slot_vc', [512]), ('encoder/block_000/layer_000/SelfAttention/o_slot_vr', [512]), ('encoder/block_000/layer_000/SelfAttention/q', [512, 512]), ('encoder/block_000/layer_000/SelfAttention/q_slot_vc', [512]), ('encoder/block_000/layer_000/SelfAttention/q_slot_vr', [512]), ('encoder/block_000/layer_000/SelfAttention/relative_attention_bias', [8, 32]), ('encoder/block_000/layer_000/SelfAttention/relative_attention_bias_slot_v', [8, 32]), ('encoder/block_000/layer_000/SelfAttention/v', [512, 512]), ('encoder/block_000/layer_000/SelfAttention/v_slot_vc', [512]), ('encoder/block_000/layer_000/SelfAttention/v_slot_vr', [512]), ('encoder/block_000/layer_000/layer_norm/scale', [512]), ('encoder/block_000/layer_000/layer_norm/scale_slot_v', [512]), ('encoder/block_000/layer_001/DenseReluDense/wi/kernel', [512, 2048]), ('encoder/block_000/layer_001/DenseReluDense/wi/kernel_slot_vc', [2048]), ('encoder/block_000/layer_001/DenseReluDense/wi/kernel_slot_vr', [512]), ('encoder/block_000/layer_001/DenseReluDense/wo/kernel', [2048, 512]), ('encoder/block_000/layer_001/DenseReluDense/wo/kernel_slot_vc', [2048]), ('encoder/block_000/layer_001/DenseReluDense/wo/kernel_slot_vr', [512]), ('encoder/block_000/layer_001/layer_norm/scale', [512]), ('encoder/block_000/layer_001/layer_norm/scale_slot_v', [512]), ... ... # Similar weights for all the other encoder blocks ... ('encoder/block_005/layer_001/layer_norm/scale_slot_v', [512]), ('encoder/final_layer_norm/scale', [512]), ('encoder/final_layer_norm/scale_slot_v', [512]), ('global_step', []), ('shared/embedding', [32128, 512]), ('shared/embedding_slot_vc', [32128]), ('shared/embedding_slot_vr', [512])]``` ``` Pytorch output: ``` ['shared.weight', 'encoder.block.0.layer.0.SelfAttention.q.weight', 'encoder.block.0.layer.0.SelfAttention.k.weight', 'encoder.block.0.layer.0.SelfAttention.v.weight', 'encoder.block.0.layer.0.SelfAttention.o.weight', 'encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', 'encoder.block.0.layer.0.layer_norm.weight', 'encoder.block.0.layer.1.DenseReluDense.wi.weight', 'encoder.block.0.layer.1.DenseReluDense.wo.weight', 'encoder.block.0.layer.1.layer_norm.weight', 'encoder.block.1.layer.0.SelfAttention.q.weight', ... ... # Similar weights for all the other encoder blocks ... 'encoder.block.5.layer.1.layer_norm.weight', 'encoder.final_layer_norm.weight', 'decoder.block.0.layer.0.SelfAttention.q.weight', 'decoder.block.0.layer.0.SelfAttention.k.weight', 'decoder.block.0.layer.0.SelfAttention.v.weight', 'decoder.block.0.layer.0.SelfAttention.o.weight', 'decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', 'decoder.block.0.layer.0.layer_norm.weight', 'decoder.block.0.layer.1.EncDecAttention.q.weight', 'decoder.block.0.layer.1.EncDecAttention.k.weight', 'decoder.block.0.layer.1.EncDecAttention.v.weight', 'decoder.block.0.layer.1.EncDecAttention.o.weight', 'decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight', -----> Where does this guy come from? am I missing something in the original weights? 
'decoder.block.0.layer.1.layer_norm.weight', 'decoder.block.0.layer.2.DenseReluDense.wi.weight', 'decoder.block.0.layer.2.DenseReluDense.wo.weight', 'decoder.block.0.layer.2.layer_norm.weight', ... ... # Similar weights for all the other encoder blocks ... 'decoder.block.5.layer.2.layer_norm.weight', 'decoder.final_layer_norm.weight'] ``` Sorry, now that I think about it, I should have provided this information in the original post itself. So I was wondering where does the weight `decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight` come from cause I don't seem find it in the original tf weights file or am I missing something?<|||||>@thomwolf sorry for the push. Any update on this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@swapnull7 how did you solve this issue?<|||||>Hey @swapnull7 , it seems that that it was a mistake and T5 isn't supposed to have relative attention bias between encoder and decoder. It has been removed in the new version of transformers. I don't know where the pretrained weights for it came from 🤔 https://github.com/huggingface/transformers/issues/8933#issuecomment-739251827
transformers
2,333
closed
Add 'keep_accents' flag to basic tokenizer
Hello! Recently we released our Spanish BERT model (https://github.com/dccuchile/beto) and we found problems with the tokenization for Spanish. The problem is that the basic tokenizer converts the text to NFD and strips the accents. For example: ``` text = "[CLS] compañera [SEP]" tokenized_text = tokenizer.tokenize(text) tokenized_text ['[CLS]', 'compa', '##ner', '##a', '[SEP]'] ``` It changes *ñ* to *n*. Another example: ``` text = "[CLS] acción [SEP]" tokenized_text = tokenizer.tokenize(text) tokenized_text ['[CLS]', 'accion' ,'[SEP]'] ``` It changes *ó* to *o*. That behavior is not wanted for our Spanish model, so in this PR I'm adding a flag to control it. Waiting for your comments, thank you!
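For context, a minimal self-contained sketch of why the accents disappear: for lowercased models the basic tokenizer normalizes the text to NFD and then drops combining marks. The snippet below only mirrors that accent-stripping step with the standard library (it is not the tokenizer's actual code path), and it is exactly the behavior the proposed `keep_accents` flag would skip:

```python
import unicodedata

text = "compañera"
nfd = unicodedata.normalize("NFD", text)  # 'ñ' becomes 'n' + a combining tilde
# dropping the combining marks ("Mn") is what loses the accent information
stripped = "".join(c for c in nfd if unicodedata.category(c) != "Mn")
print(stripped)  # companera -- wordpiece then splits this accent-less string
```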
12-26-2019 17:22:47
12-26-2019 17:22:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=h1) Report > Merging [#2333](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/77b0a385ffac5964030d08b1c3611b61370b1918?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2333/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2333 +/- ## ========================================== + Coverage 74.67% 74.67% +<.01% ========================================== Files 87 87 Lines 14800 14802 +2 ========================================== + Hits 11052 11054 +2 Misses 3748 3748 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.64% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.41% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.26% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `88.35% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.12% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.83% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.82% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <ø> (ø)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.6% <100%> (+0.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=footer). Last update [77b0a38...386a104](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). 
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,332
closed
What does 'output of the embeddings' mean?
Hello, According to Hugging Face Transformers documentation, (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel) the transformer's output ```hidden_state``` is defined as the following: ``` hidden_states: (optional, returned when config.output_hidden_states=True) list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs. ``` I am a bit confused by the statement ```list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) ```. Does the ```output of the embeddings``` at the end of the statement refer to 'output of the uppermost output layer'? Thank you,
12-26-2019 16:53:34
12-26-2019 16:53:34
The output of the embeddings is the sum of the token embeddings + the segment embeddings + the position embeddings. This value is the value that will be fed to the first layer of the transformer.<|||||>@LysandreJik Hello, Thank you very much for your reply. So according to the Hugging Face Transformer documentation for the ```GPT2DoubleHeadsModel``` (under the 'output' section) ``` hidden_states: (optional, returned when config.output_hidden_states=True) list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) ``` So in this case, would the first ```hidden_states``` tensor (index of 0) that is returned be the output of the embeddings, or would the very last ```hidden_states``` tensor that is returned be the output of the embeddings? I am confused about the order in which the ```hidden_states``` tensors are returned, because the documentation seem to indicate that the output of the embeddings is the last ```hidden_state``` tensor that is returned. Thank you,<|||||>Indeed, the documentation might be misleading in that regard. The first value is the embedding output, every following value is the result of the preceding value being passed through an additional layer. I'll update the documentation shortly.<|||||>I remain confused by this and will be posting on the Disqus.
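For anyone still unsure about the ordering, a small sketch that makes it concrete (using `GPT2Model` for brevity; `GPT2DoubleHeadsModel` orders its `hidden_states` the same way, and the exact position of `hidden_states` in the output tuple can differ between library versions — here it is the last element because attentions are not requested):

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

input_ids = torch.tensor([tokenizer.encode("Hello world")])
with torch.no_grad():
    outputs = model(input_ids)

hidden_states = outputs[-1]     # tuple with config.n_layer + 1 tensors
print(len(hidden_states))       # 13 for gpt2: embedding output + 12 blocks
print(hidden_states[0].shape)   # index 0: the embedding output
print(hidden_states[-1].shape)  # index -1: the output of the last block
```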
transformers
2,331
closed
Learning Rate is not being updated by the Scheduler
Hello, Outside of the training function, I set: ```python # define the hyperparameters for running the train function. optimizer_ch2 = AdamW(model_ch2.parameters(), lr = lr, correct_bias = True) scheduler_ch2 = get_linear_schedule_with_warmup(optimizer = optimizer_ch2, num_warmup_steps = 200, num_training_steps = 1000, last_epoch = -1) ``` and here is my train function: ```python def train_lm_head(model, train_iter, optimizer, scheduler, log_interval, pad_index): # turn on a training mode model.train() # initialize total_loss to 0 total_loss = 0 for batch_index, batch in enumerate(train_iter): input_ids = [instance for instance in batch.text] ## NOTE: Positions embeddings can be automatically created by the GPT2DoubleHeadsModel as (0, 1, ..., N) # set the gradient back to 0 (necessary step) optimizer.zero_grad() # notice here that we are only placing lm_labels # as mc_label is unnecessary for language modelling purpose. lm_labels = [-1] + input_ids[:(len(input_ids)-1)] lm_labels = torch.tensor([lm_labels], dtype=torch.long) input_ids = torch.tensor([input_ids], dtype=torch.long) output = model(input_ids, lm_labels = lm_labels) loss = output[0] # 'loss' here is the cross entropy. # recall: 'input_ids' is defined above. # calculate gradient by backwarding the loss # calculate gradient of the loss w.r.t weights loss.backward() # clips norm of the gradient of an iterable of parameters. # The norm is computed over all gradients together, as if they were # concatenated into a single vector. Gradients are modified in-place. # so basically just normalizes the weights and returns them. torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5) optimizer.step() # update the weights by following the WarmupLinearSchedule for the lr. scheduler.step() # update the learning rate # update the with the calculated loss total_loss = total_loss + loss # python format: 's' for string, 'd' to display decimal integers (10-base), and 'f' for floats. # ex: print("Sammy ate {0:.3f} percent of a pizza!".format(75.765367)) # >> Sammy ate 75.765 percent of a pizza! # print("Sammy ate {0:f} percent of a {1}!".format(75, "pizza")) # >> Sammy ate 75.000000 percent of a pizza! # # Below is good enough since we are doing the Stochastic Gradient Descent. # (i.e. 1 batch = 1 sample) if batch_index % log_interval == 0 and batch_index > 0: print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.2f} |'.format( epoch, batch_index, len(train_iter), scheduler.get_lr()[0])) total_loss = 0 ``` and when I iterate the train function above for 5 epoch, I am getting the following output: ```{python} # ... | epoch 1 | 138/ 4957 batches | lr 0.00 | | epoch 1 | 139/ 4957 batches | lr 0.00 | | epoch 1 | 140/ 4957 batches | lr 0.00 | | epoch 1 | 141/ 4957 batches | lr 0.00 | | epoch 1 | 142/ 4957 batches | lr 0.00 | | epoch 1 | 143/ 4957 batches | lr 0.00 | | epoch 1 | 144/ 4957 batches | lr 0.00 | | epoch 1 | 145/ 4957 batches | lr 0.00 | | epoch 1 | 146/ 4957 batches | lr 0.00 | | epoch 1 | 147/ 4957 batches | lr 0.00 | | epoch 1 | 148/ 4957 batches | lr 0.00 | | epoch 1 | 149/ 4957 batches | lr 0.00 | | epoch 1 | 150/ 4957 batches | lr 0.00 | | epoch 1 | 151/ 4957 batches | lr 0.00 | | epoch 1 | 152/ 4957 batches | lr 0.00 | #... list goes on ``` I am a bit concerned about this output because the learning rate does not seem to be changing, although I have specified in my train function ```scheduler.step()```, right underneath the ```optimizer.step()```. What am I doing wrong here? Thank you,
12-26-2019 16:49:16
12-26-2019 16:49:16
`{:02.2f}` does not give enough precision to display the learning rate: with a typical fine-tuning learning rate (on the order of 1e-5) and a 200-step warmup, the values round to 0.00, so the scheduler is most likely stepping correctly — it is only the print format that hides it. Use a scientific-notation format such as `{:.2e}` instead.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
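A minimal drop-in replacement for the print statement in the training loop above (it reuses the `scheduler`, `epoch`, `batch_index` and `train_iter` names from that snippet; newer PyTorch versions prefer `scheduler.get_last_lr()` over `scheduler.get_lr()`):

```python
current_lr = scheduler.get_lr()[0]  # or scheduler.get_last_lr()[0] on newer PyTorch
print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:.2e} |'.format(
    epoch, batch_index, len(train_iter), current_lr))
```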
transformers
2,330
closed
BERT adapted to time series
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Is there a better way of modifying BERT to take time series as input (i.e. numerical data instead of text) than editing my local library to skip the word embedding? If not, what is the easiest way to do the latter? Thanks!
12-26-2019 16:43:13
12-26-2019 16:43:13
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Can this issue be opened again? I reckon there is a need to discuss this possibility.<|||||>@jbechara / @MJimitater : Hello! I happened to stumble upon this issue earlier this week. We have a paper (with code and a new dataset), which is to appear in ICASSP '21, where we propose to model multivariate time series datasets through BERT and GPT-2. Please give it a try to see if it serves your purpose! Paper: https://arxiv.org/abs/2011.01843 Code: https://github.com/IBM/TabFormer
transformers
2,329
closed
refactoring the code
Code formatting, following the PEP 8 conventions.
12-26-2019 14:26:41
12-26-2019 14:26:41
We already have a `make style` command which automates formatting (with a setup that we chose). Thanks for your contribution, closing this issue now.
transformers
2,328
closed
Refactoring the code
Making the code formatting appropriate
12-26-2019 13:51:51
12-26-2019 13:51:51
transformers
2,327
closed
load_and_cache_examples crashes on windows
## 🐛 Bug <!-- Important information --> Model I am using: ALBERT Language I am using the model on: English The problem arises when using: [examples/run_squad.py] the official example script (running evaluation for an offline model). It crashes in "load_and_cache_examples" for paths in Windows format, because the split is done using ('/'); for Windows it needs to be ('\\\\'). The task I am working on is: SQuAD ## To Reproduce Steps to reproduce the behavior: 1. Get one model cached into a local folder by running evaluation for SQuAD 2.0 using run_squad.py. This downloads the model to the system. 2. Run evaluation again with model_name_or_path set to a local relative path containing "..\\". <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Windows * Python version: Python 3.6 * PyTorch version: torch 1.3.1 * PyTorch Transformers version (or branch): Latest ## Additional context Not needed
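A hedged sketch of a platform-independent way to derive the model name from the path instead of splitting on '/' (the path and variable names are illustrative, not the exact upstream fix; `os.path` on Windows handles both separators):

```python
import os

model_name_or_path = r"..\models\albert-base-v2"  # hypothetical local path
# On Windows, os.path.normpath / basename accept both "/" and "\",
# unlike model_name_or_path.split("/")
model_name = os.path.basename(os.path.normpath(model_name_or_path))
print(model_name)  # albert-base-v2
```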
12-26-2019 13:39:19
12-26-2019 13:39:19
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,326
closed
run_generation.py gives TypeError when using xlnet due to empty dict being passed as token
## 🐛 Bug When I run ``` python run_generation.py --model_type=xlnet --model_name_or_path=xlnet-large-cased ``` I get the following error ``` Traceback (most recent call last): File "run_generation.py", line 236, in <module> main() File "run_generation.py", line 214, in main encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt") File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 820, in encode **kwargs File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 912, in encode_plus first_ids = get_input_ids(text) File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 904, in get_input_ids return self.convert_tokens_to_ids(text) File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 751, in convert_tokens_to_ids ids.append(self._convert_token_to_id_with_added_voc(token)) File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 758, in _convert_token_to_id_with_added_voc if token in self.added_tokens_encoder: TypeError: unhashable type: 'dict' ``` I think the problem lies in the following code in run_generation.py: ``` def prepare_xlnet_input(args, _, tokenizer, prompt_text): prompt_text = (args.padding_text if args.padding_text else PADDING_TEXT) + prompt_text return prompt_text, {} ``` This returns a tuple of (string, dict). As this gets passed down to _convert_token_to_id_with_added_voc(), it will first try to check whether prompt_text is in self.added_tokens_encoder, and then whether {} is in self.added_tokens_encoder (which gives a TypeError, because you cannot check ```{} in list```. I'm not yet sure where the empty dict is supposed to be used, so I can't fix it myself. Would be happy to contribute though. ## Important information Model I am using (Bert, XLNet....): XLNet Language I am using the model on (English, Chinese....): English The problem arise when using: * [x] the official example scripts: run_generation.py * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: NA * [ ] my own task or dataset: NA ## To Reproduce Steps to reproduce the behavior: 1. cd examples 2. python run_generation.py --model_type=xlnet --model_name_or_path=xlnet-large-cased <!-- If you have a code sample, error messages, stack traces, please provide it here as well. 
--> ``` Traceback (most recent call last): File "run_generation.py", line 236, in <module> main() File "run_generation.py", line 214, in main encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt") File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 820, in encode **kwargs File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 912, in encode_plus first_ids = get_input_ids(text) File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 904, in get_input_ids return self.convert_tokens_to_ids(text) File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 751, in convert_tokens_to_ids ids.append(self._convert_token_to_id_with_added_voc(token)) File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 758, in _convert_token_to_id_with_added_voc if token in self.added_tokens_encoder: TypeError: unhashable type: 'dict' ``` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> Running without an error. ## Environment * OS: Mac OS Mojave 10.14.4 * Python version: 3.7.3 * PyTorch version: 1.3.1 * PyTorch Transformers version (or branch): current master (8c67b529f615cc24c46864b8323d2d47a15ccd58) * Using GPU ? No * Distributed of parallel setup ? No * Any other relevant information: NA ## Additional context <!-- Add any other context about the problem here. --> NA
12-26-2019 13:01:22
12-26-2019 13:01:22
I have the same problem, did you find any solutions? @nanne-aben <|||||>No, not really. I removed the empty dictionary, which makes the code run, but the generated text is just kinda bad. GPT2 (in which case {} is not added) creates much better text. So I guess that the {} was added for a reason, but I can't figure out why. Also, when using https://transformer.huggingface.co/ XLNET seems to make good text, so it does seem that something is wrong with simply removing the {}. I'm not sure how to proceed though... Still interested in resolving this though, so please let me know if you find anything! Would be happy to contribute something here. On Wed, Feb 5, 2020 at 2:46 PM pooya khandel <[email protected]> wrote: > I have the same problem, did you find any solutions? > @nanne-aben <https://github.com/nanne-aben> > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/2326?email_source=notifications&email_token=ALOBCX4FHNIAMCNWJMQ3D4DRBK7KRA5CNFSM4J7LJOR2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEK3PAYY#issuecomment-582414435>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ALOBCXYOF27U4R54BY3WZILRBK7KRANCNFSM4J7LJORQ> > . > <|||||>Hi, thank you for opening this issue. I'm fixing this in #2749.<|||||>Awesome, thanks! On Wed, 5 Feb 2020 at 22:19, Lysandre Debut <[email protected]> wrote: > Hi, thank you for opening this issue. I'm fixing this in #2749 > <https://github.com/huggingface/transformers/pull/2749>. > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/2326?email_source=notifications&email_token=ALOBCX3PHAP4E6EUC6VB2SLRBMUNDA5CNFSM4J7LJOR2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEK5ATRI#issuecomment-582617541>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ALOBCX4HMZSVJHAHMQAQUSTRBMUNDANCNFSM4J7LJORQ> > . > <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,325
closed
How to make FP16 quantization on gpt/xl?
How could I fix this error? `ValueError: Message tensorflow.GraphDef exceeds maximum protobuf size of 2GB: 6234365906`
12-26-2019 12:27:49
12-26-2019 12:27:49
Closing this in favor of https://github.com/huggingface/tflite-android-transformers/issues/4
transformers
2,324
closed
Typo in serving.py
12-26-2019 11:21:48
12-26-2019 11:21:48
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=h1) Report > Merging [#2324](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2324/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2324 +/- ## ======================================= Coverage 73.49% 73.49% ======================================= Files 87 87 Lines 14793 14793 ======================================= Hits 10872 10872 Misses 3921 3921 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/2324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0% <ø> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=footer). Last update [aeef482...7211541](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,323
closed
Where does the pre-trained bert model gets cached in my system by default?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I used model_class.from_pretrained('bert-base-uncased') to download and use the model. The next time I use this command, it picks up the model from the cache. But when I go into the cache, I see several files over 400M with long random names. How do I know which one is the bert-base-uncased or distilbert-base-uncased model? Maybe I am looking in the wrong place.
12-26-2019 09:31:29
12-26-2019 09:31:29
AFAIK, the cache folder is hidden. You can download the files manually and the save them to your desired location two files to download is config.json and <model--name>.bin and you can call it through pretrained suppose you wanted to instantiate BERT then do `BertForMaskedLM.from_pretrained(Users/<Your location>/<your folder name>)`<|||||>Each file in the cache comes with a .json file describing what's inside. _This isn't part of transformers' public API and may change at any time in the future._ Anyway, here's how you can locate a specific file: ``` $ cd ~/.cache/torch/transformers $ grep /bert-base-uncased *.json 26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084.json:{"etag": "\"64800d5d8528ce344256daf115d4965e\"", "url": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt"} 4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.bf3b9ea126d8c0001ee8a1e8b92229871d06d36d8808208cc2449280da87785c.json:{"etag": "\"74d4f96fdabdd865cbdbe905cd46c1f1\"", "url": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json"} d667df51ec24c20190f01fb4c20a21debc4c4fc12f7e2f5441ac0a99690e3ee9.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5.json:{"etag": "\"41a0e56472bad33498744818c8b1ef2c-64\"", "url": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-tf_model.h5"} ``` Here, `bert-base-uncased-tf_model.h5` is cached as `d667df51ec24c20190f01fb4c20a21debc4c4fc12f7e2f5441ac0a99690e3ee9.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5`.<|||||>The discussion in #2157 could be useful too.<|||||>Hi! What if I use colab then how can I find the cash file? @aaugustin <|||||>For anyone landed here wondering if one can globally change the cache directory: set `PYTORCH_TRANSFORMERS_CACHE` environment variable in shell before running the python interpreter.<|||||>You can get find it the same way transformers do it: from transformers.file_utils import hf_bucket_url, cached_path pretrained_model_name = 'DeepPavlov/rubert-base-cased' archive_file = hf_bucket_url( pretrained_model_name, filename='pytorch_model.bin', use_cdn=True, ) resolved_archive_file = cached_path(archive_file) <|||||>For me huggingface changed the default cache folder to: ``` ~/.cache/huggingface/transformers ```<|||||>> You can get find it the same way transformers do it: > > ``` > from transformers.file_utils import hf_bucket_url, cached_path > pretrained_model_name = 'DeepPavlov/rubert-base-cased' > archive_file = hf_bucket_url( > pretrained_model_name, > filename='pytorch_model.bin', > use_cdn=True, > ) > resolved_archive_file = cached_path(archive_file) > ``` Thank you, this worked for me! Note that I had to remove the `use_cdn` option. Additionally, it does not seem to tell you where the `vocab.txt` and other files are located
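As a small addition to the approaches above, the default cache location can also be read programmatically (constant name as exposed by `transformers.file_utils` in the versions discussed in this thread; this works on Colab as well, and the location can be overridden with the cache environment variables mentioned earlier):

```python
from transformers.file_utils import TRANSFORMERS_CACHE

# e.g. ~/.cache/torch/transformers on older releases,
# ~/.cache/huggingface/transformers on newer ones
print(TRANSFORMERS_CACHE)
```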
transformers
2,322
closed
I am getting repetitive output when running "python run_generation.py"
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Here is the command I used to run the code: python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2 --length 100 Here is the input and output I got: Model prompt >>> nice to meet you nice to meet you. "I'm sorry, but I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet! I have tried different inputs but the output is always repeated.
12-26-2019 07:46:52
12-26-2019 07:46:52
I guess you can tune the model for better results, e.g. by selecting a medium or large GPT-2 model and changing the temperature and top-p to get different predictions. If you're new, try using [write with transformer](https://transformer.huggingface.co/doc/gpt2-large) to get an idea about it.<|||||>You could add a `repetition_penalty`. Running python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2 --length=100 --repetition_penalty=1.2 would give: ``` Model prompt >>> nice to meet you nice to meet you. I'm sorry, but I don't know what's going on here." She said with a smile that made me feel like she was trying hard to be nice and not mean or anything… "You're just saying it because we've been together for so long…" Her voice sounded very serious as if someone had asked her about the past couple of days before they'd met up in person at all! ``` You would need a very up-to-date version of transformers to make sure that the PR #2303 is included in your code to be sure that the `repetition_penalty` is working correctly. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
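The same idea through the Python API rather than the example script — a hedged sketch (the sampling hyper-parameters are illustrative, and `model.generate` with `repetition_penalty` may require a newer release than the one current when this issue was opened, i.e. one that includes PR #2303):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("nice to meet you", return_tensors="pt")
output = model.generate(input_ids, max_length=100, do_sample=True,
                        top_p=0.9, repetition_penalty=1.2)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```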
transformers
2,321
closed
Bert Decoder using is_decoder and encoder_hidden_states
``` import torch from transformers import BertTokenizer, BertModel, BertForMaskedLM # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') text = "[CLS] For an unfamiliar eye, the Porsche Cayenne and the Cayenne Coupe would look similar" tokenized_text = tokenizer.tokenize(text) # Mask a token that we will try to predict back with `BertForMaskedLM` masked_index = 3 tokenized_text[masked_index] = '[MASK]' print(tokenized_text) # Convert token to vocabulary indices indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) tokens = tokenizer.convert_ids_to_tokens(indexed_tokens) string = tokenizer.convert_tokens_to_string(tokens) # # Define sentence A and B indices associated to 1st and 2nd sentences (see paper) segments_ids = [0 for x in range(len(tokenized_text))] # # # Convert inputs to PyTorch tensors tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) # model = BertForMaskedLM.from_pretrained('bert-base-uncased', is_decoder=True) model.eval() # # # Predict all tokens with torch.no_grad(): outputs = model(tokens_tensor, token_type_ids=segments_tensors, tokens=tokenized_text, encoder_hidden_states=tokens_tensor) predictions = outputs[0] print('state_dict',len(model.state_dict())) predicted_indices = [] # # confirm we were able to predict 'henson' for i in range(len(tokenized_text)): predicted_indices.append(torch.argmax(predictions[0, i]).item()) # predicted_index = torch.argmax(predictions[0, masked_index]).item() predicted_token = tokenizer.convert_ids_to_tokens(predicted_indices)[0] print('indexed_tokens', indexed_tokens) print('predicted_indices', predicted_indices) predicted_text = tokenizer.decode(predicted_indices) print(predicted_text) ``` In `modeling_bert` it's mentioned ``` To behave as an decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`; an `encoder_hidden_states` is expected as an input to the forward pass. ``` So i did the same in my code but i get 2 error saying `INFO:transformers.modeling_utils:Weights of BertForMaskedLM not initialized from pretrained model: ['bert.encoder.layer.0.crossattention.self.query.weight` and ``` File "/Volumes/Data/transformers-master/transformers/modeling_bert.py", line 679, in forward extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :] RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Bool ``` Am i missing something or is this the wrong way to configure bert decoder? In General, i'd like to know how encoder-decoder transformer work in BERT
12-26-2019 06:53:34
12-26-2019 06:53:34
Hi, you're initializing a decoder but you're using it as an encoder. For the task you're showing here, you only need the encoder part, no need to initialize a decoder: ```py model = BertForMaskedLM.from_pretrained('bert-base-uncased') model.eval() # # # Predict all tokens with torch.no_grad(): outputs = model(tokens_tensor, token_type_ids=segments_tensors) predictions = outputs[0] ``` You can see an example of the Model2Model architecture (encoder-decoder) based on BERT in the [quickstart section of the documentation.](https://huggingface.co/transformers/quickstart.html#model2model-example)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi @LysandreJik , I intend to use Bert with a generative head. Can you give an example of using bert with is_decoder as True?
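A rough sketch of wiring BERT up as a decoder over encoder states, as asked above (hedged: the cross-attention weights are newly initialized, so this only becomes useful after fine-tuning; recent versions also want `config.add_cross_attention = True`, which is harmlessly ignored by older ones; and the release contemporary with this thread had the causal-mask dtype error shown in the traceback above, so a later release is assumed):

```python
import torch
from transformers import BertConfig, BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# encoder: a plain BERT
encoder = BertModel.from_pretrained("bert-base-uncased")

# decoder: same pretrained weights, but configured as a decoder
config = BertConfig.from_pretrained("bert-base-uncased")
config.is_decoder = True
config.add_cross_attention = True  # needed on recent versions, ignored on old ones
decoder = BertModel.from_pretrained("bert-base-uncased", config=config)

src = torch.tensor([tokenizer.encode("the source sentence")])
tgt = torch.tensor([tokenizer.encode("the target so far")])

encoder_hidden_states = encoder(src)[0]  # (1, src_len, 768)
decoder_out = decoder(tgt, encoder_hidden_states=encoder_hidden_states)
print(decoder_out[0].shape)              # (1, tgt_len, 768)
```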
transformers
2,320
closed
how to do a simple multi-classifier by bert 2.0, training set, and label set all lines
how to do a simple multi-classifier by bert 2.0, training set, and label set all lines
12-26-2019 03:34:55
12-26-2019 03:34:55
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,319
closed
help: couldn't find such vocabulary files at this path or url
I want to load the pre-trained Chinese RoBERTa model. When I use RobertaModel.from_pretrained() to load the pre-trained model, it doesn't work. <img width="1108" alt="屏幕快照 2019-12-25 下午11 56 11" src="https://user-images.githubusercontent.com/25845940/71454755-21f98100-27cd-11ea-8d0d-37beed6cc235.png"> <img width="1058" alt="屏幕快照 2019-12-25 下午11 56 17" src="https://user-images.githubusercontent.com/25845940/71455892-02b12280-27d2-11ea-98d6-ac1b2bd45901.png"> The Chinese RoBERTa model was downloaded from https://github.com/brightmart/roberta_zh. I am not sure whether it is a problem with the pre-trained model or with the transformers framework.
12-26-2019 03:21:04
12-26-2019 03:21:04
Did you manage to solve your issue? (if you did, how?)
transformers
2,318
closed
How can I read my bert model by using transformers?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> When I use pytorch_pretrained_bert, I can load my model like this: from pytorch_pretrained_bert import BertModel, BertTokenizer tokenizer = BertTokenizer.from_pretrained(bert_vocab_path) bert = BertModel.from_pretrained(bert_model_path) When I use transformers, how can I do this?
12-26-2019 01:54:30
12-26-2019 01:54:30
Not entirely sure what your question is, but I guess you need to change `from pytorch_pretrained_bert import BertModel, BertTokenizer` to `from transformers import BertModel, BertTokenizer`. Download the latest version if a module-not-found error occurs.<|||||>Thank you for your reply. I tried `from transformers import BertModel, BertTokenizer`, but it gives me `OSError: Model name 'model/chinese_L-12_H-768_A-12' was not found in model name list`. Does that mean I can't use my local model?<|||||>Well, you can and it should work. First try using just `bert-base-uncased` to check that it's working correctly, or maybe `bert-base-chinese`. If it says not found, it may be an error in your local path, so check that it's the right folder location and name. If nothing works, I guess you may need to fine-tune it.<|||||>I tried and it works, but very slowly, maybe because I'm in China, HAHA. Thank you very much!<|||||>You're welcome. Utilize CUDA if you have GPUs, or try doing it in the cloud. Do close the issue if it seems solved.
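For completeness, a minimal hedged sketch of loading a local checkpoint with the new package name (the path is illustrative; the directory must contain `config.json`, `vocab.txt` and `pytorch_model.bin` — an original TensorFlow checkpoint has to be converted to PyTorch first):

```python
from transformers import BertModel, BertTokenizer

local_dir = "model/chinese_L-12_H-768_A-12"  # hypothetical local folder
tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertModel.from_pretrained(local_dir)
```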
transformers
2,317
closed
Fix beam search when sampling in language generation
I think there is a problem with beam search when setting `do_sample=True`. As it was implemented before, the variable `next_words` in previous line 829 would always contain word ids < `vocab_size`, which forces all `beam_idx` to always be == 0. This way all words would actually always be appended to the `input_ids` of the first beam. In the proposed PR, the words are sampled over the scores of size `(batch_size, num_beams * vocab_size)`, which is similar to what is done in greedy decoding. I tried generating a couple of sequences with the proposed change and it seems to be important that the temperature is set relatively high (~1.5) to avoid repeating words. Not 100% sure whether the proposed PR is the best fix. In general, beam search seems to work better when doing greedy decoding.
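To make the indexing issue concrete, a small self-contained sketch (shapes only, not the library code): once the scores are flattened to `(batch_size, num_beams * vocab_size)`, each sampled id encodes both a beam and a word, so ids that are always `< vocab_size` necessarily map back to beam 0.

```python
import torch

batch_size, num_beams, vocab_size = 1, 3, 50257
scores = torch.randn(batch_size, num_beams * vocab_size)  # flattened beam scores
probs = torch.softmax(scores, dim=-1)
next_tokens = torch.multinomial(probs, num_samples=num_beams)

beam_idx = next_tokens // vocab_size  # stays 0 if sampling only ever saw ids < vocab_size
word_idx = next_tokens % vocab_size
print(beam_idx, word_idx)
```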
12-25-2019 23:21:18
12-25-2019 23:21:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=h1) Report > Merging [#2317](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **not change** coverage. > The diff coverage is `0%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2317/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2317 +/- ## ======================================= Coverage 73.49% 73.49% ======================================= Files 87 87 Lines 14793 14793 ======================================= Hits 10872 10872 Misses 3921 3921 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2317/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.45% <0%> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=footer). Last update [aeef482...af1ca72](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good point but we shouldn't be sampling like that indeed, we should be sampling independently for each beam.
transformers
2,316
closed
Delete [dev] behind pip install -e .
I might be wrong here, but I think it should simply be ```bash $ pip install -e . ``` without the [dev] When executing ```bash $ pip install -e .[dev] ``` in my terminal I get the error: `no matches found: .[dev]`
12-25-2019 22:54:33
12-25-2019 22:54:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=h1) Report > Merging [#2316](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2316/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2316 +/- ## ======================================= Coverage 73.49% 73.49% ======================================= Files 87 87 Lines 14793 14793 ======================================= Hits 10872 10872 Misses 3921 3921 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=footer). Last update [aeef482...73511e8](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>`[dev]` is there to install the development dependencies. What shell are you using? Does `pip install -e ".[dev]"` or `pip install -e .\[dev\]` work? <|||||>We're probably going to modify the syntax for shell scripts. When we do this, we should modify it throughout the repository, because there are a bunch of other instances of this.<|||||>I see! I was using the zsh shell. `pip install -e ".[dev]"` and `pip install -e .\[dev\]` both work with zsh shell. When switching to the bash shell `pip install -e .[dev]` works as well.<|||||>I'm using zsh as well, but I must have enabled an option that makes the unquoted syntax work. I'm going to fix the instructions to prevent others from hitting the same problem.<|||||>Thanks for the report!
transformers
2,315
closed
Add hint to install pytest-xdist
Just a small hint that pytest-xdist should be installed before running the make test step
12-25-2019 22:50:31
12-25-2019 22:50:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=h1) Report > Merging [#2315](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2315/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2315 +/- ## ======================================= Coverage 73.49% 73.49% ======================================= Files 87 87 Lines 14793 14793 ======================================= Hits 10872 10872 Misses 3921 3921 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=footer). Last update [aeef482...4c48701](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>They're installed by `pip install -e .[dev]`. You don't have them because you modified that step. Let's discuss on #2316.
transformers
2,314
closed
Is there a uncased gpt2?
## ❓ Questions & Help Hi, thanks for everything. Quick question: Is there a pre-trained uncased gpt2, like bert-uncased?
12-25-2019 21:42:18
12-25-2019 21:42:18
Hi, all the available models are listed in the [pretrained models section of the documentation](https://huggingface.co/transformers/pretrained_models.html). For GPT-2, there are four different models (`gpt2`, `gpt2-medium`, `gpt2-large`, `gpt2-xl`), which are all cased.
transformers
2,313
closed
Add dropout to WordpieceTokenizer and BPE
We can add dropout not only to model weights but also to the tokenizer. The paper by Ivan Provilkov et al. (2019, https://arxiv.org/pdf/1910.13267.pdf) describes the benefits of this approach and shows that it's almost always better to use dropout during tokenization (use it only for training; for inference the dropout should be equal to 0). Example: ``` import transformers tokenizer = transformers.RobertaTokenizer.from_pretrained("roberta-base") tokenizer.tokenize("Dropout is very important") # default value is 0 # ['Drop', 'out', 'Ġis', 'Ġvery', 'Ġimportant'] tokenizer.tokenize("Dropout is very important", dropout=0.1) # ['Drop', 'out', 'Ġis', 'Ġvery', 'Ġimport', 'ant'] ```
12-25-2019 17:33:46
12-25-2019 17:33:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=h1) Report > Merging [#2313](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81db12c3ba0c2067f43c4a63edf5e45f54161042?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `84%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2313/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2313 +/- ## ========================================== - Coverage 73.54% 73.53% -0.01% ========================================== Files 87 87 Lines 14789 14796 +7 ========================================== + Hits 10876 10880 +4 - Misses 3913 3916 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.58% <100%> (ø)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.13% <66.66%> (-1.22%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=footer). Last update [81db12c...7472a38](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi i notice one possible issue in your code. you use `random.random() > dropout`. However, according to `Docstring: random() -> x in the interval [0, 1).` So even with droput=0, the bpe output is not deterministic.<|||||>Also, according to orig paper, dropout=1 should output char sequence. but following unchanged code snippet will make it output the raw input ```python3 if not pairs: return token ```<|||||>and `self.cache` should be updated iff droput=0<|||||>> Hi i notice one possible issue in your code. > you use `random.random() > dropout`. > However, according to `Docstring: random() -> x in the interval [0, 1).` > So even with droput=0, the bpe output is not deterministic. thanks, fixed<|||||>> Also, according to orig paper, dropout=1 should output char sequence. > but following unchanged code snippet will make it output the raw input > > ```python > if not pairs: > return token > ``` fixed this too, thanks for pointing<|||||>> and `self.cache` should be updated iff droput=0 also fixed, thanks<|||||>> > Also, according to orig paper, dropout=1 should output char sequence. > > but following unchanged code snippet will make it output the raw input > > ```python > > if not pairs: > > return token > > ``` > > fixed this too, thanks for pointing This issue is not really fixed, the exact corner case is all merges are dropped at the beginning, not limited to dropout=1. 
I think the correct fix is replace ```python if dropout != 1: pairs = [pair for pair in get_pairs(word) if random.random() >= dropout and pair in self.bpe_ranks] else: # we should merge space byte with first token char new_word = [] token_index = 0 while token_index < len(token): if token[token_index] != self.byte_encoder[32]: new_word.append(token[token_index]) token_index += 1 else: new_word.append(token[token_index : token_index + 2]) token_index += 2 return " ".join(new_word) if not pairs: return token while True: ``` with ```python pairs = [pair for pair in get_pairs(word) if random.random() >= dropout and pair in self.bpe_ranks] while pairs: ```<|||||>> > > Also, according to orig paper, dropout=1 should output char sequence. > > > but following unchanged code snippet will make it output the raw input > > > ```python > > > if not pairs: > > > return token > > > ``` > > > > > > fixed this too, thanks for pointing > > This issue is not really fixed, the exact corner case is all merges are dropped at the beginning, not limited to dropout=1. > I think the correct fix is replace > > ```python > if dropout != 1: > pairs = [pair for pair in get_pairs(word) if random.random() >= dropout and pair in self.bpe_ranks] > else: > # we should merge space byte with first token char > new_word = [] > token_index = 0 > while token_index < len(token): > if token[token_index] != self.byte_encoder[32]: > new_word.append(token[token_index]) > token_index += 1 > else: > new_word.append(token[token_index : token_index + 2]) > token_index += 2 > > return " ".join(new_word) > > > if not pairs: > return token > > > while True: > ``` > > with > > ```python > pairs = [pair for pair in get_pairs(word) if random.random() >= dropout and pair in self.bpe_ranks] > > while pairs: > ``` understood your point. simplified code, thanks<|||||>I have one advice that replace the usage of `random.random() >= dropout` with `dropout == 0 or dropout < 1 and dropout <= random.random()`, utilizing the short-circuit operator to prevent consuming unnecessary random number. Otherwise, this side effects may cause existing result rely on `random` unrepeatable.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@vitaliyradchenko good idea to add this feature. are you planning to add the suggestion of @boy2000-007man https://github.com/huggingface/transformers/pull/2313#issuecomment-573192073 and re-fresh this PR so that it will finally get merged?
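For readers following the corner-case discussion, here is a toy, self-contained sketch of a merge loop with dropout. It is not the GPT-2 implementation (no byte-level space handling, no cache), but it shows why ending the loop as soon as no pairs survive also covers the `dropout == 1` case:

```python
import random

def bpe_dropout(word, merge_ranks, dropout=0.1):
    symbols = list(word)
    while True:
        # candidate merges; each one is dropped with probability `dropout`
        pairs = [(i, (symbols[i], symbols[i + 1]))
                 for i in range(len(symbols) - 1)
                 if (symbols[i], symbols[i + 1]) in merge_ranks
                 and (dropout == 0 or random.random() >= dropout)]
        if not pairs:  # dropout == 1 lands here immediately -> character sequence
            return symbols
        i, pair = min(pairs, key=lambda p: merge_ranks[p[1]])  # best-ranked survivor
        symbols[i:i + 2] = ["".join(pair)]

ranks = {("l", "o"): 0, ("lo", "w"): 1}
print(bpe_dropout("low", ranks, dropout=0.0))  # ['low']
print(bpe_dropout("low", ranks, dropout=1.0))  # ['l', 'o', 'w']
```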
transformers
2,312
closed
Correct tokenization for special and added tokens
When a tokenizer is being loaded with `PreTrainedTokenizer._from_pretrained`, it should set `added_tokens` and `all_special_tokens` to `unique_added_tokens_encoder`. If we don't do it, it will corrupt the tokenization. Example: ``` import transformers tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-uncased") tokenizer.tokenize("[CLS] token should not be splitted.") # correct output # ['[CLS]', 'token', 'should', 'not', 'be', 'split', '##ted', '.'] # incorrect output # ['[', '[UNK]', ']', 'token', 'should', 'not', 'be', 'split', '##ted', '.'] ```
12-25-2019 16:36:29
12-25-2019 16:36:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=h1) Report > Merging [#2312](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cea04a244351a7c5bce44e1cfc01abde0ceb60fd?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2312/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2312 +/- ## ========================================== + Coverage 73.54% 73.54% +<.01% ========================================== Files 87 87 Lines 14789 14791 +2 ========================================== + Hits 10876 10878 +2 Misses 3913 3913 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.11% <100%> (+0.03%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.91% <0%> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=footer). Last update [cea04a2...b262577](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This is awesome, thanks a lot @vitaliyradchenko
transformers
2,311
closed
Can I use BERT / gpt-2 for text generation
## ❓ Questions & Help
I want to get a list of possible completions and their probabilities. For example, for the sentence "I put the glass of the _" I want to get a vector of words and their probabilities from a pre-trained model, such as:
desk = 0.1
table = 0.2
car = 0.05
shirt = 0.001
Is that possible?
12-25-2019 13:31:38
12-25-2019 13:31:38
You could do something like this when using gpt2 ``` from transformers import GPT2LMHeadModel, GPT2Tokenizer from torch.nn import functional as F import torch model = GPT2LMHeadModel.from_pretrained('gpt2-medium') tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') # encode input context input_ids = torch.tensor(tokenizer.encode('I put the glass of the')).unsqueeze(0) # get logits of last predicted token next_word_logits = model(input_ids)[0][0, -1].detach() next_word_probs = F.softmax(next_word_logits, dim=0) next_words = ['desk', 'table', 'car', 'shirt'] next_words_probs = [] # encode tokens for which prob is to be estimated next_word_ids = [tokenizer.encode(next_word) for next_word in next_words] for next_word_id in next_word_ids: next_word_input_ids = input_ids.clone() next_word_prob = next_word_probs[next_word_id[0]].item() # We need a while loop here because a single word can be composed of multiple tokens # 'desk' is encoded to 2 tokens so that we have to call the model another time while(len(next_word_id) > 1): next_word_input_ids = torch.cat((next_word_input_ids, torch.tensor([next_word_id[0]]).unsqueeze(0)), dim=1) # get logits of last predicted token next_word_logits = model(next_word_input_ids)[0][0, -1].detach() # multiply prob of next token to prob of previous tokens next_word_prob *= F.softmax(next_word_logits, dim=0)[next_word_id[1]].item() # remove first token since already used next_word_id = next_word_id[1:] next_words_probs.append(next_word_prob) # print result for next_word, next_word_prob in zip(next_words, next_words_probs): print('{} = {}'.format(next_word, next_word_prob)) ``` <|||||>Yes it is possible u need to take the topk of lm_logits (it will be output[0] in case of gpt)which essentially gives to 50257 probabilities (highest to lowest) which is the vocab size then you need to take top k which gives indices and values, values are nothing but urs scores(0.8, 0.1) and the indices which correspond to the 50257 vocabulary words which u can decode using tokenize decode.<|||||>@patrickvonplaten Amazing thanks! And if I want the rank of these words from all the word in the vocab? e.g. desk is the most probable word , table in #12 , etc. ?<|||||>Since GPT-2's output is based on byte-pair-encoding tokens and not on words you would have to define your own vocabulary. Having defined your vocabulary, I would simply calculate the probability for each word using the above procedure and then sort the tensor. To better understand how byte-pair-encoding works [this](https://leimao.github.io/blog/Byte-Pair-Encoding/) might help. To sort the tensor [this](https://stackoverflow.com/questions/56176439/pytorch-argsort-ordered-with-duplicate-elements-in-the-tensor) might help.<|||||>@patrickvonplaten Thanks, you think it will be possible to do it for all (or at least most) of the words in English in my personal MAC?<|||||>Yeah, I think that should definitely be feasible. Many words will consists of two tokens or less and will therefore need at most one additional forward pass (because the first forward pass is the same for all words and need to be calculated only once). So if you have a vocabulary of say 300.000 words, I'd estimate that you would have to compute around 200.000 forward passes. You can calculate how much time a forward pass would take by averaging the computation time for 100 times calculating the probability for the word 'desk'. 
Concerning memory, there should not be a problem.<|||||>And the final vector giving the probabilities over your defined vocabulary should be normalized to make a prob distribution.<|||||>@patrickvonplaten You mean using softmax?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I was thinking to just normalize like this: https://stackoverflow.com/questions/26785354/normalizing-a-list-of-numbers-in-python but you could also use softmax again - depends on what you want and what works better for you! <|||||>@patrickvonplaten is it possible with BERT pre-trained model? Thanks!<|||||>You might take a look at masked language modeling :-) https://huggingface.co/transformers/usage.html#masked-language-modeling<|||||>@patrickvonplaten Nice! Thanks for the pointer! And let's say I want to check a specific word in a masked location (What is the probability of the word "`package` " in the sequence "`HuggingFace is creating a { } that the community uses to`"? Is this possible?
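For the last question, a minimal sketch of how one might read off the probability of a specific word at a masked position with a BERT masked-LM head. It assumes the candidate word ("package" here) maps to a single wordpiece; multi-piece words would need the chaining trick shown earlier in this thread.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

text = "HuggingFace is creating a {} that the community uses".format(tokenizer.mask_token)
input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=True)])

# locate the masked position
mask_id = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
mask_index = (input_ids[0] == mask_id).nonzero()[0].item()

with torch.no_grad():
    logits = model(input_ids)[0]                 # (1, seq_len, vocab_size)
probs = torch.softmax(logits[0, mask_index], dim=-1)

candidate_id = tokenizer.convert_tokens_to_ids('package')  # assumed single wordpiece
print('package = {:.4f}'.format(probs[candidate_id].item()))
```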
transformers
2,310
closed
revert erroneous fix #2276
I based #2276 on having an error pop up on an older pytorch version, and also on the erroneous (current!) documentation for pytorch.Tensor.scatter(): > `scatter(dim, index, source)` → Tensor > > Out-of-place version of torch.Tensor.scatter_() > > `scatter_(dim, index, src)` → Tensor > ... The argument was called `source`, inconsistently, in the version I was using, but somewhere along the way it went back to being `src` without the docs changing, which caused this confusion...
12-25-2019 06:30:18
12-25-2019 06:30:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=h1) Report > Merging [#2310](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81db12c3ba0c2067f43c4a63edf5e45f54161042?src=pr&el=desc) will **not change** coverage. > The diff coverage is `0%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2310/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2310 +/- ## ======================================= Coverage 73.54% 73.54% ======================================= Files 87 87 Lines 14789 14789 ======================================= Hits 10876 10876 Misses 3913 3913 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.91% <0%> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=footer). Last update [81db12c...e1844d9](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi, probably best to just use positional arguments here (instead of keywords) then, don't you think?<|||||>Great, thanks!
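For readers hitting the same keyword mismatch, a tiny sketch of the positional-argument form suggested above; the tensors are toy values chosen only for illustration:

```python
import torch

x = torch.zeros(3, 5)
index = torch.tensor([[0, 1, 2, 0, 1]])
src = torch.ones(1, 5)

# positional arguments sidestep the src/source keyword naming difference
# between PyTorch versions
out = x.scatter(0, index, src)
print(out)
```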
transformers
2,309
closed
Bug: Tokenization of Special Tokens
## 🐛 Bug The commit https://github.com/huggingface/transformers/commit/deceb001616995199a6a5dca866ffec95c3ebe74 introduces a bug in the tokenization of special tokens when using `from_pretrained` to initialize the tokenizer. ``` from transformers import AutoTokenizer bert_tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') bert_text = "[CLS] An example for [MASK] change. [SEP]" bert_tokenizer.tokenize(bert_text) # Before: ['[CLS]', 'an', 'example', 'for', '[MASK]', 'change', '.', '[SEP]'] # After: ['[', '[UNK]', ']', 'an', 'example', 'for', '[', '[UNK]', ']', 'change', '.', '[', '[UNK]', ']'] roberta_tokenizer = AutoTokenizer.from_pretrained('roberta-base') roberta_text = "<s> An example for <mask> change. </s>" roberta_tokenizer.tokenize(roberta_text) # Before: ['<s>', 'An', 'Ġexample', 'Ġfor', '<mask>', 'change', '.', '</s>'] # After: ['<', 's', '>', 'ĠAn', 'Ġexample', 'Ġfor', 'Ġ<', 'mask', '>', 'Ġchange', '.', 'Ġ</', 's', '>'] ```` Fixed by https://github.com/huggingface/transformers/pull/2312.
12-25-2019 04:47:16
12-25-2019 04:47:16
transformers
2,308
closed
pytorch_pretrained_bert giving different scores for BertForNextSentencePrediction
## ❓ Questions & Help
from pytorch_transformers.modeling_bert import BertForNextSentencePrediction
from pytorch_transformers import BertTokenizer, BertConfig
import torch

# Load pretrained model from local files
config = BertConfig.from_json_file('resources/bert_config.json')
token = BertTokenizer('resources/vocab.txt')
model = BertForNextSentencePrediction.from_pretrained('resources/pytorch_model.bin', config=config)
model.eval()

textA_ids = token.tokenize("How old are you?")
textB_ids = token.tokenize("The Eiffel Tower is in Paris")
text_ids = token.convert_tokens_to_ids(["[CLS]"] + textA_ids + ["[SEP]"] + textB_ids + ["[SEP]"])
segments_ids = [0]*(len(textA_ids)+2) + [1]*(len(textB_ids)+1)
text_inputs = torch.tensor([text_ids])
segments_inputs = torch.tensor([segments_ids])

with torch.no_grad():
    outputs = model(text_inputs, token_type_ids=segments_inputs)
print(outputs)

The outputs change every time I run the code. I followed several suggestions from other issues, but they didn't work. I have used this local pretrained model for many other tasks and this never happened before, so I don't think the model itself causes the problem.

Version information: torch 1.1.0, pytorch-transformers 1.2.0
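As an editorial note, a hedged sketch of how one might narrow this down, reusing the variables from the snippet above: if two forward passes inside one process match but the results still differ across runs, the usual culprit is that some weights (for example a classification head) were freshly, randomly initialized when the checkpoint was loaded.

```python
import torch

torch.manual_seed(0)
model.eval()

with torch.no_grad():
    out1 = model(text_inputs, token_type_ids=segments_inputs)[0]
    out2 = model(text_inputs, token_type_ids=segments_inputs)[0]

# True here but different numbers on every *run* of the script usually
# means some parameters were randomly initialized at load time
print(torch.allclose(out1, out2))
```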
12-25-2019 03:12:11
12-25-2019 03:12:11
transformers
2,307
closed
What's the exact name of BERT large in the results (GermEval 2014)?
## ❓ Questions & Help I use the BERT large cased model downloaded by the run_ner.py script, but I can't reproduce the result in the table below. <!-- A clear and concise description of the question. --> ![image](https://user-images.githubusercontent.com/13817269/71427541-cb207880-26f4-11ea-9b15-5f6f13b6f31f.png)
12-25-2019 01:10:20
12-25-2019 01:10:20
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,306
closed
Non-Deterministic Behavior in BertTokenizer
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): BertTokenizer only Language I am using the model on (English, Chinese....): The problem arise when using: * [ ] the official example scripts: (give details) * [X] my own modified scripts: Jupyter Notebook The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details) tokenize a string ## To Reproduce Jupyter Notebook with one cell, with a cloned version of the transformers repo. ``` import sys sys.path.insert(0, 'transformers') from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', sep_token='[SEP]', do_lower_case=True) tokenizer.tokenize("[PAD] [SEP] [SEP] [PAD]") ``` with outputs (varying between kernel restarts and runs): `['[PAD]', '[', 'sep', ']', '[', 'sep', ']', '[PAD]']` `['[PAD]', '[SEP]', '[SEP]','[PAD]']` `['[PAD]', '[SEP]', '[SEP]', '[', 'pad', ']']` ## Expected behavior Expected the output to be `['[PAD]', '[SEP]', '[SEP]','[PAD]']` and have deterministic behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment Platform Linux-3.10.0-1062.el7.x86_64-x86_64-with-redhat-7.7-Verona Python 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0] PyTorch 1.3.1 Tensorflow 1.15.0 * PyTorch Transformers version (or branch): on master branch of repo * Using GPU ? Yes ## Additional context This happens sometimes when the notebook kernel is restarted and the cell is re-run. I haven't observed this happening when running a python script. <!-- Add any other context about the problem here. -->
12-25-2019 00:45:25
12-25-2019 00:45:25
As an update I've been able to reproduce this problem using a python script<|||||>I cannot reproduce this on Windows, PT1.4, latest transformers master.<|||||>I think I was using an old version of transformers :( This seems to have been fixed in v2.2.2 - After upgrading to latest I haven't observed this anymore so I'll close this issue. Thanks!
transformers
2,305
closed
[CLS] token / is used as the aggregate sequence representation for classification tasks
[CLS] is fed into an output layer for classification. How is this token built? Is there something special done for it during training? In which way does this aggregation of sequences happen? Thanks for the response
12-25-2019 00:27:26
12-25-2019 00:27:26
[CLS] and [SEP] are "special tokens" of BERT. They are included in the vocabulary of size 30522; their ids are small numbers such as 101 and 103.<|||||>Of course these two tokens are special tokens. My question is about **is used as the aggregate sequence representation for classification tasks**: in which way does this aggregation happen? >>The first token of every sequence is always a special classification token ([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks. From the paper https://arxiv.org/pdf/1810.04805.pdf<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
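For reference, a minimal sketch of how the final hidden state of [CLS] can be pulled out of BertModel and fed to a classification layer; the linear head here is a stand-in added for illustration, not part of the pretrained model:

```python
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

input_ids = torch.tensor([tokenizer.encode("a sentence to classify", add_special_tokens=True)])
with torch.no_grad():
    last_hidden_state = model(input_ids)[0]      # (1, seq_len, hidden_size)

cls_vector = last_hidden_state[:, 0]             # hidden state of the [CLS] token
classifier = nn.Linear(cls_vector.size(-1), 2)   # toy 2-class head, randomly initialized
logits = classifier(cls_vector)
print(logits.shape)  # torch.Size([1, 2])
```

During pretraining the [CLS] state is pushed toward a sentence-level summary by the next-sentence-prediction objective, which is why it is commonly used as the aggregate representation when fine-tuning for classification.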
transformers
2,304
closed
Why are you getting just the last encoder states in the summarization code?
The line is here: https://github.com/huggingface/transformers/blob/v2.3.0/examples/summarization/modeling_bertabs.py#L142 By changing the line to `encoder_hidden_states = encoder_output` I was able to fine-tune the model successfully, as well as run the inference code from the `run_summarization.py` script. So I'm just wondering why you're indexing into the encoder output rather than passing all of it along to the decoder?
12-24-2019 22:25:13
12-24-2019 22:25:13
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,303
closed
fix repetition penalty error in modeling_utils.py
fix bug mentioned in #2302
12-24-2019 16:19:18
12-24-2019 16:19:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=h1) Report > Merging [#2303](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81db12c3ba0c2067f43c4a63edf5e45f54161042?src=pr&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `0%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2303/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2303 +/- ## ========================================== - Coverage 73.54% 73.52% -0.02% ========================================== Files 87 87 Lines 14789 14793 +4 ========================================== Hits 10876 10876 - Misses 3913 3917 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2303/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.45% <0%> (-0.46%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=footer). Last update [81db12c...18e5bdb](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good catch. But this is actually the technique mentioned in http://arxiv.org/abs/1909.05858. So to fix it we should check the code of Nitish (https://github.com/salesforce/ctrl) and apply the same behavior here.<|||||>I checked the code in https://github.com/salesforce/ctrl/blob/0f30306a8947ce0ede62e79c7e1f05a585cc56c9/generation.py#L217: `prompt_logits[_token][generated_token] /= penalty` So in the original code division is always used no matter what sign the `prompt_logit` of the previously generated tokens. When going a bit deeper and looking at the actual values of the logit in https://github.com/huggingface/transformers/blob/81db12c3ba0c2067f43c4a63edf5e45f54161042/src/transformers/modeling_utils.py#L731 for different models the following can be observed: For the models: **ctrl**, **xlm** the logit values tend to be positive, which explains why division by the `repetition penalty` is used. BUT, the values don't have to be positive, there were also very rare cases when using **ctrl** where the logit was actually negative in which case a division increases the probability of that word to be sampled. For the models: **gpt2**, **openai-gpt**, **xlnet** the logit values tend to be negative, in which case dividing by a `repetition penalty` increases the probability of previously generated tokens to be sampled. In the proposed PR, both cases would be correctly handled from a logical point of view. If we want to stick to the original code on the other hand (only using division) we could add a warning that the `repetition penalty` should only be used in combination with **ctrl**.<|||||>Ok, I see, thanks for documenting this. Let's go for this solution for now.<|||||>Is this fix added to the pip package? 
So if we use pip install package this will be covered or not yet I have to install from source? <|||||>Reading this after it was mentioned in the PPLM example PR. The fix makes total sense, but I have a concern: the amount by which a negative number is diminished is greater than the amount a positive number is diminished. If we have two values, say -2 and 2 this happens: ``` x = np.array([-2, 2]) sx = np.exp(x)/sum(np.exp(x)) print(sx) # array([0.01798621, 0.98201379]) ``` if we apply the same penalty to both, we would want the probabilities to stay the same, but this is what happens: ``` p = [1/1.2, 1.2] spx = np.exp(x/p)/sum(np.exp(x/p)) print(spx) # array([0.01684577, 0.98315423]) ``` On the other hand, if we apply the penalty to the probabilities after the softmax (and we renormalize) this is what happens: ``` p2 = [1.2, 1.2] sp2x = (sx/p2)/sum(sx/p2) print(sp2x) # array([0.01798621, 0.98201379]) ``` The probabilities are intact, as we want, because we don't want to penalize negative values more than we penalize positive values. So my proposal is to perform the penalty after the softmax, on probability values, always dividing, rather than on the logits. What do you think? Edit: In math i propose to move from: ![CodeCogsEqn](https://user-images.githubusercontent.com/349256/72381454-452c9780-36cc-11ea-931c-c08239a3042b.gif) to: ![CodeCogsEqn (1)](https://user-images.githubusercontent.com/349256/72381469-4a89e200-36cc-11ea-9eac-990f317e121a.gif) <|||||>Sorry for the late response @w4nderlust ! I think you it makes a lot of sense what you are saying! To implement your solution with minimal code change one could simply change Eq. (1): ![CodeCogsEqn (10)](https://user-images.githubusercontent.com/23423619/74723019-2d4fb280-523a-11ea-8dcb-f5d9cda3d176.gif) to the equivalent Eq. (2) ![CodeCogsEqn (11)](https://user-images.githubusercontent.com/23423619/74723112-52dcbc00-523a-11ea-841c-58a6c7347da0.gif) One question that remains is how the new repetition penalties ![CodeCogsEqn (12)](https://user-images.githubusercontent.com/23423619/74723216-828bc400-523a-11ea-8042-05f641abe599.gif) in Eq. (1) & (2) will have to differ from the old repetition penalties ![CodeCogsEqn (13)](https://user-images.githubusercontent.com/23423619/74723282-9cc5a200-523a-11ea-8f73-285fef6c799a.gif) in Eq. (3): ![CodeCogsEqn (8)](https://user-images.githubusercontent.com/23423619/74722928-042f2200-523a-11ea-909d-3eec8464e92c.gif) to have a similar effect on the softmax. It is quite obvious that ![CodeCogsEqn (13)](https://user-images.githubusercontent.com/23423619/74723282-9cc5a200-523a-11ea-8f73-285fef6c799a.gif) reduces the prob of its token much more than ![CodeCogsEqn (12)](https://user-images.githubusercontent.com/23423619/74723216-828bc400-523a-11ea-8042-05f641abe599.gif) For the different LMHead models, I calculated ![CodeCogsEqn (14)](https://user-images.githubusercontent.com/23423619/74723633-32f9c800-523b-11ea-9a81-46f2412aca93.gif) for different values of ![CodeCogsEqn (15)](https://user-images.githubusercontent.com/23423619/74723656-3f7e2080-523b-11ea-8234-34b27909f1c9.gif) . I simply generated randomly sampled sentences from the pretrained models and averaged the effect of the tokens for 5 runs with `max_length=100` so that the averaged is formed of ca. ![CodeCogsEqn (18)](https://user-images.githubusercontent.com/23423619/74723915-b9160e80-523b-11ea-8b2e-1fd67b1aad69.gif) tokens. 
The following values show by how much ![CodeCogsEqn (13)](https://user-images.githubusercontent.com/23423619/74723282-9cc5a200-523a-11ea-8f73-285fef6c799a.gif) scales down the prob after the softmax which is equivalent of what ![CodeCogsEqn (14)](https://user-images.githubusercontent.com/23423619/74723633-32f9c800-523b-11ea-9a81-46f2412aca93.gif) would have been set to: ``` Generate repetition penalty comparison for ctrl Penalty factor: 1.1 - Without penalty / penalty ratio avg: 4e0 Penalty factor: 1.2 - Without penalty / penalty ratio avg: 31e0 Penalty factor: 1.3 - Without penalty / penalty ratio avg: 149e0 Penalty factor: 1.4 - Without penalty / penalty ratio avg: 25e3 Penalty factor: 1.5 - Without penalty / penalty ratio avg: 286e3 Generate repetition penalty comparison for distilgpt2 Penalty factor: 1.1 - Without penalty / penalty ratio avg: 23e3 Penalty factor: 1.2 - Without penalty / penalty ratio avg: 2e9 Penalty factor: 1.3 - Without penalty / penalty ratio avg: 223e9 Penalty factor: 1.4 - Without penalty / penalty ratio avg: 3e24 Generate repetition penalty comparison for gpt2 Penalty factor: 1.1 - Without penalty / penalty ratio avg: 1e9 Penalty factor: 1.2 - Without penalty / penalty ratio avg: 742e18 Generate repetition penalty comparison for xlm-clm-enfr-1024 Penalty factor: 1.1 - Without penalty / penalty ratio avg: 2e0 Penalty factor: 1.2 - Without penalty / penalty ratio avg: 3e0 Penalty factor: 1.3 - Without penalty / penalty ratio avg: 5e0 Penalty factor: 1.4 - Without penalty / penalty ratio avg: 9e0 Penalty factor: 1.5 - Without penalty / penalty ratio avg: 13e0 Generate repetition penalty comparison for openai-gpt Penalty factor: 1.1 - Without penalty / penalty ratio avg: 1e0 Penalty factor: 1.2 - Without penalty / penalty ratio avg: 2e0 Penalty factor: 1.3 - Without penalty / penalty ratio avg: 4e0 Penalty factor: 1.4 - Without penalty / penalty ratio avg: 15e0 Penalty factor: 1.5 - Without penalty / penalty ratio avg: 19e0 Generate repetition penalty comparison for xlnet-base-cased Penalty factor: 1.1 - Without penalty / penalty ratio avg: 5e0 Penalty factor: 1.2 - Without penalty / penalty ratio avg: 34e0 Penalty factor: 1.3 - Without penalty / penalty ratio avg: 2e3 Penalty factor: 1.4 - Without penalty / penalty ratio avg: 47e3 Penalty factor: 1.5 - Without penalty / penalty ratio avg: 8e6 ``` It can be seen that `gpt2` for example produces much larger logit values which lead to much more drastic reductions in the prob after softmax. The repetition penalty was originally introduced for `ctrl` so it's probably best to look at its behaviour. <|||||>So I think there are three possibilities: 1) Follow the proposed solution from @w4nderlust implementing Eq.(1). This would mean though that the proposed repetition penalty of 1.3 in the ctrl paper would have to be changed to something around 150 which is quite a large value. 
2) Instead of using substracting by the log(rep_penalty) as in: ![CodeCogsEqn (11)](https://user-images.githubusercontent.com/23423619/74723112-52dcbc00-523a-11ea-841c-58a6c7347da0.gif), one could only substract by the rep_penalty to give the equation: ![CodeCogsEqn (21)](https://user-images.githubusercontent.com/23423619/74725614-6e49c600-523e-11ea-9ad3-14e0432ecec8.gif), This way the values for ![CodeCogsEqn (22)](https://user-images.githubusercontent.com/23423619/74725706-8d485800-523e-11ea-80b2-fb45048d2e6f.gif) would equal ![CodeCogsEqn (24)](https://user-images.githubusercontent.com/23423619/74725769-a4874580-523e-11ea-9673-8e4dc72a483d.gif) and thus be much smaller. The repetition penalty in `ctlr` would thus only have to be around 5 to equal the behavior of the old penalty of 1.3. One disadvantage would be that the neutral element in this case is 0 instead of 1 which might be a bit confusing. 3) Just leave as it is now since from what I seen most logits almost always all either positive or either all negative, so that the current behavior is not very prone to lead to errors. I would tend to solution 2, giving a clear explanation of the variable in the argument section of the language generation function. What do you think @w4nderlust and @thomwolf ? <|||||>Thank you for the thorough analysis @patrickvonplaten ! I believe 2 would be fine. The nog just scales things differently, but there's no specific reason to have it, as it is a user tunable parameter anyway. The fact that the default would be 0 instead of one I think could be explained and one could point to this conversation in a comment to give the full picture. Although I understand this is not a huge issue (because of what you say in 3), I kinda believe 2 is better as the could potentially be in the future a different model that actually outputs both positive and negative logits and it that case this could make a substantial difference in the quality of the sampling.
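For completeness, a minimal sketch of the sign-aware handling that the PR under discussion describes (divide positive logits, multiply negative ones), applied to the logits of previously generated tokens. This is an illustration of that idea only, not a verbatim copy of the library code or of whichever variant was finally merged:

```python
import torch

def apply_repetition_penalty(next_token_logits, previous_token_ids, penalty=1.3):
    for batch_idx in range(next_token_logits.size(0)):
        for token_id in set(previous_token_ids[batch_idx].tolist()):
            if next_token_logits[batch_idx, token_id] < 0:
                next_token_logits[batch_idx, token_id] *= penalty
            else:
                next_token_logits[batch_idx, token_id] /= penalty
    return next_token_logits

logits = torch.tensor([[2.0, -2.0, 0.5]])
previous = torch.tensor([[0, 1]])
print(apply_repetition_penalty(logits, previous))
# tensor([[ 1.5385, -2.6000,  0.5000]]) -> both repeated tokens become less likely
```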
transformers
2,302
closed
Repetition penalty works incorrectly when the logit of the token is negative
## 🐛 Bug <!-- Important information --> Model I am using (LMHeadModels; distilgpt2 in this example but holds true for all LMHeadModels): Language I am using the model on English: The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: language genaration ## To Reproduce Run the following code: ``` input_sentence = 'The dog' tokenizer = AutoTokenizer.from_pretrained('distilgpt2') model = AutoModelWithLMHead.from_pretrained('distilgpt2') input_ids = torch.tensor(tokenizer.encode(input_sentence)).unsqueeze(0) outputs = model.generate(input_ids=input_ids, do_sample=True, bos_token_id=tokenizer.bos_token_id, eos_token_ids=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, repetition_penalty=1.5) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Expected behavior Output: `"The dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog"` In the output, the word dog is repeated multiple times. It can be noticed that the higher the `repetition_penalty`, the more likely already occurring words are to be repeated. Thus, the penalty achieves exactly the opposite of what it is supposed to do. ## Environment * OS: Linux * Python version: 3.6.8 * PyTorch version: 1.2.0 * PyTorch Transformers version (or branch): main branch v.2.3.0 * Using GPU ? No * Distributed of parallel setup ? No * Any other relevant information: ## Additional context The reason for this behavior can be understood when looking at line https://github.com/huggingface/transformers/blob/81db12c3ba0c2067f43c4a63edf5e45f54161042/src/transformers/modeling_utils.py#L731 : If the logit `next_token_logits[i, previous_tokens]` is < 0, then dividing by a number > 1 is actually going to increase the probability of sampling that token the next time instead of reducing it.
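A tiny numeric sketch of the sign issue described above, using toy logits independent of any particular model:

```python
import torch

logits = torch.tensor([-2.0, -1.0])          # token 0 was already generated
probs_before = torch.softmax(logits, dim=0)

penalized = logits.clone()
penalized[0] /= 1.5                          # naive division of a negative logit
probs_after = torch.softmax(penalized, dim=0)

print(probs_before[0].item())  # ~0.27
print(probs_after[0].item())   # ~0.42 -> the "penalized" token became MORE likely
```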
12-24-2019 16:18:14
12-24-2019 16:18:14
Proposed fix in PR #2303
transformers
2,301
closed
Can I use run_lm_finetuning.py for training models in an uncovered language?
Is it possible to use the run_lm_finetuning.py script to train one of the models from scratch in a language not covered by the available pretrained models (like Spanish, Italian, or German)? My idea is to replicate something like CamemBERT for a language different from French, given that I have the corpora needed for the training. What are some suggestions that you could give me? What changes should be made in the script in order to run it correctly for this purpose? How can I deal with a corpus of ~150GB? Thanks for any help
12-24-2019 13:27:03
12-24-2019 13:27:03
You can now leave `--model_name_or_path` to None in `run_language_modeling.py` to train a model from scratch. See also https://huggingface.co/blog/how-to-train
transformers
2,300
closed
run_ner.py RobertaForTokenClassification.from_pretrained "size mismatch for classifier.bias"
## ❓ Questions & Help I have a trouble on run_ner.py([https://github.com/huggingface/transformers/blob/master/examples/run_ner.py](url)) evaluation using **Roberta**. My error is in this snippet: ```python # Load pretrained model and tokenizer if args.local_rank not in [-1, 0]: torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab args.model_type = args.model_type.lower() config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type] config = RobertaConfig.from_pretrained(args.config_name if args.config_name else args.model_name_or_path, num_labels=num_labels, cache_dir=args.cache_dir if args.cache_dir else None) tokenizer = RobertaTokenizer.from_pretrained(args.tokenizer_name if args.tokenizer_name else args.model_name_or_path, do_lower_case=args.do_lower_case, cache_dir=args.cache_dir if args.cache_dir else None) model = RobertaForTokenClassification.from_pretrained(args.model_name_or_path, from_tf=bool(".ckpt" in args.model_name_or_path), config=config, cache_dir=args.cache_dir if args.cache_dir else None) ``` If I run with both training and evaluation, it works fine. If I want only to evaluate my model I get this error: ``` Traceback (most recent call last): File "run_pos.py", line 560, in <module> main() File "run_pos.py", line 477, in main cache_dir=args.cache_dir if args.cache_dir else None) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 479, in from_pretrained model.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for RobertaForTokenClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([19, 768]) from checkpoint, the shape in current model is torch.Size([18, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([19]) from checkpoint, the shape in current model is torch.Size([18]). ``` I used a different set of labels (18 labels rather than CoNLL format) passing them through the flag `--labels /path/to/labels.txt`. As you can see, when it loads the model, it sees 19 labels and not the expected 18. I think the 19th is added during the training to tag the subwords. In particular it should be: ```python # Use cross entropy ignore index as padding label id so that only real label ids contribute to the loss later pad_token_label_id = CrossEntropyLoss().ignore_index ``` I don't know if I have to remove a label from its mapping (how to do it?) or if there are other solutions. I also don't know why this error doesn't occur if I train and evaluate sequentially in the same process. Thank you!
12-24-2019 12:27:36
12-24-2019 12:27:36
transformers
2,299
closed
Model2Model inference
## ❓ Questions & Help
I am trying to implement a simple Model2Model question-answering task, where the input is a question and the answer needs to be generated. At inference time I feed in the "[CLS]" token to let it generate, but it only generates a single token instead of the entire sentence. The perplexity at inference time is ~5 on the validation set. Is there a fundamental issue with my model?

Training time: `outputs = model(input_ids, batch['speakableAnswer'], decoder_lm_labels=batch['speakableAnswer'])` (outputs.size() -> batch x seq_len x vocab_size)
Test time: `outputs = model(input_ids, batch['speakableAnswer'])` (outputs.size() -> batch x 1 x vocab_size)
<!-- A clear and concise description of the question. -->
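A single forward pass only predicts the next token for the decoder input it is given, so generating a full sentence needs a decoding loop that feeds predictions back in. A hedged sketch of a greedy loop, assuming the `model(encoder_input_ids, decoder_input_ids)` calling convention from the snippet above; the argument order, the `[0]` indexing of the outputs and the stopping token are assumptions, not the documented API:

```python
import torch

def greedy_decode(model, encoder_input_ids, bos_id, eos_id, max_len=50):
    decoder_input_ids = torch.tensor([[bos_id]])
    for _ in range(max_len):
        with torch.no_grad():
            # [0] assumes the logits come first in the returned tuple;
            # adjust to match what your forward actually returns
            logits = model(encoder_input_ids, decoder_input_ids)[0]
        next_token = logits[0, -1].argmax().unsqueeze(0).unsqueeze(0)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=1)
        if next_token.item() == eos_id:
            break
    return decoder_input_ids
```

Called with the [CLS] id as `bos_id`, this keeps appending the argmax token until an end token or the length limit is reached.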
12-24-2019 08:58:54
12-24-2019 08:58:54
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,298
closed
Why is the cosine similarity of BERT, ALBERT, RoBERTa so big, almost near 1.0?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I tried to use BERT models to do similarity comparison of words/sentences, but I found that the cosine similarities are all very high, even for words/sentences that are very different in meaning. Why? Are all the vectors located in a small portion of the vector space?
12-24-2019 08:28:37
12-24-2019 08:28:37
BERT was not designed to produce useful word / sentence embeddings that can be used with cosine similarities. Cosine similarity treats all dimensions equally, which places high requirements on the created embeddings, and BERT was not intended for this. See this post by Jacob Devlin: https://github.com/UKPLab/sentence-transformers/issues/80#issuecomment-565388257 If you want to use BERT with cosine similarities, you need to fine-tune it on suitable data. You can find data, code and examples in our repository: https://github.com/UKPLab/sentence-transformers<|||||>@nreimers I have read your paper, it's great and thanks for the answer!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
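For reference, a minimal sketch of the kind of computation being discussed (mean-pooling raw BERT token embeddings and taking cosine similarity); it only illustrates the setup that produces the uniformly high similarities, it is not a recommendation over the fine-tuned sentence-transformers models linked above:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

def embed(sentence):
    input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
    with torch.no_grad():
        token_embeddings = model(input_ids)[0]      # (1, seq_len, hidden)
    return token_embeddings.mean(dim=1).squeeze(0)  # naive mean pooling

a = embed("A man is eating food.")
b = embed("The stock market crashed today.")
print(torch.cosine_similarity(a, b, dim=0).item())  # typically quite high despite unrelated meaning
```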
transformers
2,297
closed
RunTimeError in "run_summarization": expected device cuda:0 and dtype byte but got device cuda: 0 and dtype Bool
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): bertabs-finetuned-cnndm-extractive-abstractive-summarization-pytorch_model Language I am using the model on (, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) run_summarization.py in examples The tasks I am working on is: * [ ] an official GLUE/SQUaD task: summarization ## To Reproduce Steps to reproduce the behavior: 1. python run_summarization.py --documents_dir .\data --sumaries_output_dir .\output <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Win10 * Python version: 3.6.1 * PyTorch version:1.2.0 * PyTorch Transformers version (or branch): master at 24 Dec * Using GPU ? Yes * Distributed of parallel setup ? No * Any other relevant information: ## Additional context errors come from modeling_berabs.py, line 328, in forward
12-24-2019 07:53:37
12-24-2019 07:53:37
Someone suggested using PyTorch v1.1.0 instead of 1.2.0, but I'm not sure if that is ok. <|||||>It's a version inconsistency issue. In v1.1.0, torch.gt outputs:
torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[0, 1],
        [0, 0]], dtype=torch.uint8)
while in v1.2.0 the same call outputs:
>>> torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False,  True],
        [False, False]])
<|||||>Please see the pull request #2369
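As an illustration of the dtype change only (not the actual patch in #2369): comparison results can be normalized to a single dtype before they are combined with other masks, which avoids the Byte/Bool mismatch on both PyTorch 1.1 and 1.2. A hedged sketch:

```python
import torch

a = torch.tensor([[1, 2], [3, 4]])
b = torch.tensor([[1, 1], [4, 4]])

mask = torch.gt(a, b)          # uint8 on torch 1.1.x, bool on torch 1.2+
mask = mask.to(torch.uint8)    # force one dtype before combining with other masks
other = torch.ones_like(mask)
combined = mask & other        # no Byte/Bool mismatch now
print(combined)
```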
transformers
2,296
closed
A question about BERT position embedding.
## ❓ Questions & Help I noticed that in Transformers, position_ids are turned into position embeddings through `nn.Embedding(config.max_position_embeddings, config.hidden_size)`. Here `config.max_position_embeddings` is 512, and `config.hidden_size` is 768. So, when I input a sentence shorter than 512 tokens, such as "Today is a nice day", will this sentence's position embedding still have length 512? Or just as long as the position_ids, which here is 7 with [CLS] and [SEP]?
12-24-2019 03:26:42
12-24-2019 03:26:42
The sentence "Today is a nice day" will already be padded to 512 because of the tokenization process, so the input will remain 512 in the embedding layer<|||||>> The sentence "Today is a nice day" will already be padded to 512 because of the tokenization process, so the input will remain 512 in the embedding layer Thanks a lot!
transformers
2,295
closed
How do you handle large documents?
## ❓ Questions & Help I have been a huge fan of this library for a while now. I've used it to accomplish things like sentence classification, a chat bot, and even stock market price prediction, this is truly a fantastic library. But I have not yet learned how to tackle large documents (e.g. documents 10x the size of the model's max length). An example. A task I would love to accomplish is document abstraction, however the documents I am dealing with are upwards of 3,000+ words long and I'm afraid that taking the first 512 or 768 tokens will not yield a quality summary. One idea that I was kicking around, but have not put code to yet, involved taking a window of 512-tokens to produce a model output and then repeating this process, shifting the window of 512-tokens, until I have covered my entire corpus. Then I will repeat the process until I have an input that can fit into my model. There must be a better way. I have heard of developers using these NLP models to summarize large legal documents and legislation, which can be hundreds of pages, let alone thousands of words. Am I missing something, am I overthinking this problem?
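As a concrete version of the sliding-window idea described above, a minimal sketch of chunking a long token sequence with overlap; the window and stride values are arbitrary placeholders:

```python
def sliding_windows(token_ids, window=512, stride=256):
    """Split a long list of token ids into overlapping windows."""
    chunks = []
    start = 0
    while True:
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break
        start += stride
    return chunks

# e.g. 3000 tokens -> overlapping chunks of at most 512 tokens each
print([len(c) for c in sliding_windows(list(range(3000)))])
```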
12-24-2019 02:06:55
12-24-2019 02:06:55
apparently the answer may be to feed smaller sequences of tokens and use the past input keyword itn pytorch models or hidden states in tensorflow. models both this past input and the stateful nature of models aren't documented. it would be interesting to have methods to manage big inputs<|||||>Recent models like Transformers-XL and XLNet already support longer sequences. Although, the available pretrained models are imho only using 512 tokens. Some additional pointers: - Long-form document classification with BERT. [Blogpost](https://andriymulyar.com/blog/bert-document-classification), [Code](https://github.com/AndriyMulyar/bert_document_classification) - See ICLR 2020 reviews: - [BERT-AL: BERT for Arbitrarily Long Document Understanding](https://openreview.net/forum?id=SklnVAEFDB) - [Blockwise Self-Attention for Long Document Understanding](https://openreview.net/forum?id=H1gpET4YDB) - [Easy-to-use interface to fine-tuned BERT models for computing semantic similarity](https://github.com/AndriyMulyar/semantic-text-similarity) - Ye, Z. et al. 2019. BP-Transformer: Modelling Long-Range Context via Binary Partitioning. (2019). [Paper](https://arxiv.org/pdf/1911.04070.pdf) [Code](https://github.com/yzh119/BPT) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>There are two main methods: - Concatenating 'short' BERT altogether (which consists of 512 tokens max) - Constructing a real long BERT (CogLTX, Blockwise BERT, Longformer, Big Bird) I resumed some typical papers of BERT for long text in this post : [Paper Dissected and Recap #4 : which BERT for long text ?](https://lethienhoablog.wordpress.com/2020/11/19/paper-dissected-and-recap-4-which-bert-for-long-text/) You can have an overview of all methods there.<|||||>@lethienhoa does `all long BERT` is capable of any length text?
transformers
2,294
closed
Is there any efficient way to convert BERT outputs to fit token-level tasks?
Say I have a sentence consisting of two words: S = [“Definitely”, “not”], and what I want is to transform S into an embedding matrix T with a size of (2, 100), where each row represents a word. I want to adopt BERT embeddings. But in BERT, each word is represented as sub-word units. This means that S will be represented as [“Def”, “##in”, “##ite”, “##ly”, “not”] (“Definitely” is tokenized as “Def”, “##in”, “##ite”, “##ly”). BERT will output an embedding matrix H with a size of (5, 100) :(. My goal is to merge some rows of H according to the sub-word units. For example, for “Definitely”, I should merge the embeddings of [“Def”, “##in”, “##ite”, “##ly”] to get its representation. In my current method, I use a head mask vector h = [1, 0, 0, 0, 1] to record the “head” of each word, where 1 indicates the head position:
h = [
1, -> “Def”
0, -> “##in”
0, -> “##ite”
0, -> “##ly”
1 -> “not”
]
So I should merge rows that have a head mask of 0 into the preceding row that has a head mask of 1. I have to use a `for` loop to enumerate each element in h, which is slow and cannot be batched. Is there any efficient method to do the above computation?
12-24-2019 01:54:33
12-24-2019 01:54:33
I am also curious about it. ;;
https://github.com/dsindex/etagger/blob/master/feed.py
This is not an efficient way, just pooling.<|||||>> I am also curious about it. ;;
> https://github.com/dsindex/etagger/blob/master/feed.py
> This is not an efficient way, just pooling.

Hi, I think I have the solution. The problem can be solved via a simple matrix computation. For example, let the BERT representation of the above example be B, which has a size of (5, 100). We first construct a matrix according to the sub-words:
m = [
1, 1, 1, 1, 0
0, 0, 0, 0, 1]
Then we can simply compute m.dot(B), which is exactly the result. <|||||>I have solved this issue.
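A small sketch of that matrix trick in PyTorch, extended with a row normalization so the merge is a mean over each word's sub-pieces rather than a sum; the normalization step is an editorial addition, not part of the comment above:

```python
import torch

H = torch.randn(5, 100)        # BERT output for ["Def", "##in", "##ite", "##ly", "not"]
head_mask = [1, 0, 0, 0, 1]

# build the (num_words, num_subwords) merge matrix from the head mask;
# this small loop only builds the matrix, the merge itself is one matmul
m = torch.zeros(sum(head_mask), len(head_mask))
word_idx = -1
for sub_idx, is_head in enumerate(head_mask):
    if is_head:
        word_idx += 1
    m[word_idx, sub_idx] = 1.0

m = m / m.sum(dim=1, keepdim=True)  # mean instead of sum
T = m @ H                           # (2, 100): one row per word
print(T.shape)
```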
transformers
2,293
closed
Train custom NER model with new Pipeline
## 🚀 Feature The new `Pipelines` feature is great! I am wondering whether it will be possible to implement pre-training on domain-specific data (similar to the ULMFiT approach, unsupervised encoder-decoder) and then train a custom NER model with annotated data (similar to spaCy)?
12-23-2019 23:39:26
12-23-2019 23:39:26
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,292
closed
Add cached past for language generation
Add the `past` input for gpt2 and ctrl for faster decoding during language generation.
1. add a `prepare_inputs_for_generation` fn for gpt2 and ctrl
2. add a private `_do_output_past` fn to the `PreTrainedModel` class to check whether the model outputs past key-value states
   - the fn only covers the gpt2 and ctrl cases for the moment and still needs to handle 'xlnet' and 'transfo_xl' via `mem_len`
   - it might be better to move `_do_output_past` to each individual LMHeadModel
3. rename `pasts` to `past`

Dummy tests for language generation can also be added.
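To illustrate what the cached `past` buys at generation time, a hedged sketch of a GPT-2 greedy loop that feeds only the newest token plus the cached key/value states at each step; the `past` keyword and output positions follow the 2.x-era GPT-2 API referenced in this PR and may need adjusting for other versions:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

input_ids = torch.tensor([tokenizer.encode('The dog')])
past = None
generated = input_ids

with torch.no_grad():
    for _ in range(20):
        logits, past = model(input_ids, past=past)[:2]   # reuse cached key/value states
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=1)
        input_ids = next_token  # only the new token is fed on the next step

print(tokenizer.decode(generated[0]))
```

Without the cache, every step would re-run attention over the full prefix instead of a single new token, which is where the speed-up comes from.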
12-23-2019 22:10:35
12-23-2019 22:10:35
That a great idea, @patrickvonplaten I'll let you finish this PR when you have time and ping me for review or questions.<|||||>True, I will implement this tomorrow!<|||||>tested for transfo_xl, gpt2, openai-gpt and xlnet in combination with PR #2289 <|||||>This looks great! To pass the code quality test, you can use `make style`. Please read this section of the (new) CONTRIBUTING guidelines: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=h1) Report > Merging [#2292](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **decrease** coverage by `0.17%`. > The diff coverage is `8.51%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2292/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2292 +/- ## ========================================== - Coverage 73.49% 73.32% -0.18% ========================================== Files 87 87 Lines 14793 14833 +40 ========================================== + Hits 10872 10876 +4 - Misses 3921 3957 +36 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `72.9% <0%> (-0.37%)` | :arrow_down: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `83.17% <16.66%> (-1.3%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.22% <16.66%> (-2.13%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `74.68% <20%> (-0.59%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.44% <3.84%> (-2.02%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=footer). Last update [aeef482...fc84bd5](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok good for now, merging