Dataset columns and per-column statistics:

| column | dtype | statistics |
|---|---|---|
| repo | string | 1 distinct value |
| number | int64 | 1 to 25.3k |
| state | string | 2 distinct values |
| title | string | length 1 to 487 |
| body | string | length 0 to 234k |
| created_at | string | length 19 |
| closed_at | string | length 19 |
| comments | string | length 0 to 293k |
transformers
487
closed
BERT multilingual for zero-shot classification
Hi! I'm interested in solving a classification problem in which I train the model on one language and make the predictions for another one (zero-shot classification). It is said in the README for the multilingual BERT model (https://github.com/google-research/bert/blob/master/multilingual.md) that: > For tokenization, we use a 110k shared WordPiece vocabulary. The word counts are weighted the same way as the data, so low-resource languages are upweighted by some factor. We intentionally do not use any marker to denote the input language (so that zero-shot training can work). But after fine-tuning BERT-multilingual-uncased on a dataset in one language, it absolutely doesn't work for texts in other languages. Predictions turn out to be inadequate: I tried multiple pairs `(text, the same text translated to another language)` and the probability distributions over labels (after applying softmax) were wildly different. Do you know what could be the cause of the problem? Should I somehow change the tokenization when applying the model to other languages (BPE embeddings are shared, so I'm not sure about this one)? Or should I use multilingual-cased instead of multilingual-uncased (could that be the source of the problem)?
04-14-2019 11:39:34
04-14-2019 11:39:34
UPD. I tried it with bert-multilingual-cased, but the results are still bad. A number of very simple (text, translated text) pairs give very different probability distributions (the translated versions almost always fall into one major category). Specifically, **I fine-tune pre-trained bert-multilingual-cased on a Russian text classification problem and then make a prediction using the model on an English text** (tried other languages -- nothing works).<|||||>Hi, my feeling is that this is still an open research problem. [Here](https://twitter.com/nlpmattg/status/1091367511117881345) is a recent thread discussing the related problem of fine-tuning BERT on English SQuAD and trying to do QA in another language. Maybe you can get a pre-print from the RecitalAI guys if they haven't published it yet.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
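For reference, a minimal sketch of the kind of check described in this issue: comparing label distributions for a sentence and its translation with a fine-tuned multilingual classifier. The checkpoint directory, label count, and example sentences below are placeholders, not taken from the issue.

```python
import torch
import torch.nn.functional as F
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

# Hypothetical fine-tuned checkpoint directory and label count.
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=False)
model = BertForSequenceClassification.from_pretrained('finetuned_ru_classifier/', num_labels=5)
model.eval()

def label_distribution(text):
    tokens = ['[CLS]'] + tokenizer.tokenize(text) + ['[SEP]']
    input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
    with torch.no_grad():
        logits = model(input_ids)  # no labels passed, so the model returns logits
    return F.softmax(logits, dim=-1)

# For true zero-shot transfer the two distributions should be close;
# the issue reports that in practice they are wildly different.
print(label_distribution("Это отличный фильм."))
print(label_distribution("This is a great movie."))
```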
transformers
486
closed
Difference between this repo and bert-as-service
Hi, I wondered if anybody knows the difference between the `BertModel` of this repo and [bert-as-service](https://github.com/hanxiao/bert-as-service). 1. I cannot get the same result between these two even if I use the same checkpoint. pytorch-pretrained-BERT yields a lower accuracy and slower convergence. 2. The memory usage of `BertModel` seems much higher than bert-as-service. With the same batch_size=32 and max_seq_len=100, bert-as-service takes about 8000MB but `BertModel` costs more than 16000MB (I got an OOM issue). Does anyone know the reason behind this? Is there any optimization done for bert-as-service?
04-13-2019 18:44:59
04-13-2019 18:44:59
Yes there are some optimizations. Please take a look here: https://hanxiao.github.io/2019/01/02/Serving-Google-BERT-in-Production-using-Tensorflow-and-ZeroMQ/#engineering-building-a-scalable-service<|||||>Hi, there is no specific relation between the present repo (which provides PyTorch implementations of several transformer's models) and bert-as-a-service (which is built on top of BERT TensorFlow implementation AFAICT). We want to keep the code simple in the present repo and don't provide specific optimizations out-of-the-box but if you want to add some of bert-as-a-service optimizations, here are the most straightforward ones: - use `with torch.no_grad()` to avoid computing gradient during evaluation (should divide memory consumption by two) - use fp16 code to get another factor of 2, probably the easiest is to use the amp wrapper in NVIDIA's apex library, see the details [here](https://github.com/NVIDIA/apex).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
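To make the two suggestions above concrete, here is a rough sketch of memory-lean feature extraction (assuming a CUDA device; `model.half()` is the simplest fp16 route, and apex/amp as mentioned above would be the more robust one):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()
model.half()       # fp16 weights and activations, roughly halves memory
model.to('cuda')

tokens = ['[CLS]'] + tokenizer.tokenize("a quick example sentence") + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)], device='cuda')

with torch.no_grad():  # no activations kept for backprop during pure inference
    encoded_layers, pooled_output = model(input_ids, output_all_encoded_layers=False)
```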
transformers
485
closed
UnboundLocalError: local variable 'i' referenced before assignment when using fine_tuning code
Hi @thomwolf I am using the lm_finetuning code. I generated training data using pregenerate_training_data.py. When running finetune_on_pregenerated.py I am getting this error. Logs:
```
python finetune_on_pregenerated.py --pregenerated_data training_1/ --bert_model bert-base-uncased --do_lower_case --output_dir finetuned_lm_1Msents_3epochs/ --epochs 3
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
2019-04-13 09:32:43,014: device: cuda n_gpu: 4, distributed training: False, 16-bits training: False
2019-04-13 09:32:43,361: loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/cloud/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
2019-04-13 09:32:43,676: loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/cloud/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
2019-04-13 09:32:43,677: extracting archive file /home/cloud/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmprmsendyc
2019-04-13 09:32:48,700: Model config {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 2,
  "vocab_size": 30522
}
2019-04-13 09:33:17,672: ***** Running training *****
2019-04-13 09:33:17,672:   Num examples = 0
2019-04-13 09:33:17,672:   Batch size = 32
2019-04-13 09:33:17,672:   Num steps = 0
2019-04-13 09:33:17,674: Loading training examples for epoch 0
Training examples: 0it [00:00, ?it/s]
Traceback (most recent call last):
  File "finetune_on_pregenerated.py", line 333, in <module>
    main()
  File "finetune_on_pregenerated.py", line 286, in main
    num_data_epochs=num_data_epochs)
  File "finetune_on_pregenerated.py", line 101, in __init__
    assert i == num_samples - 1  # Assert that the sample count metric was true
UnboundLocalError: local variable 'i' referenced before assignment
```
04-13-2019 09:58:13
04-13-2019 09:58:13
I found out the issue. The text corpus I was using is just one document. So the code is for two or more documents only?<|||||>Yes only for multiple documents. We have a test now to check that since #478 thanks to @Rocketknight1.
transformers
484
closed
KeyError in convert_tokens_to_ids()
In `BertTokenizer`, the [convert_tokens_to_ids](https://github.com/huggingface/pytorch-pretrained-BERT/blob/19666dcb3bee3e379f1458e295869957aac8590c/pytorch_pretrained_bert/tokenization.py#L117) function gives a KeyError. So, I suggest modifying the **for loop** in the function as follows.
```
for token in tokens:
    ids.append(self.vocab.get(token, self.vocab['[UNK]']))
```
04-13-2019 09:04:53
04-13-2019 09:04:53
Hi @wasiahmad, This should actually already be taken care of by the WordPieceTokenizer ([here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/19666dcb3bee3e379f1458e295869957aac8590c/pytorch_pretrained_bert/tokenization.py#L357)). Do you have a simple example to share so I can try to reproduce the behavior?<|||||>Hi @thomwolf , I also found the same problem when using the BertTokenizer class. Code like:
```
sent = ['hi', 'mary']
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model_bert = BertModel.from_pretrained('bert-base-cased')
tokenizer.convert_tokens_to_ids(sent)
```<|||||>Hi @thomwolf I didn't record the token for which I encountered that error, but it was not an English word; it was more like a weird symbol which was probably able to bypass the WordPieceTokenizer. I was using BERT as a feature extractor for the MSMARCO v2 QA dataset when I encountered the error.<|||||>@congjianluo you should not tokenize the sentence yourself but use the Bert tokenizer. Please follow the usage example in the [readme](https://github.com/huggingface/pytorch-pretrained-BERT#bert). In your case:
```python
sent = "hi mary"
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
tokens = tokenizer.tokenize(sent)
tokens_ids = tokenizer.convert_tokens_to_ids(tokens)
```<|||||>@wasiahmad ok then I'm closing this issue. Feel free to re-open it if you have a (reproducible) example of such an issue.
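As a stopgap on the caller's side, the fallback suggested in the original post can also live in a small helper (hypothetical, not part of the library):

```python
def tokens_to_ids_with_unk(tokenizer, tokens):
    # Map any token missing from the vocab to [UNK] instead of raising a KeyError.
    unk_id = tokenizer.vocab['[UNK]']
    return [tokenizer.vocab.get(token, unk_id) for token in tokens]
```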
transformers
483
closed
Perplexity number of wikitext-103 on gpt-2 doesn't match the paper
Hi, The reported perplexity number of gpt-2 (117M) on wikitext-103 is 37.5. However, when I use the pre-trained tokenizer for gpt-2 `GPT2Tokenizer` using `tokenizer = GPT2Tokenizer.from_pretrained('gpt2')` to tokenize wikitext-103, and then evaluate it using the pre-trained 117M gpt-2 model, I get a ppl of 48.4. Note: I have added newlines instead of EOS tags at the end of each line read. I've also normalized the loss by the number of tokens originally in wikitext-103, as mentioned by Alec Radford at https://github.com/openai/gpt-2/issues/78 Could you please let me know what's wrong here?
04-12-2019 21:10:39
04-12-2019 21:10:39
Are you using context length 256 by chance? On WikiText-2 I see I see 29.17378 using context length 1024 and 43.3393707 using context length 256 This is close to OpenAI report result of 29.41 (although I don't get why it doesn't match it exactly) <|||||>@yaroslavvb No, I'm using a context length of 1024. Although, when I evaluate it on wikitext-2, my numbers dont match with yours. Would it be possible for you to share what you've done? I believe that you might have normalized the loss by the number of tokens after the gpt-2 tokenization, and not by the number of tokens originally in wikitext-2. <|||||>Here's our loss [computation](https://github.com/cybertronai/bflm/blob/b6ba6d97c9ccdf2b12e104fbdcd0bed25ada7b68/train_gpt2.py#L163) We don't normalize the loss here, just take the average output of `model(batch, lm_labels=batch)` where model is `GPT2LMHeadModel.from_pretrained(args.model_name_or_path)` Note wikitext-103 and wikitext-2 have the same test set, so you should get `29.17378` perplexity on either one<|||||>@Akhila-Yerukola when we tried normalizing by the original token count, our ppl number got much further from the paper. I ran `wikitext-2-raw/wiki.train.raw` through the encoder and got 2417793 tokens. Then I split the original on spaces and got 2088678. ```py math.exp(math.log(29.41) * 2088678 / 2417793) # 18.56 ``` We got 30.4 ppl unnormalized and 28 after [detokenizing](https://github.com/cybertronai/bflm/blob/master/detokenizer.py), so we're still quite far from 18.56. Another possibility is that for evaluation, instead of chunking the dataset and shoving it through, they pass it in 1 token at a time the same way they do for sample generation. This would significantly reduce the loss because the model wouldn't "forget" what it was doing at the beginning of each sequence. I'll try this later today.<|||||>As a quick test, I calculated loss based on only the last 10 tokens of the sequence without changing the chunking. I got test loss of 23.9 and train 24.9.<|||||>@yaroslavvb @8enmann I still think the numbers don't match the paper. The Table 3 in the paper which contains Zero-shot results on many datasets has a "SOTA" row which correspond to the reported perplexity numbers on the test sets of the data sets. (I verified this for wikitext-2 and wikitext-103) When I run [this](https://github.com/cybertronai/bflm/blob/b6ba6d97c9ccdf2b12e104fbdcd0bed25ada7b68/train_gpt2.py) evaluation script, I get ppl of 23.85 on wikitext-2 (not raw), 28.186 on wikitext-103 (not raw), 29.01 on wikitext-103-raw (and wikitext-2-raw). None of these match the reported 117M gpt-2 model (which is the model available) numbers from the paper (29.41 for wikitext-2, 37.5 for wikitext-103) From the looks of it wikitext-103 is way off.<|||||>You're right, our results are very different from the paper. Perhaps they used the detokenized non-raw version and also for the labels passed -1 for <UNK> tokens, which causes them not to be evaluated for loss. This, combined with the 1 at a time trick I mentioned 2 days ago, could bring PPL down to ~18. However that still leaves a big puzzle as to why the loss they reported on wikitext-103 is so much worse than wikitext-2.<|||||>Related issue, Lambada numbers also don't match the paper (31% instead of reported 46%) https://github.com/huggingface/pytorch-pretrained-BERT/issues/491<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. 
<|||||>> @yaroslavvb @8enmann > When I run [this](https://github.com/cybertronai/bflm/blob/b6ba6d97c9ccdf2b12e104fbdcd0bed25ada7b68/train_gpt2.py) evaluation script, I get ppl of 23.85 on wikitext-2 (not raw), 28.186 on wikitext-103 (not raw), 29.01 on wikitext-103-raw (and wikitext-2-raw). None of these match the reported 117M gpt-2 model (which is the model available) numbers from the paper (29.41 for wikitext-2, 37.5 for wikitext-103) Is it possible to know why 0.7 and 0.3? --> 0.7 * exp_average_loss + 0.3 * loss.item() <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This discussion is old, so this may not be applicable anymore. But I'd like to offer a data point if it's still the case. I tried gpt2 and gpt2-medium on OpenWebText (tokenized with HuggingFace's corresponding tokenizer settings), and I got the ppl about 24 and 18, respectively, whereas the openai version of them is 17 and 13, respectively. This is good enough to say that I probably didn't make any catastrophic mistake, but there still is some gap, which may or may not explain the performance gap on other datasets.
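For anyone trying to reproduce these numbers, a bare-bones version of the chunked evaluation discussed in this thread looks roughly like the following. The file path is a placeholder, the ragged tail of the corpus is dropped for simplicity, and the result still depends on detokenization and on which token count you normalize by, which is exactly the ambiguity debated above. The LM head handles the one-token label shift internally, as discussed in #473.

```python
import math
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

text = open('wikitext-2-raw/wiki.test.raw').read()  # placeholder path
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))

context_len, total_loss, n_chunks = 1024, 0.0, 0
with torch.no_grad():
    for start in range(0, len(ids) - context_len + 1, context_len):
        chunk = torch.tensor([ids[start:start + context_len]])
        loss = model(chunk, lm_labels=chunk)  # mean cross-entropy over the chunk
        total_loss += loss.item()
        n_chunks += 1

print('perplexity over GPT-2 tokens:', math.exp(total_loss / n_chunks))
```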
transformers
482
closed
Suggestion: exception handling for out-of-vocab in pretrained model
Dear concerned, I appreciate your work on this public implementation. As you know, all NLP people are interested in applying your gorgeous model to every NLP problem. I am writing to suggest adding exception handling or a warning message for out-of-vocabulary tokens when a pretrained model is used. I suffered from an error message for two days while checking all of my code; naturally, I believed it was my programming problem again :) Then I discovered that it is an out-of-vocab problem. The pretrained BERT model has a vocab size of 28996, but my dataset with the BERT subword tokenizer has more than 30000 vocab entries. So it is an error when the BERT model meets an index higher than 28996. Once we know the problem, the solution is quite simple: filtering, or replacing it with the UNK index. The error message points to the embedding part in CPU mode, but it is unclear in GPU mode, so other people might suffer like me. I know the pretrained model is not designed for this application (long documents with unusual vocabulary). However, I believe many people will appreciate it if you add exception handling or a warning message. If you have already recognized the issue, please ignore this message :) Best Sungho
04-12-2019 20:58:40
04-12-2019 20:58:40
Hi Sungho, This should already be taken care of by the BertTokenizer, which defaults to the `unk` token when a sub-word is not in the vocabulary (see [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization.py#L372)). What BERT model are you using? Do you have a simple example you can share so I can try to reproduce the behavior?<|||||>Hi Thomas, It is really a shame to admit, but I think it is just my typo problem again :( Please ignore this issue. You are absolutely right, there is exception handling. I just loaded the wrong model into BertModel after tokenizing: "bert-base-cased" in the model part instead of "bert-base-uncased", and the cased model has a smaller vocab size than "uncased". I am so sorry to waste your time, and appreciate your help again! :) Best Sungho<|||||>Ok, makes sense, good to know. So there is no problem in the end?<|||||>Yep, only my embarrassment here :) Though there is no performance change, I believe it is not a problem of the pretrained model at all, just a different application. Thus, no problem with the implementation at all, I appreciate it!<|||||>Ok, let's close the issue then. Feel free to open a new one if you have other problems.<|||||>May I know why the cased vocabulary is smaller than the uncased one?
transformers
481
closed
BERT does mask-answering or sequence prediction or both???
I'm working on an exciting project but need to know something fast. I've seen BERT add a word at the end of your input, like sequence prediction, elongating your input text. But I read that BERT has been trained at (and is a pro at) filling in the blank word mask, and in fact CANNOT do sequence prediction at the end of one's input (contradiction!!). When BERT adds a fill-in answer where the mask sits, it goes in context. I want a tool that returns a similar word in context (the mask ANSWER!!), but the colab tool I tried is not a mask/answer giver, it adds words at the end of my input lol!!!
04-12-2019 18:54:42
04-12-2019 18:54:42
I'm not sure I understand your question. But one thing BERT can do is mask-answering indeed (guessing a word in the middle of a sentence). BERT is quite bad at doing sequence prediction at the end of an input because it's not trained on partial sentences.<|||||>When BERT fills in a MASK, does it always use an answer it has seen, or does it generalize and create never-seen Q-As? Ex. "The *dog* in the hat, by Dr. Seuss the Cat"<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
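For the mask-answering use case, here is a minimal sketch with `BertForMaskedLM`. The example sentence is made up; BERT scores its whole WordPiece vocabulary for the masked slot, so it can propose words it never saw in this exact context.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

tokens = tokenizer.tokenize("[CLS] the dog in the [MASK] , by dr. seuss the cat [SEP]")
mask_index = tokens.index('[MASK]')
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    predictions = model(input_ids)  # no labels passed, so vocabulary logits per position

_, top_ids = torch.topk(predictions[0, mask_index], k=5)
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))  # top candidates for the blank
```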
transformers
480
closed
Extend the BertForSequenceClassification docs to mention the special CLS token.
04-12-2019 18:32:46
04-12-2019 18:32:46
Ok, let's go for that @mboyanov!<|||||>Great!
transformers
479
closed
Using GPT2 to implement GLTR
How can we use this GPT2 model to create the basic functionality of the GLTR tool? Like getting probabilities for each word in a sequence?
04-12-2019 18:23:43
04-12-2019 18:23:43
The Allen Institute tool may help you: it shows the probabilities of the next 10 words when adding a next word. It may just be the top 10 words, but I think the git code is available, so it may be what you want.<|||||>The output of `GPT2LMHeadModel` is logits, so you can just apply a softmax (or log-softmax for log probabilities) on them to get probabilities for each token. Then if you want to get probabilities for words, you will need to multiply (or add, if you used a log-softmax) the probabilities of the sub-words in each word. Maybe @sebastianGehrmann you have some additional insights (or plan to release the code of GLTR, it's a great demo!)?<|||||>Thanks for pinging me @thomwolf. GLTR actually uses this amazing repo in the backend, so you could just use our code in conjunction. We are still working on the docs, but you can find all our code here: https://github.com/HendrikStrobelt/detecting-fake-text <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
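Putting the answer above into code, a small sketch of per-token probabilities with `GPT2LMHeadModel` (this assumes the head returns the logits plus the key/value cache when no labels are passed; word-level probabilities then require combining sub-word probabilities as described above):

```python
import torch
import torch.nn.functional as F
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("He was going home"))
input_ids = torch.tensor([ids])

with torch.no_grad():
    logits, _ = model(input_ids)   # (1, seq_len, vocab_size) logits and the attention cache
probs = F.softmax(logits, dim=-1)

# Probability of each token given the tokens before it.
for pos in range(1, len(ids)):
    print(tokenizer.decode([ids[pos]]), probs[0, pos - 1, ids[pos]].item())
```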
transformers
478
closed
Added a helpful error for users with single-document corpuses
This adds the helpful error message suggested in #452 for users trying to do language model fine-tuning with one long document as a corpus, and replaces some of the `randint()` calls with equivalent cleaner `randrange()` ones.
04-12-2019 14:12:44
04-12-2019 14:12:44
Looks good to me, thanks @Rocketknight1
transformers
477
closed
Getting Sentence level log probabilities using this model
So I was trying to implement the SLOR score using Transformer-XL, mostly avoiding training. But given the XL model and a sentence, how could I go about getting the sentence-level log probability? Attached is the formula I'm trying to implement. ![image](https://user-images.githubusercontent.com/18056781/56021217-81d6f700-5d26-11e9-8c46-9b88f0277398.png) Thanks a lot for your help.
04-12-2019 07:56:56
04-12-2019 07:56:56
So p_M(S) is just the output of the model right? For p_u(S), I think the easiest is probably to use the empirical probabilities. `TransfoXLTokenizer` has a counter to store words frequencies [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_transfo_xl.py#L98) which should be populated in the "pretrained" tokenizer so I would use and normalize this to get unconditional probabilities for each word and then compute SLOR.<|||||>@thomwolf So the code would look something like : ```sentence = "Where is the dog going" s_ids = convert_sentence_to_ids(sentence) config = TransfoXLConfig() model = TransfoXLModel(config) last_hidden_state, new_mems = model(input_ids) for i in sentence : uni_prob = freq(token)/no_of_tokens sent_uni_prob * = uni_prob SLOR = (1/len(sentence))* last_hidden_state - sent_uni_prob ``` or am i mistaken about the idea that the last_hidden_state is the sentence probability i'm looking for<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>In case someone found this, I found an implementation that seems legit here https://github.com/sgondala/GoogleConceptualCaptioning/commit/981e1b61ca5f84052f6237402319714d7fe70b80.
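Following the suggestion above, here is a rough sketch of SLOR. It assumes the pretrained tokenizer really does ship with a populated `counter`, reads the LM head as returning per-token negative log-likelihoods when targets are supplied, and drops the first token (whose conditional probability a causal model cannot give); if the API differs slightly, the structure of the computation stays the same.

```python
import math
import torch
from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model.eval()

def slor(sentence):
    tokens = tokenizer.tokenize(sentence)
    ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

    # log p_M(S): targets are the inputs shifted by one token.
    with torch.no_grad():
        nll, _ = model(ids[:, :-1], target=ids[:, 1:])
    log_p_model = -nll.sum().item()

    # log p_u(S): empirical unigram probabilities from the tokenizer's counter.
    total = sum(tokenizer.counter.values())
    log_p_unigram = sum(math.log(max(tokenizer.counter[t], 1) / total) for t in tokens[1:])

    return (log_p_model - log_p_unigram) / len(tokens)

print(slor("Where is the dog going ?"))
```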
transformers
476
closed
cannot run squad script
```
(g-torch) [bipkg@SVR16173HP380 examples]$ python run_squad.py --bert_model bert-base-uncased --train_file squad/train-v1.1.json --do_train --output_dir exps2
04/12/2019 11:32:03 - INFO - __main__ - device: cuda n_gpu: 1, distributed training: False, 16-bits training: False
04/12/2019 11:32:03 - WARNING - pytorch_pretrained_bert.tokenization - The pre-trained model you are loading is an uncased model but you have set `do_lower_case` to False. We are setting `do_lower_case=True` for you but you may want to check this behavior.
04/12/2019 11:32:05 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/bipkg/data/jgeng/Bert/pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
04/12/2019 11:32:16 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/bipkg/data/jgeng/Bert/pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
04/12/2019 11:32:16 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/bipkg/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpj0z3ai2_
04/12/2019 11:32:20 - INFO - pytorch_pretrained_bert.modeling - Model config {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 2,
  "vocab_size": 30522
}
04/12/2019 11:32:25 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias']
04/12/2019 11:32:25 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
04/12/2019 11:32:42 - INFO - __main__ - ***** Running training *****
04/12/2019 11:32:42 - INFO - __main__ -   Num orig examples = 87599
04/12/2019 11:32:42 - INFO - __main__ -   Num split examples = 88641
04/12/2019 11:32:42 - INFO - __main__ -   Batch size = 32
04/12/2019 11:32:42 - INFO - __main__ -   Num steps = 8211
Epoch: 0%| | 0/3 [00:00<?, ?it/s]Segmentation fault (core dumped) | 0/2771 [00:00<?, ?it/s]
```
04-12-2019 03:37:33
04-12-2019 03:37:33
What hardware are you using? What is your software configuration (versions of python, pytorch, pytorh-pretrained-bert, etc...)?<|||||>I'm not so sure that this is the same problem that I experienced. If you installed apex, the problem may be related to BertLayerNorm, because BertLayerNorm will use FusedLayerNorm of apex library. Thus, if you disable to use FusedLayerNorm of apex library, then the segmentation fault will disappear. The other solution is that you should reinstall apex. Please refer to https://github.com/NVIDIA/apex/issues/156#issuecomment-465301976.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@cartopy reinstall apex works well thx<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
475
closed
Non-deterministic behavior: cannot reproduce results when evaluating on each epoch
I modified the example file `run_classifier.py` a little bit, so that the model could evaluate after each training epoch and save each evaluation results on file. This is good for someone who wants to see how the training epoch number influence the result. It is good to simply set the train_epoch = 50, save a checkpoint model on after each epoch so that you could choose the best model for testing data, based on the evalution results on dev dataset. The strange phenomena and my question: (1) even though the seed is correctly set for both Numpy and PyTorch, when evaluating on each step, the result is different from when I only have the final evaluation. (2) set the training epoch to a different number, say epoch 3 and epoch 10. The intermediate result of epoch 10 on step 3, is different from the epoch 3 results. (3) In the code I found that `num_train_optimization_steps = int(len(train_examples) / args.train_batch_size / args.gradient_accumulation_steps) * args.num_train_epochs`. When `args.gradient_accumulation_steps=1, args.num_train_epochs=1`, let's simply it into `num_train_optimization_steps = int(len(train_examples) / args.train_batch_size))`. I think it should be `num_train_optimization_steps = int((len(train_examples) - 1) / args.train_batch_size) + 1`. If train_examples == 99, train_batch_size=32, it will gives different result. Does it matters in the original code, it is passed into `BertAdam()`? Could you guys have a try and see what's wrong with my code and what cause the problem? Thank you very much. Here is the modified example code and run_script are in the attachment. ### results: #### [a] should be exactly the same #### [b] should be exactly the same ======= epoch 3 final evaluation: {'eval_loss': 0.7023529836109706, 'eval_accuracy': 0.44, 'train_loss': 0.6765455901622772, 'global_step': 6} ==> [a] epoch 3 evaluation every epoch: {'eval_loss': 0.6758423532758441, 'eval_accuracy': 0.6, 'train_loss': 0.8333011567592621, 'global_step': 2} {'eval_loss': 0.6573205164500645, 'eval_accuracy': 0.58, 'train_loss': 0.7199544608592987, 'global_step': 4} {'eval_loss': 0.662639707326889, 'eval_accuracy': 0.58, 'train_loss': 0.70155930519104, 'global_step': 6} ==> [a] epoch 10 final evaluation: {'eval_loss': 0.8280548453330994, 'eval_accuracy': 0.58, 'train_loss': 0.8153044879436493, 'global_step': 20} ==> [b] epoch 10 evaluation every epoch: {'eval_loss': 0.6604576451437814, 'eval_accuracy': 0.58, 'train_loss': 0.8333011567592621, 'global_step': 2} {'eval_loss': 0.6526826364653451, 'eval_accuracy': 0.58, 'train_loss': 0.8114463090896606, 'global_step': 4} {'eval_loss': 0.6567909887858799, 'eval_accuracy': 0.58, 'train_loss': 0.6695355176925659, 'global_step': 6} ==> [a] {'eval_loss': 0.6620746084621975, 'eval_accuracy': 0.62, 'train_loss': 0.6175626814365387, 'global_step': 8} {'eval_loss': 0.6602040699550084, 'eval_accuracy': 0.52, 'train_loss': 0.5784901082515717, 'global_step': 10} {'eval_loss': 0.667422890663147, 'eval_accuracy': 0.54, 'train_loss': 0.5177579522132874, 'global_step': 12} {'eval_loss': 0.6945722614015851, 'eval_accuracy': 0.52, 'train_loss': 0.5649124383926392, 'global_step': 14} {'eval_loss': 0.7062868390764508, 'eval_accuracy': 0.48, 'train_loss': 0.7067148089408875, 'global_step': 16} {'eval_loss': 0.7712458883013044, 'eval_accuracy': 0.46, 'train_loss': 0.7441326081752777, 'global_step': 18} {'eval_loss': 1.5845262237957545, 'eval_accuracy': 0.42, 'train_loss': 1.0378091633319855, 'global_step': 20} ==> [b] ### run the bash script the second time, it is the same 
as the previous result. It is deterministic somehow(the result for final evaluation is the same as the final evaluation for the previous run. So as to eval_on_each_epoch), but the difference because of train_epoch num is the same. epoch 3 final evaluation {'eval_loss': 0.7023529836109706, 'eval_accuracy': 0.44, 'train_loss': 0.6765455901622772, 'global_step': 6} epoch 3 evaluation every epoch {'eval_loss': 0.6758423532758441, 'eval_accuracy': 0.6, 'train_loss': 0.8333011567592621, 'global_step': 2} {'eval_loss': 0.6573205164500645, 'eval_accuracy': 0.58, 'train_loss': 0.7199544608592987, 'global_step': 4} {'eval_loss': 0.662639707326889, 'eval_accuracy': 0.58, 'train_loss': 0.70155930519104, 'global_step': 6} epoch 10 final evaluation {'eval_loss': 0.8280548453330994, 'eval_accuracy': 0.58, 'train_loss': 0.8153044879436493, 'global_step': 20} epoch 10 evaluation each step {'eval_loss': 0.6604576451437814, 'eval_accuracy': 0.58, 'train_loss': 0.8333011567592621, 'global_step': 2} {'eval_loss': 0.6526826364653451, 'eval_accuracy': 0.58, 'train_loss': 0.8114463090896606, 'global_step': 4} {'eval_loss': 0.6567909887858799, 'eval_accuracy': 0.58, 'train_loss': 0.6695355176925659, 'global_step': 6} {'eval_loss': 0.6620746084621975, 'eval_accuracy': 0.62, 'train_loss': 0.6175626814365387, 'global_step': 8} {'eval_loss': 0.6602040699550084, 'eval_accuracy': 0.52, 'train_loss': 0.5784901082515717, 'global_step': 10} {'eval_loss': 0.667422890663147, 'eval_accuracy': 0.54, 'train_loss': 0.5177579522132874, 'global_step': 12} {'eval_loss': 0.6945722614015851, 'eval_accuracy': 0.52, 'train_loss': 0.5649124383926392, 'global_step': 14} {'eval_loss': 0.7062868390764508, 'eval_accuracy': 0.48, 'train_loss': 0.7067148089408875, 'global_step': 16} {'eval_loss': 0.7712458883013044, 'eval_accuracy': 0.46, 'train_loss': 0.7441326081752777, 'global_step': 18} {'eval_loss': 1.5845262237957545, 'eval_accuracy': 0.42, 'train_loss': 1.0378091633319855, 'global_step': 20} Repeatability is crucial in machine learning, please help with this issue. Thank you very much. The code sample dataset is here for SST tasks: [huggingface_bert_issue2.zip](https://github.com/huggingface/pytorch-pretrained-BERT/files/3077188/huggingface_bert_issue2.zip)
04-12-2019 00:59:40
04-12-2019 00:59:40
There is some non-determinism in cuDNN. Try setting `torch.backends.cudnn.deterministic = True` in your code: with that plus the RNG seeding, you should be able to get deterministic results.<|||||>Yes go with @Rocketknight1 suggestion. Also check that you set model in eval mode to disable the DropOut modules before evaluating.<|||||>Hi, thank you for you guys reply. I simplify the model to a simple MNIST problem to check if there are similar phenomena. I also posted in pytorch/examples github, but no one replies. I found that it seems that it is PyTorch's problem. Could you please take a look at these posts? [mnist](https://github.com/pytorch/examples/issues/542), in these post, Yes, you are right `torch.backends.cudnn.deterministic = True` could bring some consistency to the result. However, even though you set `torch.backends.cudnn.deterministic = True`, STILL, when you compare evaluate at each epoch and only make final evaluation gives different results, you could see in this post: [minis final evaluation VS evaluation each epoch](https://github.com/pytorch/examples/issues/543) What would you think? Is it indeed pyTorch's problem? I have code attached in the post for your convenience to run. Thank you very much. <|||||>> There is some non-determinism in cuDNN. Try setting `torch.backends.cudnn.deterministic = True` in your code: with that plus the RNG seeding, you should be able to get deterministic results. Hi, thank you for your reply, when I post this issue, I have searched a lot. In the code I have already set the random seeding like this: ``` use_cuda = not args.no_cuda and torch.cuda.is_available() # set seed random.seed(args.seed) np.random.seed(args.seed) torch.manual_seed(args.seed) if use_cuda: torch.cuda.manual_seed_all(args.seed) # if got GPU also set this seed ``` and also put `torch.backends.cudnn.deterministic = True` at the front of the script. But the behavior is still the same. What do you think of this mystic difference? My this post shows the detail and have code attached to have a try: [minis final evaluation VS evaluation each epoch](https://github.com/pytorch/examples/issues/543) <|||||>> Yes go with @Rocketknight1 suggestion. > Also check that you set model in eval mode to disable the DropOut modules before evaluating. Thank you for your reply. I have checked that, in my original code. In train() and test() fuction, I already set model.train() and model.eval() respectively. But the behavior is still the same. What do you think of this mystic difference? My this post shows the detail and have code attached to have a try: [minis final evaluation VS evaluation each epoch](https://github.com/pytorch/examples/issues/543)<|||||>This is really a PyTorch/CUDA issue rather than an issue with this repo, but there is more information here: https://pytorch.org/docs/stable/notes/randomness.html You can also try setting `torch.backends.cudnn.benchmark = False`<|||||>@Rocketknight1 Thank you for your reply. I also tried to add `torch.backends.cudnn.benchmark=False`, the behavior is still the same as above. Yeah, it seems to be a PyTorch problem. <|||||>The #question channel on the PyTorch slack is a good place to get quick answers on these stuff. PyTorch's forum is also great.
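Pulling the suggestions from this thread together, the full seeding setup looks like this (note that, as the PyTorch randomness notes linked above explain, some CUDA operations remain non-deterministic even with these flags):

```python
import random

import numpy as np
import torch

def set_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```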
transformers
474
closed
Fix tsv read error in Windows
The initial version suffers from the error `UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position` when loading the `.tsv` file on **Windows** systems, as indicated in https://github.com/huggingface/pytorch-pretrained-BERT/issues/52. It is solved by adding `encoding='utf-8'` when reading the `.tsv` file.
04-11-2019 19:47:52
04-11-2019 19:47:52
Ok, thanks @jiesutd!
transformers
473
closed
GPT as a Language Model
I am interested in using GPT as a language model to assign a language modeling score (perplexity score) to a sentence. Here is what I am using:
```
import torch
import math
from pytorch_pretrained_bert import OpenAIGPTTokenizer, OpenAIGPTModel, OpenAIGPTLMHeadModel

# Load pre-trained model (weights)
model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')
model.eval()

# Load pre-trained model tokenizer (vocabulary)
tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')

def score(sentence):
    tokenize_input = tokenizer.tokenize(sentence)
    tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
    loss = model(tensor_input, lm_labels=tensor_input)
    return math.exp(loss)

a = ['there is a book on the desk',
     'there is a plane on the desk',
     'there is a book in the desk']
print([score(i) for i in a])
# 21.31652459381952, 61.45907380241148, 26.24923942649312
```
Is this the right way to score a sentence?
04-11-2019 18:02:25
04-11-2019 18:02:25
Looks good to me. It's perplexity so lower is better. You can do a `math.exp(loss.item())` and call your model in a `with torch.no_grad()` context to be a little cleaner.<|||||>Oh no wait, you need to compare to the shifted inputs: `loss=model(tensor_input[:-1], lm_labels=tensor_input[1:])` It's a causal model, it predicts the next token given the previous ones.<|||||>I can see inside `class OpenAIGPTLMHeadModel(OpenAIGPTPreTrainedModel)` that this shifting is happening:
```
if lm_labels is not None:
    # Shift so that tokens < n predict n
    shift_logits = lm_logits[:, :-1].contiguous()
    shift_labels = lm_labels[:, 1:].contiguous()
```
Do I still need to use `loss=model(tensor_input[:-1], lm_labels=tensor_input[1:])`?<|||||>Oh you are right, this has been added now with #404. So the way you are doing it looks fine to me. Shifting the logits inside the model can be a bit dangerous for people who are used to training a causal model the usual way, I'll add a mention in the README.<|||||>Thanks for your quick response. I can see there is a minor bug when I am trying to predict with a sentence which has one word. You can recreate the error by using my above code.
```
sentence = 'Learn'
score(sentence)
```<|||||>Unfortunately, given the way the model is trained (without using a token indicating the beginning of a sentence), I would say it does not make sense to try to get a score for a sentence with only one word.<|||||>How can we use this to get the probability of a particular token? So, for instance, let's say we have the following sentence: "He was going home" and we want to get the probability of "home" given the context "he was going", like in the GLTR tool by Harvard NLP @thomwolf <|||||>@thomwolf If the shifting of the lm_labels matrix isn't necessary (before passing into the model's forward method) because of the internal logit shifting, should the preprocessing code for fine-tuning GPT1 on RocStories be changed? At https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_openai_gpt.py#L86, I believe the continuations are shifted over in lm_labels by one relative to input_ids.<|||||>Oh yes, of course! Do you want to submit a PR on that? Otherwise I'll take care of it later.<|||||>For sure! Will look at it soon.<|||||>@thomwolf Hey, how can I give my own checkpoint files to the model while loading?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Is this score normalized on sentence length? And if not, what do I need to change to normalize it? <|||||>I'm confused about whether the right way to calculate the perplexity for GPT2 is what the OP has done, or as per the documentation https://huggingface.co/transformers/perplexity.html? Or are both equivalent for some value of the stride?
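On the normalization question: since the head's loss is a mean over the predicted tokens, `math.exp(loss)` is already a per-token (length-normalized) perplexity. A small variant that exposes both the normalized score and the total sentence log-probability, under that assumption about the loss reduction:

```python
import math
import torch
from pytorch_pretrained_bert import OpenAIGPTTokenizer, OpenAIGPTLMHeadModel

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')
model.eval()

def sentence_scores(sentence):
    ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentence))
    tensor_input = torch.tensor([ids])
    with torch.no_grad():
        mean_nll = model(tensor_input, lm_labels=tensor_input).item()
    n_predicted = len(ids) - 1                # the internal shift predicts len(ids) - 1 tokens
    return {
        'perplexity': math.exp(mean_nll),     # length-normalized (per predicted token)
        'log_prob': -mean_nll * n_predicted,  # total sentence log-probability
    }

print(sentence_scores('there is a book on the desk'))
```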
transformers
472
closed
Compilation terminated
Hi, I cannot pip install the package. I have
```
regex_3/_regex.c:48:10: fatal error: Python.h: No such file or directory
 #include "Python.h"
          ^~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
```
Might have something to do with the CI failure for the new merge.
04-11-2019 09:58:42
04-11-2019 09:58:42
managed to install it by running > sudo apt-get install python3 python-dev
transformers
471
closed
modeling_openai.py bug report
line 651 has a potential bug ``` # Copy word and positional embeddings from the previous weights self.tokens_embed.weight.data[: self.config.vocab_size, :] = old_embed.weight.data[: self.config.vocab_size, :] self.tokens_embed.weight.data[-self.config.n_positions :, :] = old_embed.weight.data[-self.config.n_positions :, :] ``` isn't it supposed to be `self.positions_embed ` instead of `self.tokens_embed` ?
04-11-2019 07:57:17
04-11-2019 07:57:17
Good catch, I'll fix this.
transformers
470
closed
How to correctly do classification?
When I do classification with the CoLA task, the result is like this:
eval_loss = 0.0
global_step = 49173
loss = 0.0
mcc = 0.0
What is my prediction result? And how should I use the output model?
04-10-2019 18:08:05
04-10-2019 18:08:05
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I'm having a similar issue where I run a BERT tutorial by Chris McCormick and receive 65-78% accuracy on training data and 0.0% for test data. Was anyone able to diagnose a result of 0.0%? One theory I have is a division by zero error for very small numbers that are rounded to zero...
transformers
469
closed
Why can't we just use the cached pytorch model without internet?
Sorry for my impolite words. I very much appreciate the huggingface team for the excellent repo, and my original intention was to provide a suggestion for a great repo to help more people like me. Forgive me for being rude.
04-10-2019 17:08:57
04-10-2019 17:08:57
First, yes you can. You should be able to download the model (whether using a shell command or a Python downloader) into a specified directory and modify the loading lines. That works for me. Second, please appreciate the fantastic work the huggingface team has pulled off. It's not their fault that you don't have stable access, so help yourself.<|||||>Hmm, wording apart, maybe we could relax the internet connection check indeed, I'll have a look.<|||||>Sincere thanks for your reply, you are great contributors. I apologize again for my inappropriate expression, mainly because I couldn't properly solve the problem until midnight. I just wanted to make my own suggestion for people who encounter this like me, so I opened this issue. Actually I am very grateful for your work, and appreciate the open source community. Best regards!<|||||>Ok, the network connection check has been relaxed in the now merged #500. It will be included in the next PyPI release (probably next week). In the meantime you can install from `master`.<|||||>So nice you are. I cannot continue my research without pytorch-pretrained-BERT. Thank you for your incredible devotion. With my sincere apology.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
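For reference, once the files are downloaded manually, `from_pretrained` also accepts local paths, so no connection is needed at load time. The paths below are placeholders for local copies of the S3-hosted vocab file and weights archive.

```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Placeholder local copies of the published files.
local_vocab = '/data/bert/bert-base-uncased-vocab.txt'
local_weights = '/data/bert/bert-base-uncased.tar.gz'

tokenizer = BertTokenizer.from_pretrained(local_vocab)
model = BertModel.from_pretrained(local_weights)
```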
transformers
468
closed
GPT-2 fine-tuning
I wonder if the GPT-2 model has some examples of how to do fine-tuning like GPT. The DoubleHeadsModel interface of GPT-2 looks similar to GPT's, but there's no special token handling for the GPT-2 tokenizer. Is that necessary?
04-10-2019 08:01:01
04-10-2019 08:01:01
What kind of task are you fine-tuning on? If it's something like the ROCStories task, you need the extra tokens. I think people are using BERT for up-stream tasks because bidirectional context gives better results than left-to-right.<|||||>Hi @yaroslavvb, I am mostly focusing on classification tasks (like ROCStories, as you mentioned). I just want to confirm that the special tokens used by GPT and GPT2 should be treated the same (added to the end of the vocabulary and fed in along with the original text). We would like to run both BERT and GPT2, which are the two SOTA models, since they shine in different ways. I will be happy to submit a pull request if this is a valid extension to the code base or if anyone is interested. <|||||>Indeed we should probably add additional embeddings for GPT-2 also. I'll give it a look, should be pretty easy to add.<|||||>I just tried implementing the changes suggested in order to make gpt-2 amenable to being finetuned on the Cloze Story Task from ROCStories. However, my eval accuracy seems to be topping out at 68%. Is this what others are getting?<|||||>I also want to fine-tune gpt-2 for QA and run it on SQuAD. I am new to the field. Should I be following BertForQuestionAnswering and running BERT on SQuAD as a model to do the same for gpt-2? <|||||>Hi @rohuns, I met the same situation: the performance of the pre-trained GPT-2 with an extra task head is poor on both ROC and sts. I don't truly know the reason. My hypothesis is that GPT-2 was not trained in the same "multi-task" fashion as GPT1, therefore adding the special token will destroy the model, or at least it will have a hard time generating a good representation of the sentence for the downstream tasks. I hope someone can get the fine-tuning to work to disprove the above hypothesis. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
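For reference, the GPT-1 special-token pattern from `examples/run_openai_gpt.py` that this discussion refers to looks roughly like the sketch below; at the time of this thread there is no equivalent `special_tokens` / `num_special_tokens` handling for the GPT-2 classes yet.

```python
from pytorch_pretrained_bert import OpenAIGPTTokenizer, OpenAIGPTDoubleHeadsModel

special_tokens = ['_start_', '_delimiter_', '_classify_']
tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt', special_tokens=special_tokens)
special_tokens_ids = [tokenizer.convert_tokens_to_ids(t) for t in special_tokens]

# The extra tokens get embeddings appended at the end of the vocabulary.
model = OpenAIGPTDoubleHeadsModel.from_pretrained('openai-gpt',
                                                  num_special_tokens=len(special_tokens))
```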
transformers
467
closed
Update README.md
Fix for
```
04/09/2019 21:39:38 - INFO - __main__ - device: cuda n_gpu: 1, distributed training: False, 16-bits training: False
Traceback (most recent call last):
  File "/home/ubuntu/pytorch-pretrained-BERT/examples/lm_finetuning/simple_lm_finetuning.py", line 642, in <module>
    main()
  File "/home/ubuntu/pytorch-pretrained-BERT/examples/lm_finetuning/simple_lm_finetuning.py", line 502, in main
    raise ValueError("Training is currently the only implemented execution option. Please set `do_train`.")
ValueError: Training is currently the only implemented execution option. Please set `do_train`.
```
04-09-2019 21:46:03
04-09-2019 21:46:03
Thanks Yaroslav!
transformers
466
closed
Mismatch in pre-processed wikitext-103 corpus and using pre-trained tokenizer for TransfoXLLMHeadModel
In `examples/run_transfo_xl.py`, the pre-processed wikitext-103 corpus is loaded using: `corpus` = TransfoXLCorpus.from_pretrained(args.model_name) ` Example of pre-processed batch converted to tokens: > ['<eos>', '=', 'Homarus', 'gammarus', '=', '<eos>', '<eos>', 'Homarus', 'gammarus', ',', 'known', 'as', 'the', 'European', 'lobster', 'or', 'common', 'lobster', ',', 'is', 'a', 'species', 'of', 'clawed', 'lobster', 'from', 'the', 'eastern', 'Atlantic', 'Ocean', ',', 'Mediterranean', 'Sea', 'and', 'parts', 'of', 'the', 'Black', 'Sea', '.', 'It', 'is', 'closely', 'related', 'to', 'the', 'American', 'lobster', ',', 'H.', 'americanus', '.', 'It', 'may', 'grow', 'to', 'a', 'length', 'of', '60'] Evaluating the TransfoXLLMHeadModel model on this corpus gives a ppl of ~18. However when I use the pre-trained `TransfoXLTokenizer` for wikitext-103 using: `tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')`, there is a mismatch in the tokenizations. Example of using pre-trained tokenizer to tokenize wikitext-103: > ['<eos>', '=', 'Homarus', 'gammarus', '=', '<eos>', '<eos>', 'Homarus', 'gammarus', ',', 'known', 'as', 'the', 'European', 'lobster', 'or', 'common', 'lobster', ',', 'is', 'a', 'species', 'of', 'clawed', 'lobster', 'from', 'the', 'eastern', 'Atlantic', 'Ocean', ',', 'Mediterranean', 'Sea', 'and', 'parts', 'of', 'the', 'Black', 'Sea', '.', 'It', 'is', 'closely', 'related', 'to', 'the', 'American', 'lobster', ',', 'H', '.', 'americanus', '.', 'It', 'may', 'grow', 'to', 'a', 'length', 'of'] Here, `H.` is being split, whereas the pre-processed version has it as a single token. Evaluating the TransfoXLLMHeadModel model on this version of the corpus gives a ppl of ~29. Could you please help me understand why there is a mismatch?
04-09-2019 20:20:15
04-09-2019 20:20:15
You are right, thanks for catching this. This behavior is inherited from the BERT Tokenizer but the TransformerXL Tokenizer should behave differently ([this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_transfo_xl.py#L316), which splits on punctuation, is not present in the original tokenizer of Transformer XL [here](https://github.com/kimiyoung/transformer-xl/blob/master/tf/vocabulary.py#L25-L42)). I'll check there are no other differences, add a test on this and fix this in the next release.<|||||>@thomwolf Hello thomwolf, first of all thank you very much for your library. I'm reproducing Transformer-XL benchmark performance using the huggingface tokenizer, but I think there is still a mismatch. I just want to ask you if it's solved. I would very much appreciate it if you could reply to this :)
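A quick way to see the mismatch reported above, using the example sentence from the issue:

```python
from pytorch_pretrained_bert import TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
print(tokenizer.tokenize("It is closely related to the American lobster , H. americanus ."))
# The pre-processed corpus keeps 'H.' as a single token, while the tokenizer splits it
# into 'H' and '.' because of the extra punctuation splitting discussed above.
```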
transformers
465
closed
Errors when using Apex
Using `pytorch-pretrained-BERT` with `apex` installed breaks with the errors below. I am using it through allennlp, and my environment settings are: ``` ubuntu 16.04 nvidia driver 410.48 4 Titan V gpus python 3.6.8 cuda 9 pytorch 1.0.1.post2 pytorch-pretrained-bert 0.6.1 ``` I also tried python 3.7, cuda 10 and nvidia driver 390 with the same errors. At training time, the crashes from the first batch with the following error: ``` File "/home/beltagy/miniconda3/lib/python3.7/site-packages/allennlp/modules/text_field_embedders/basic_text_field_embedder.py", line 110, in forward token_vectors = embedder(*tensors) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/allennlp/modules/token_embedders/bert_token_embedder.py", line 91, in forward attention_mask=util.combine_initial_dims(input_mask)) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 711, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 263, in forward embeddings = self.dropout(embeddings) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/torch/nn/modules/dropout.py", line 58, in forward return F.dropout(input, self.p, self.training, self.inplace) File "/home/beltagy/miniconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 749, in dropout else _VF.dropout(input, p, training)) RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /opt/conda/conda-bld/pytorch_1549636813070/work/aten/src/THC/THCGeneral.cpp:405 ``` At prediction time, it crashes with the following error after exactly 13 batches (even when I shuffle the data or change the batch size) ``` File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/allennlp/modules/text_field_embedders/basic_text_field_embedder.py", line 110, in forward token_vectors = embedder(*tensors) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/allennlp/modules/token_embedders/bert_token_embedder.py", line 91, in forward attention_mask=util.combine_initial_dims(input_mask)) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 714, in forward output_all_encoded_layers=output_all_encoded_layers) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ 
result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 396, in forward hidden_states = layer_module(hidden_states, attention_mask) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 381, in forward attention_output = self.attention(hidden_states, attention_mask) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 339, in forward self_output = self.self(input_tensor, attention_mask) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 290, in forward mixed_query_layer = self.query(hidden_states) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward return F.linear(input, self.weight, self.bias) File "/home/beltagy/miniconda3/envs/cuda9/lib/python3.6/site-packages/torch/nn/functional.py", line 1354, in linear output = input.matmul(weight.t()) RuntimeError: cublas runtime error : an internal operation failed at /opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCBlas.cu:258 ``` Any thoughts what might be wrong, or how I can debug this ?
04-09-2019 15:17:26
04-09-2019 15:17:26
Can you try to run it with `CUDA_LAUNCH_BLOCKING=1` so we can see which exact CUDA call fails? Also, do you have a simple way for me to try to reproduce this error?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
464
closed
How to get vocab.txt and bert_config.json as output of fine tuning?
Hi, I am fine-tuning BERT on custom data. As output I am getting only pytorch_model.bin, but how do I get an updated vocab.txt and bert_config.json? Please suggest. Thanks Mahesh
04-09-2019 10:03:22
04-09-2019 10:03:22
You can get `pytorch_model.bin` and `config.json` just as indicated in the examples: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L861-L866 The vocabulary stays the same; just load the tokenizer as you did for the training (`BertTokenizer.from_pretrained(...)`).<|||||>Thanks Thomas. I really appreciate it. Our custom text might have some new words which may not be in the original vocab.txt. The original vocab.txt contains some unused placeholders (some 993 out of the total 30522). Please let me know your thoughts on this. Thanks Mahesh<|||||>Then you should go have a look at https://github.com/huggingface/pytorch-pretrained-BERT/issues/463. And I will close this issue in favor of #463 ;-)
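Spelling out the linked saving block: `model` is assumed to be the fine-tuned model from the training run, the output directory is a placeholder, and the config filename should match what your installed version's `from_pretrained` expects (bert_config.json in older releases, config.json later).

```python
import os
import torch

output_dir = 'finetuned_model/'  # placeholder
model_to_save = model.module if hasattr(model, 'module') else model  # unwrap DataParallel

torch.save(model_to_save.state_dict(), os.path.join(output_dir, 'pytorch_model.bin'))
with open(os.path.join(output_dir, 'bert_config.json'), 'w') as f:
    f.write(model_to_save.config.to_json_string())

# The vocabulary itself is unchanged by fine-tuning, so the original vocab.txt
# (or simply BertTokenizer.from_pretrained('bert-base-uncased')) can be reused.
```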
transformers
463
closed
Vocab changes in lm_finetuning in BERT
I want to use lm_finetuning for BERT. A potential issue is vocab_size. Since I'm using Hinglish data (Hindi text written using the English alphabet), there can be new words which are not present in the English vocabulary. According to the BERT docs... > If using your own vocabulary, make sure to change vocab_size in bert_config.json. If you use a larger vocabulary without changing this, you will likely get NaNs when training on GPU or TPU due to unchecked out-of-bounds access. How do I do this?
04-09-2019 00:41:19
04-09-2019 00:41:19
Hi, Also it should produce vocab.txt and bert_config.json along with pytorch_model.bin. How you are getting those?<|||||>We did this for [SciBERT](https://github.com/allenai/scibert), and you might find this discussion useful https://github.com/allenai/scibert/issues/29<|||||>lm_finetuning produce pytorch_model.bin alone (and not bert_config.json) what do you think @Rocketknight1 ?<|||||>``` Model name '../../models/bert/' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed '../../models/bert/vocab.txt' was a path or url but couldn't find any file associated to this path or url. ``` Yet your fine-tuning script does not produce any such file.<|||||>The `lm_finetuning` script assumes you're using one of the existing models in the repo, that you're fine-tuning it for a narrower domain in the same language, and that the saved pytorch_model.bin is basically just updated weights for that model - it doesn't support changes in vocab. Altering the vocab and config would probably require more extensive retraining of the model, possibly from scratch, which this repo isn't supporting yet because of the requirement for TPUs to do it quickly enough. I can contribute code if @thomwolf thinks it's relevant, but I'm not sure if or how we should be supporting this use-case right now. It might have to wait until we add from-scratch training and TPU support.<|||||>Hi @Rocketknight1 Does this mean that the [pytorch_BERT](https://github.com/huggingface/pytorch-pretrained-BERT/) and also the [google_BERT](https://github.com/google-research/bert) implementation do not support *finetuning* with new vocabulary, sentences respectively? I would like to train a german model on a domain-specific text: the amount of german words in the multilingual model is relatively small and so I cannot access hidden states for out-of-vocabulary words even when using synonyms generated using [*FastText*](https://radimrehurek.com/gensim/models/fasttext.html), as also those synonyms are out of vocabulary. Is there any suggestion you can give me to alleviate this problem? I see that [*Issue 405*](https://github.com/huggingface/pytorch-pretrained-BERT/issues/405) has some suggestions together with [*Issue 9*](https://github.com/google-research/bert/issues/9): Can I really achieve my goal by appending my vocabulary to the end of *vocab.txt* and adjusting the *config.json* accordingly? Do I need to use the [google bert model](https://github.com/google-research/bert) and subsequently [convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py) or can I use this repo somehow directly?<|||||>Hi @gro1m Any luck with adding vocab to bert_pytorch?<|||||> > Hi @gro1m > Any luck with adding vocab to bert_pytorch? I'm using https://github.com/kwonmha/bert-vocab-builder to build Vocab. Will share experience. <|||||>Hi, I am trying to use SciBert, the version with it's own vocab. I am wondering how to point to that vocab.txt file, and not the original. Edit Found the answer https://github.com/huggingface/pytorch-transformers/issues/69#issuecomment-443215315 you can just do a direct path to it<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. 
<|||||>@bhoomit To achieve BERT-level results for Hinglish, would fine-tuning the English BERT model with Hinglish data (approx. 200 MB) achieve good results, or would it be better to train the model from scratch for Hinglish?
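A minimal sketch of the vocabulary-extension idea discussed in this thread. It assumes a newer `transformers` release where `add_tokens` / `resize_token_embeddings` are available (the original `pytorch-pretrained-bert` package did not expose these helpers), and the example words are hypothetical:

```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Hypothetical domain-specific words missing from the pretrained vocabulary
new_words = ["yaar", "acha", "bindaas"]
num_added = tokenizer.add_tokens(new_words)

# Grow the embedding matrix so the new ids get (randomly initialised) vectors,
# then continue LM fine-tuning so they pick up useful representations.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens, new vocab size: {len(tokenizer)}")
```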
transformers
462
closed
fix run_gpt2.py
Before this PR, unconditional sample generation fails silently. Fixing the loop reveals a reference before assignment error.
04-09-2019 00:24:04
04-09-2019 00:24:04
Fixes #412 <|||||>Ok, looks good to me, thanks!
transformers
461
closed
Pooler weights not being updated for Multiple Choice models?
I'm trying to use pretrained BERT to fine-tune on a multiple choice dataset. The parameters from `pooler` are excluded from the optimizer params [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L1044-L1047); however, the MultipleChoice model does indeed use `pooled_output` (which passes through the `pooler`) [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_swag.py#L392). I wasn't able to find a similar exclusion of `pooler` params from the optimizer in the official repo. I think I'm missing something here. Thanks for your patience.
04-08-2019 22:29:35
04-08-2019 22:29:35
Indeed this looks like a bug in the `run_swag.py` example. What do you think @rodgzilla? Isn't the exclusion of the pooler parameters from optimization ([line 392 of `run_swag.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_swag.py#L392)) a typo?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@thomwolf - This surely looks like a bug, should I just send a hotfix and close it?<|||||>Yes!<|||||>Fixed in #675.
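For context, a sketch of the usual grouped-parameter optimizer setup without filtering out the pooler. The `model`, learning rate, and `t_total` values are assumptions mirroring the example scripts of the time, and exact key names (e.g. `weight_decay` vs. `weight_decay_rate`) may differ slightly between releases:

```python
from pytorch_pretrained_bert import BertAdam

param_optimizer = list(model.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {"params": [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
     "weight_decay": 0.01},
    {"params": [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
     "weight_decay": 0.0},
]
# All parameters, including the pooler, end up in one of the two groups above.
optimizer = BertAdam(optimizer_grouped_parameters, lr=5e-5, warmup=0.1, t_total=1000)
```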
transformers
460
closed
run_classifier on CoLA fails with illegal memory access
I am trying to run the run_classifier.py script against the CoLA task as a smoke test to make sure I have everything installed correctly. However, when I run `CUDA_LAUNCH_BLOCKING=1 python run_classifier.py --task_name CoLA --do_train --do_eval --do_lower_case --data_dir /workspace/glue/CoLA/ --bert_model bert-base-uncased --max_seq_length 128 --train_batch_size 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/CoLA` I get the following stack trace:
```
THCudaCheck FAIL file=/tmp/pip-req-build-k527kqpu/aten/src/ATen/native/cuda/Embedding.cu line=340 error=700 : an illegal memory access was encountered
Traceback (most recent call last):
  File "run_classifier.py", line 669, in <module>
    main()
  File "run_classifier.py", line 581, in main
    loss.backward()
  File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 106, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: cuda runtime error (700) : an illegal memory access was encountered at /tmp/pip-req-build-k527kqpu/aten/src/ATen/native/cuda/Embedding.cu:340
```
before even a single batch finishes. I'm running on a Tesla P4 out of NVIDIA's official PyTorch container. I installed pytorch-pretrained-BERT using pip. The only thing that I have changed from the example on the doc page is reducing the batch size from 32 to 8 on account of the P4's lower memory. I wasn't able to check if it works in CPU mode due to [this](https://github.com/huggingface/pytorch-pretrained-BERT/issues/150). If there is any other useful information that I can provide, let me know.
04-08-2019 18:01:30
04-08-2019 18:01:30
My guess is that the inputs are either larger than the maximum input size of the model (512) or outside the vocabulary (larger than the vocabulary size). Do you think you can try to check this?<|||||>@ananyahjha93 did the implementation of the additional GLUE tasks. Maybe he has some additional insights on this.<|||||>@prematurelyoptimized I have not been able to re-create this error with a batch size of 8 on CoLA. I will post an update if I find something on this. In the meantime, is it possible for you to check this code on a different GPU? Also, can you post the version of PyTorch you are using? <|||||>My apologies for the delay, I have been out of town. @ananyahjha93 Unfortunately, I only have the one GPU for testing. print(torch.__version__) returns 1.1.0a0+be364ac @thomwolf I didn't change the tokenizer at all, so it shouldn't be an out-of-vocabulary problem and I didn't change either convert_examples_to_features or _truncate_seq_pair, so it shouldn't be a problem with the input being too long. I can, however, dig in and verify those assumptions. Unfortunately, my IT group installed a new NVidia license and now torch reports cuda being unavailable. I should still be able to check the OOV and input overflow problems, but any testing on the GPU I will have to put on hold until we get that issue sorted out.<|||||>Ok, I am back up and running. Neither I nor my IT folks know what was causing the issue with cuda being unavailable (It was an error 35, indicating a newer runtime than the driver, but it was the same runtime and driver that were working before). In any case, upgrading to CUDA 10.1 and driver version 418.40.04 resolved the unavailability issue after rebuilding the container for the new environment. So now I can start addressing the illegal memory access issue again (sadly, it was not resolved with the driver update) I added a validation check before training to ensure that the tokenizer could convert the ids back into tokens and the length of the training samples didn't exceed the maximum length. All the checks passed, so that should rule out OOV and input overflow. I also noticed that the illegal memory access in non-deterministic. 
One example stack is the following:
```
Traceback (most recent call last): | 0/1069 [00:00<?, ?it/s]
  File "run_classifier2.py", line 677, in <module>
    main()
  File "run_classifier2.py", line 580, in main
    loss = model(input_ids, segment_ids, input_mask, label_ids)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 970, in forward
    _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 711, in forward
    embedding_output = self.embeddings(input_ids, token_type_ids)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 261, in forward
    embeddings = words_embeddings + position_embeddings + token_type_embeddings
RuntimeError: CUDA error: an illegal memory access was encountered
```
However, running the same command immediately afterward gives
```
Traceback (most recent call last): | 0/1069 [00:00<?, ?it/s]
  File "run_classifier2.py", line 677, in <module>
    main()
  File "run_classifier2.py", line 580, in main
    loss = model(input_ids, segment_ids, input_mask, label_ids)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 970, in forward
    _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 714, in forward
    output_all_encoded_layers=output_all_encoded_layers)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 396, in forward
    hidden_states = layer_module(hidden_states, attention_mask)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 382, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 354, in forward
    hidden_states = self.dense(hidden_states)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward
    return F.linear(input, self.weight, self.bias)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1401, in linear
    output = input.matmul(weight.t())
RuntimeError: cublas runtime error : the GPU program failed to execute at /tmp/pip-req-build-k527kqpu/aten/src/THC/THCBlas.cu:259
```
And then again,
```
Traceback (most recent call last):
  File "run_classifier2.py", line 677, in <module>
    main()
  File "run_classifier2.py", line 580, in main
    loss = model(input_ids, segment_ids, input_mask, label_ids)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 970, in forward
    _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 714, in forward
    output_all_encoded_layers=output_all_encoded_layers)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 396, in forward
    hidden_states = layer_module(hidden_states, attention_mask)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 381, in forward
    attention_output = self.attention(hidden_states, attention_mask)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 339, in forward
    self_output = self.self(input_tensor, attention_mask)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 305, in forward
    attention_probs = nn.Softmax(dim=-1)(attention_scores)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 877, in forward
    return F.softmax(input, self.dim, _stacklevel=5)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1256, in softmax
    ret = input.softmax(dim)
RuntimeError: cuda runtime error (700) : an illegal memory access was encountered at /tmp/pip-req-build-k527kqpu/aten/src/ATen/native/cuda/SoftMax.cu:545
```
This feels like an OOM error, but watching nvidia-smi, the memory usage peaks at only about 800MiB. This doesn't change whether my batch size is 2, 4, 8, or 32, so it looks like the batch size is not the culprit either. The non-determinism could be caused by the kernel running asynchronously, but that shouldn't happen since I'm running with CUDA_LAUNCH_BLOCKING=1. Looking at the pytorch source at the lines in the stack traces hasn't been illuminating, so are there some recommended debugging steps at this point?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
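A minimal sketch of the kind of sanity check described above, run before training. It assumes `all_input_ids` is the `LongTensor` of shape `(num_examples, max_seq_length)` that the example script builds from the features; the limits shown are those of `bert-base-uncased`:

```python
# Sanity-check the tensorised inputs before sending anything to the GPU.
max_position_embeddings = 512   # maximum sequence length supported by the model
vocab_size = 30522              # bert-base-uncased vocabulary size

assert all_input_ids.size(1) <= max_position_embeddings, "sequence length exceeds the model limit"
assert int(all_input_ids.max()) < vocab_size, "found a token id outside the vocabulary"
assert int(all_input_ids.min()) >= 0, "found a negative token id"
```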
transformers
459
closed
Question about BertForQuestionAnswering model
Hi, I want to train `BertForQuestionAnswering` with my own dataset. I have already taken care of the format of the dataset, however, I encounter some problem when I want to do the inference job. If I am right, the forward output of the model is `start_logits` and `end_logits`. When I use `batchsize=1`, the output shape of both `start_logits` and `end_logits` is `(1, 384)`, with the condition that `max_seq_length=384`. And the content of the `start_logits` looks like this. `tensor([[-146.7531, -124.1240, 190.8818, 47.8019, -188.0909, 47.2568, 47.7160, 190.8021, 48.0589, 190.8891, -88.2698, -90.5676, -86.6798, -85.8938, -83.8673, -86.1693, -149.1830, 190.8824, 48.2365, 47.0692, 48.6216, 47.5917, 190.8883, -88.3360, -141.6174, -188.1515, -188.1540, -53.4689, 46.8519, -188.1528, -53.4642, 47.6210, 44.1266, 190.4762, 47.4085, -53.7023, 46.8320, 46.7578, -188.1541, -53.4671, 47.3207, 47.8246, -188.1536, -53.7048, 46.7347, -188.1540, 45.1644, 47.1797, -53.7898, 47.3539, 190.8893, 46.2768, 44.3791, 47.1278, -188.1544, 43.0797, 46.6862, 45.7719, 190.8861, 46.1606, 47.4132, -188.1537, 36.1658, 47.0910, 48.5226, 46.2841, 46.6541, 47.2510, 46.7009, 47.6877, 47.6154, 44.2239, -187.5396, -53.2560, 47.2695, 46.6610, 48.0983, -169.4403, -187.6610, 45.5523, 42.2900, -188.1411, 190.2228, 46.8949, 190.8703, 45.7234, 46.2740, 47.1369, -188.1532, -54.4368, 47.4453, 47.8158, 190.8736, 48.5765, 46.3732, 47.6971, 190.8648, 47.9480, 190.8680, 190.8193, 47.7762, 190.8790, 47.3178, 47.6414, 190.8697, 47.4176, 190.8666, 46.2278, -188.1528, 44.4261, 47.4981, 44.7491, 47.1290, 47.4565, -53.6217, 47.1531, -188.1532, -53.8363, 190.8584, 190.8332, 47.3138, -181.1877, 47.6136, 190.8501, 189.9186, 47.7481, 190.8734, 46.6971, 47.2033, 47.2970, 46.3130, -188.1545, 46.3114, 46.7054, 46.5795, 190.8862, 46.9812, 47.3662, 47.1590, 45.7121, 47.1540, 46.6390, 47.0824, 46.8635, 47.3509, 47.0953, 46.6325, 190.8554, 46.4527, 45.5026, 25.4061, 190.8870, 46.7979, 47.2999, 45.9724, 47.1344, 46.6824, 190.8658, 47.0369, 46.7866, 190.8427, 47.2813, 46.6452, 47.3353, 46.7846, 190.8449, 47.2189, 45.9533, -188.1514, 47.8233, 190.8823, 47.5980, 47.1329, 48.1479, 47.0462, 45.7567, 37.7330, 47.4664, 190.8853, 47.6274, 47.3734, 190.8493, 45.9364, 46.3813, 47.6914, 46.7994, 47.1847, 46.3399, 47.4068, 47.3856, 190.8252, 46.8805, 190.8525, 48.3069, 46.0178, 47.0109, 47.4094, 47.7603, 190.8745, 190.8543, 190.8878, 190.8569, 190.8248, 190.8730, 190.8667, 190.8854, 190.8804, 47.4608, 190.8710, 190.8739, 190.8552, 190.8687, 190.8698, 190.8622, 190.8847, 190.8673, 190.8881, 48.5061, 190.8691, 190.8752, 190.8887, 190.8773, 190.8719, 190.8861, 190.8807, 190.8661, 190.8884, 47.6262, 47.9787, 190.8649, 190.8735, 190.8645, 190.8862, 190.8828, 190.8752, 190.8871, 190.8723, 47.5396, 190.8626, 190.8553, 190.8613, 190.8476, 190.8685, 190.8499, 190.8742, 190.8689, 190.8845, 190.8497, 47.2941, 190.8490, 190.8656, 190.8815, 190.8670, 190.8566, 190.8673, 190.8823, 190.8686, 190.8822, 190.8694, 190.8720, 190.8715, 190.8867, 48.3389, 190.8861, 47.6601, 190.8593, 190.8729, 48.1891, 190.8771, 48.9684, 190.8796, 190.8530, 190.8772, 190.8604, 190.8855, 190.8657, 190.8313, 190.8068, 47.2093, 45.8342, 190.8693, 47.0122, 190.8611, 47.9197, 47.5900, 190.8699, 47.1812, 47.0453, 47.8088, 47.3929, 190.8647, 47.4646, 190.8573, 47.6600, 190.8750, 49.9670, 48.0235, 190.8890, 47.1344, 46.9663, 47.8146, 47.5412, 47.8123, 47.0196, 190.8707, -188.1540, 45.6518, -53.6818, 47.1032, -188.1543, 47.6412, 190.8764, -53.6656, 47.0526, 46.7920, 47.4910, 46.0659, 190.8813, 47.5103, 44.4868, 
47.5182, 46.9884, 48.5096, 47.3874, 46.8767, 190.8836, 47.0396, 47.2361, 47.2189, 8.5557, 190.7647, 47.0486, 47.3513, 46.9460, 49.2101, 47.4387, 47.0915, 190.8861, 46.8415, 46.8162, -188.1539, -188.1529, 47.7122, -51.9730, 47.0126, 47.7990, 47.2502, 47.2899, 45.7776, 47.0698, 47.1549, 47.5327, 47.0973, 47.0441, -52.2246, 47.0743, 46.0908, -141.6221, 47.2081, 47.1517, 47.2605, 47.1618, 47.2170, 47.2807, 47.2537, 47.2298, 47.3016, 47.3111, 47.2860, 47.2685, 47.3075, 47.2483, 47.2479, 47.2315, 47.1936, 47.2113, 47.2494, 47.2810, 47.2674, 47.3000, 47.3290, 47.3196, 47.2996, 47.3611]], grad_fn=<SqueezeBackward1>)` Is this behaving normally? Shouldn't it a `(1, 1)` tensor?
04-08-2019 15:47:53
04-08-2019 15:47:53
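The `(1, 384)` shape is expected: `BertForQuestionAnswering` produces one start score and one end score per input position, not a single value. A crude decoding sketch is shown below; `tokens` stands for the wordpiece list of the feature (an assumption here), and the real `run_squad.py` example performs a more careful n-best search instead of a plain argmax:

```python
import torch

# start_logits / end_logits: shape (batch_size, max_seq_length)
start_index = int(torch.argmax(start_logits[0]))
end_index = int(torch.argmax(end_logits[0][start_index:])) + start_index  # force end >= start
answer_tokens = tokens[start_index:end_index + 1]
```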
transformers
458
closed
Suggestion: add warning when using BertForSequenceClassification without special [CLS] token
Thank you for the awesome package! As I understand it right now, it is the user's responsibility to add the special `CLS` and `SEP` tokens. People who haven't read the paper might miss this detail. It would be nice to issue a warning in the tokenizer or the model itself if the input is missing these tokens. An alternative is to include it in a more prominent location in the docs. I understand that the package needs to be flexible so multiple architectures and tokenization procedures can be implemented, so I'm unsure where the best place for such a notification would be. In short, my suggestion:
# GIVEN
A text sequence without the `CLS` tag
# WHEN
`BertForSequenceClassification` is called
# THEN
A warning is logged
04-08-2019 11:35:06
04-08-2019 11:35:06
I understand the issue but I'm not sure this would be very easy to implement, as we would like to keep the model and the tokenizer separate from each other. Do you have a solution in mind? Otherwise, I guess people will have to continue to read the paper before using the model... 😉<|||||>I totally understand the need for separation between tokenizer and model. But the specific `BertForSequenceClassification` does make this assumption and expects the input to be in a specific form. Even the docs mention "look at the preprocessing logic". I'd suggest extending the documentation for the model - something like https://github.com/huggingface/pytorch-pretrained-BERT/pull/480
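A minimal sketch of the kind of check being suggested, kept in user code rather than inside the model or tokenizer; the function name and thresholds are illustrative assumptions:

```python
import logging

logger = logging.getLogger(__name__)

def check_special_tokens(tokens):
    """Warn if a token list fed to BertForSequenceClassification is missing [CLS]/[SEP]."""
    if not tokens or tokens[0] != "[CLS]":
        logger.warning("Input does not start with [CLS]; the pooled output may be meaningless.")
    if "[SEP]" not in tokens:
        logger.warning("Input contains no [SEP] token.")

check_special_tokens(["[CLS]", "hello", "world", "[SEP]"])
```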
transformers
457
closed
Load Biobert pre-trained weights into Bert model with Pytorch bert hugging face run_classifier.py code
These are the steps I followed to get BioBERT working with the existing Hugging Face PyTorch BERT code.
1. I downloaded the pre-trained weights 'biobert_pubmed_pmc.tar.gz' from [the Releases page](https://github.com/naver/biobert-pretrained/releases).
2. I ran this command to convert the TF checkpoint to a PyTorch model
```
python pytorch-pretrained-BERT/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py --tf_checkpoint_path="biobert/pubmed_pmc_470k/biobert_model.ckpt.index" --bert_config_file="biobert/pubmed_pmc_470k/bert_config.json" --pytorch_dump_path="biobert/pubmed_pmc_470k/Pytorch/biobert.model"
```
This created a file 'biobert.model' in the specified path.
3. As mentioned in this [link](https://modelzoo.co/model/pytorch-pretrained-bert), I compressed the 'biobert.model' created above and 'biobert/pubmed_pmc_470k/bert_config.json' together into a biobert_model.tar.gz
4. I then ran the run_classifier.py of Hugging Face BERT with the following command, using the tar.gz created above.
```
python pytorch-pretrained-BERT/examples/run_classifier.py --data_dir="Data/" --bert_model="biobert_model.tar.gz" --task_name="qqp" --output_dir="OutputModels/Pretrained/" --do_train --do_eval --do_lower_case
```
I get the error
```
'UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte'
```
in the line
```
tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
```
Am I doing something wrong? I just wanted to run the run_classifier.py code provided by Hugging Face with BioBERT pretrained weights, in the same way that we run BERT with it. Is there a way to do this?
04-08-2019 07:08:40
04-08-2019 07:08:40
Have you tried the solutions discussed in the other issues on this topic: - https://github.com/huggingface/pytorch-pretrained-BERT/issues/312 - https://github.com/huggingface/pytorch-pretrained-BERT/issues/239<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> Have you tried the solutions discussed in the other issues on this topic: > > * #312 > * #239 Hi @thomwolf , I followed the instructions [here](https://github.com/BaderLab/saber/issues/135#issuecomment-494455933) to convert the checkpoint and then placing the files (pytorch_model.bin, bert_config.json, and vocab.txt) in one folder to compress it. I did not need to ignore any weights as mentioned in any solutions you mentioned above ( #312 or #239 ) I copied the compressed folder to the home folder of 'pytorch-transformers'. Then I ran the following command from there itself to run the example code (`'examples/run_glue.py'`) on my data. Then, I am trying to run the example code given by running the following command, `python ./examples/run_glue.py \ --model_type bert \ --model_name_or_path biobert.gz \ --task_name=sts-b \ --do_train \ --do_eval \ --do_lower_case \ --data_dir=$DIR \ --max_seq_length 128 \ --per_gpu_eval_batch_size=8 \ --per_gpu_train_batch_size=8` But, I get the same error as mentioned in the main discussion: `UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte` at this location: `File "./examples/run_glue.py", line 424, in main config = config_class.from_pretrained(args.config_name if args.config_name else args.model_name_or_path, num_labels=num_labels, finetuning_task=args.task_name)` Can you please tell what to change.<|||||>Try to pass the extracted folder of your converted bioBERT model to the `--model_name_or_path` :) Here's a short example: * Download the *BioBERT v1.1 (+ PubMed 1M)* model (or any other model) from the [bioBERT repo](https://github.com/naver/biobert-pretrained) * Extract the downloaded file, e.g. with `tar -xzf biobert_v1.1_pubmed.tar.gz` * Convert the bioBERT model TensorFlow checkpoint to a PyTorch and PyTorch-Transformers compatible one: `pytorch_transformers bert biobert_v1.1_pubmed/model.ckpt-1000000 biobert_v1.1_pubmed/bert_config.json biobert_v1.1_pubmed/pytorch_model.bin` * Move config `mv biobert_v1.1_pubmed/bert_config.json biobert_v1.1_pubmed/config.json` Then pass the folder name to the `--model_name_or_path` argument. You can run this simple script to check, if everything works: ```python from pytorch_transformers import BertModel model = BertModel.from_pretrained('biobert_v1.1_pubmed') ```<|||||>How we load a tensor as a pretrained model in bert<|||||>@stefan-it As per new `transformers-cli` third command would change as follows: ```bat transformers-cli convert --model_type bert \ --tf_checkpoint biobert_v1.1_pubmed/model.ckpt-1000000 \ --config biobert_v1.1_pubmed/bert_config.json \ --pytorch_dump_output biobert_v1.1_pubmed/pytorch_model.bin ```<|||||>Hello! 
Just to complement the @stefan-it instructions in step number 3, it works for me the following code: `import os` `from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch` `path_bin = 'my_directory/pytorch_model.bin' ` `path_bert = 'my_bert_directory/' ` `if (not os.path.exists(path_bin)): ` ` convert_tf_checkpoint_to_pytorch( ` ` path_bert + "biobert_model.ckpt", ` ` path_bert + "bert_config.json", ` ` path_bert + "pytorch_model.bin" ` ` )` My folder (biobert_v1.0_pmc) was originally 5 files: 3 Tensorflow checkpoint files A vocab file A config file <|||||>> * Move config `mv biobert_v1.1_pubmed/bert_config.json biobert_v1.1_pubmed/config.json` Tnank you very much! It did help me! <|||||>Thank you every one, this works fine now!<|||||>Dear all, Thank you very much for the suggestions on how to prepare the model to be used with hugginfaces. I am trying to use huggingfaces BertTokenizer to perform NER on biomedical data with the pre-trained weights. It seems to work fine until then point when I want to map annotated tokens to entity labels. I have token ids and prediction ids, but I cannot figure out how/where to get label_list to follow an example of mapping from https://huggingface.co/transformers/usage.html#named-entity-recognition Thank you very much for any help you can provide! Maria<|||||>Dear all, I am a newbie and I don not have so much experience. Does anyone have a full tutorial or code for a regression task, Please share with me! I would greatly appreciate it! Thank you.<|||||>@stefan-it @nipunsadvilkar Thank you for your solutions.
transformers
456
closed
max_seq_length for squad
The example script for SQuAD sets `--max_seq_length` to 384 by default. However, it seems that many paragraphs in SQuAD exceed this length. What happens if the answer to some question lies in the truncated part of the paragraph?
04-08-2019 06:08:01
04-08-2019 06:08:01
Then you're doomed for this answer. There is a possibility to do a sliding-window approach, but we didn't implement it in the examples of pytorch-pretrained-bert. Check this issue (and the linked TensorFlow issue) for a discussion on this: https://github.com/huggingface/pytorch-pretrained-BERT/issues/89
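For illustration, one possible sliding-window chunking, roughly in the spirit of the doc_stride logic discussed in the linked issue. The default numbers (384 minus 3 special tokens, stride 128) are assumptions taken from the example script's defaults, not a definitive implementation:

```python
def sliding_windows(doc_tokens, max_tokens_for_doc=384 - 3, doc_stride=128):
    """Yield (start, length) spans that cover a long document with overlapping chunks."""
    spans = []
    start = 0
    while start < len(doc_tokens):
        length = min(max_tokens_for_doc, len(doc_tokens) - start)
        spans.append((start, length))
        if start + length == len(doc_tokens):
            break
        start += min(length, doc_stride)
    return spans

print(sliding_windows(list(range(1000))))  # spans overlap so no token is lost
```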
transformers
455
closed
LM fine tuning on top of a custom model
Currently the `finetune_on_pregenerated.py` script allows only fine-tuning on top of one of the five pretrained bert models. However I don't understand why there is such a restriction. I am trying to finetune a LM on top of a custom bert model (mt-dnn). Of course I can just remove the `choices`, but I am wondering if there is some rationale behind this, as none of the other example scripts contains this restriction. https://github.com/huggingface/pytorch-pretrained-BERT/blob/94980b529fad34f55c5d34c94bae2814db6773a6/examples/lm_finetuning/finetune_on_pregenerated.py#L126-L128
04-07-2019 07:58:04
04-07-2019 07:58:04
Not really, I guess we could remove this restriction and just let people point to an arbitrary folder (as we do in the other scripts). What do you think @Rocketknight1?<|||||>No, that's entirely my bad. We should allow an arbitrary folder!<|||||>Should I just remove the `choices` argument and allow any string there?<|||||>Yes, I would go for that!<|||||>Actually, if you have push access to the repo, do you want to just make that one-line change? I'd have to fork and submit a PR, which seems a bit unnecessary.<|||||>Indeed, I'll take care of it. I'm on the repo now anyway.<|||||>is this change applicable and working with run_lm_finetuning.py in 2.x?
transformers
454
closed
getting sequence embeddings for pair of sentences
Given `[CLS] word1 word2 word3 [SEP] word1 word2 [SEP]`, if I get sequence embeddings, will word1 of sentence 1 have the context of word1 of sentence 2?
04-05-2019 13:28:40
04-05-2019 13:28:40
Hi, yes.
transformers
453
closed
What does 'op-for-op' mean?
04-05-2019 09:07:31
04-05-2019 09:07:31
Hi, it means that the computation graphs of the Tensorflow and PyTorch versions are identical.
transformers
452
closed
Pregenerating data requires multiple documents
The script for pregenerating language modelling data assumes that the training corpus consists of multiple documents (i.e. a single training corpus file where empty lines separate documents). If the training corpus is made up of only one long text, the pregen script produces empty output. I have a small fix for this locally, which enables pregenerating from a single document. I could create a PR if the empty output is not seen as expected behaviour. (I can also mention that the **Installation** section of the README states that the repo has been tested with python 3.5+. However, the pregen script uses f-strings, which are a python 3.6 feature.)
04-05-2019 08:14:02
04-05-2019 08:14:02
Hi, I wrote that script! That's a tricky issue, though - what behaviour do you expect when your data is one long text? In the original BERT repo, they used the document breaks to control sampling for the NextSentence task - 'random' next sentences were selected from a different document. I'm not sure there's a "general" solution to the case when all the data is a single document, though - people whose data is all one contiguous document will probably have different expectations for how they want that data to be handled (e.g. is it okay to select a 'random' sentence 10 sentences away, or should it be much further? Are 'nearby' sentences too similar?) I suppose we could add code to support that with a controllable parameter to stop it sampling random sentences that are too close to the actual sentence, but it might be easier just to throw an error if the data is only one single document and request that the user break it into 'documents' based on whatever structure is in the data, like chapter or page breaks, or in the worst case just add a break every 1,000 lines or something. You're totally right about the f-strings, though. I'll look into updating the README!<|||||>I'm assuming that the script should just grab any random sentence in the same (only) document. I could see how that particular behavior might not be desired however. The error-throwing idea is good. I ran the script on my mammoth document, only to see empty output after 9 hours (my own fault obviously, but an exception might help similarly sloppy people).<|||||>Added PR #478 to address this. Unfortunately, it probably still won't resolve your 9 hour problem - the script will only realize there was only one document in the corpus after it's finished reading and tokenizing the whole thing! Still, at least people will get an error message properly explaining the issue and suggesting fixes now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
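A minimal sketch of the "insert a document break every N lines" workaround suggested above; the file names and the 1,000-line default are assumptions:

```python
def add_document_breaks(in_path, out_path, lines_per_doc=1000):
    """Copy a single-document corpus, inserting an empty line (a document boundary
    for the pregeneration script) every `lines_per_doc` lines."""
    with open(in_path, encoding="utf-8") as fin, open(out_path, "w", encoding="utf-8") as fout:
        for i, line in enumerate(fin, start=1):
            fout.write(line)
            if i % lines_per_doc == 0:
                fout.write("\n")

add_document_breaks("corpus.txt", "corpus_with_breaks.txt")
```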
transformers
451
closed
Help: cannot load pretrain models from .pytorch_pretrained_bert folder
I need to run the package on a machine without internet. Copied over the ".pytorch_pretrained_bert" folder from one machine to another. Installed anaconda3 and tried to run `tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')`. Got this error: `Model name 'transfo-xl-wt103' was not found in model name list (transfo-xl-wt103). We assumed 'transfo-xl-wt103' was a path or url but couldn't find files https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-vocab.bin at this path or url.` Do I need to copy anything else to the second machine to make it load from the cache folder? Ubuntu 16.04, pytorch 1.0
04-04-2019 19:11:46
04-04-2019 19:11:46
It is likely that the script is not able to find the vocabulary file. So you should download the vocab first and copy it over. Then, when you load the tokenizer, you need to specify the path to the vocab. So if your vocab is in "/tmp/transformer_xl/" you do:
```python
tokenizer = TransfoXLTokenizer.from_pretrained('/tmp/transformer_xl')
```<|||||>This works, just not what I expected. I copied over everything in .pytorch_pretrained_bert and thought it would load without parameters. Now I have a bunch of files named like this, and I have to figure out which model each belongs to. "12642ff7d0279757d8356bfd86a729d9697018a0c93ad042de1d0d2cc17fd57b.e9704971f27275ec067a00a67e6a5f0b05b4306b3f714a96e9f763d8fb612671"<|||||>I will add a section in the readme detailing how to load a model from drive. Basically, you can just download the models and vocabulary from our S3 following the links at the top of each file (`modeling_transfo_xl.py` and `tokenization_transfo_xl.py` for Transformer-XL) and put them in one directory with the filename also indicated at the top of each file. Here is the process in your case:
```bash
mkdir model
cd model
wget -O pytorch_model.bin https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-pytorch_model.bin
wget -O config.json https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-config.json
wget -O vocab.bin https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-vocab.bin
# optional, only if you run the evaluation script run_transfo_xl.py which uses the pre-processed wt103 corpus:
# wget -O corpus.bin https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-corpus.bin
```
Now just load the model and tokenizer by pointing to this directory:
```python
tokenizer = TransfoXLTokenizer.from_pretrained('./model/')
model = TransfoXLModel.from_pretrained('./model/')
```
I specified the proxy information like this: ``` proxyDict = { "http" : "http://<proxy-user>:<proxy-password>@<proxy-domain>", "https" : "http://<proxy-user>:<proxy-password>@<proxy-domain>"} ``` with our company-specific settings for proxy-user, proxy-password and proxy-domain. Then I executed the following code: ```python import requests url_google = "https://tfhub.dev/google/bert_multi_cased_L-12_H-768_A-12/1" url_amazons3 = "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased.tar.gz" response_1 = requests.get(url_google, allow_redirects=True, verify=False, proxies = proxyDict) response_2 = requests.get(url_amazons3, allow_redirects=True, verify=False, proxies = proxyDict) print("response status code for google address: {}".format(response_1.status_code)) print("response status code for amazon s3 address: {}".format(response_2.status_code)) ``` and this is what I got: ``` response status code for google address: 200 response status code for amazon s3 address: 403 ``` So unfortunately, it does not seem to work out for me. I might use the convert function you provided, but it would be nicer to be able to load the model directly from the S3.<|||||>Is it only for this model (`bert-base-multilingual-cased`) or are you blocked from accessing all the pretrained models and tokenizers?<|||||>I am blocked from accessing all the pretrained models. I tested it by looping through the values of PRETRAINED_MODEL_ARCHIVE_MAP dictionary and all requests return the status code 403.<|||||>I haven't tried it yet but maybe torch hub could help (#506) Can you try to update to PyTorch 1.1.0 (to get torch.hub) and test this: ```python import torch tokenizer = torch.hub.load('ailzhang/pytorch-pretrained-BERT:hubconf', 'bertTokenizer', 'bert-base-cased', do_basic_tokenize=False, force_reload=False) ```<|||||>Well, it does not seem to work. I had to add ```python import urllib proxy_support = urllib.request.ProxyHandler({ "http" : "http://<proxy-user>:<proxy-password>@<proxy-domain>", "https" : "http://<proxy-user>:<proxy-password>@<proxy-domain>"}) opener = urllib.request.build_opener(proxy_support) urllib.request.install_opener(opener) ``` into ~/my-virtual-env/lib/site-packages/python3.6/torch/hub.py with my-virtual-env being my pip virtual environment. Then executing the command you suggested prints the following to the console: ``` Downloading: "https://github.com/ailzhang/pytorch-pretrained-BERT/archive/hubconf.zip" to /home/U118693/.cache/torch/hub/hubconf.zip The pre-trained model you are loading is a cased model but you have not set `do_lower_case` to False. We are setting `do_lower_case=False` for you but you may want to check this behavior. Model name 'bert-base-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt' was a path or url but couldn't find any file associated to this path or url. ``` There is now a folder ailzhang_pytorch-pretrained-BERT_hubconf in the /home/U118693/.cache/torch/hub/ directory, but there still seems to be issues in finding that bert-cased-vocab.txt file.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
450
closed
Understanding pre-training and fine-tuning
I am confused about what these two steps actually do to the model. I would have assumed that pre-training is unsupervised (i.e. no labels) and, thus, the only thing that can be 'learned' is the embedding representations of all tokens. You can then use this pre-trained model (which is an 'empty' model but with pretrained vector representations of tokens) to actually train the whole model. However, from the things that I have read, that is not the case. It's not only the embeddings that are pretrained, but also the whole model. Is that right? How can that be, without labels? For context: I want to train a BERT model on a new language and I am trying to figure out which steps I have to take to get there. Can I load custom embeddings, and do I then still need to pretrain the rest of the model (but how, and with which labels?). I know that my question is quite general, but the difference between pre-training and fine-tuning is not clear to me. Pre-training should be 'done once per language', but then what does the model actually learn? Is it simply a seq-to-seq pre-training?
04-04-2019 13:55:47
04-04-2019 13:55:47
Maybe a good introduction on the topic are the writings of @sebastianruder: - http://ruder.io/transfer-learning/ - https://thegradient.pub/nlp-imagenet/ - the ULMFiT paper: http://nlp.fast.ai/classification/2018/05/15/introducting-ulmfit.html Regarding your specific question of training a Bert model on a new language, the computing requirements for training BERT are very high. You would probably be better with a version of ULMFiT. There is a huge multilingual initiative on the fast.ai forum that you can check out. One entry point is [here](https://forums.fast.ai/t/language-model-zoo-gorilla/14623).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
449
closed
Convert_tf_checkpoint_to_pytorch for bert-joint-baseline
Hello, I know that the formats of SQuAD and Google NQ are different, but is there a way to convert the bert_joint model for Natural Questions (https://github.com/google-research/language/tree/master/language/question_answering/bert_joint) to PyTorch? I get this error: 'BertForPreTraining' object has no attribute 'answer_type_output_bias'
04-04-2019 10:05:52
04-04-2019 10:05:52
I also encountered a similar problem AttributeError: 'BertForPreTraining' object has no attribute 'crf_loss' @thomwolf Looking forward to your reply<|||||>Hi, from my reading of the [Natural Questions model](https://github.com/google-research/language/blob/master/language/question_answering/bert_joint/run_nq.py#L879-L932) it doesn't seems to be directly possible to load this model in the current library (with simple hacks). You will need to define a new sub-class of `BertModel` (e.g. `BertForNaturalQA`) that reproduce the architecture of the TensorFlow model I pointed to. If you use the same name as the TensorFlow variables for the attributes of your PyTorch model you should be able to load the model with the current loading script. I don't have time to do this right now but if you want to start opening a PR, I can review it. Basically, just add another class after the `BertForQuestionAnswering` class in [`modeling.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L1130)<|||||>Thanks a lot for the reply. Will try this out. Best, <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi @raheja, do you have any updates related to this? I'm trying to do the same and would welcome any hint! <|||||>Hi @paulachocron Haven't been able to try or implement this yet. I am still using the tf version for now.
transformers
448
closed
pretrain for chinese dataset
Hi, I want to pretrain my model on a Chinese dataset. Can I use my own vocab.txt? And what is the format for vocab.txt? Thanks a lot
04-04-2019 08:26:30
04-04-2019 08:26:30
Hi, you should probably turn to the TensorFlow version for pre-training. This package is mostly intended to be used for fine-tuning pre-trained models. Another option for BERT-like pre-training is to use Facebook's [XLM](https://github.com/facebookresearch/XLM)
transformers
447
closed
Dynamic max_seq_length implementation?
Does the package support dynamic `max_seq_length`, e.g., if it's None, it will automatically be the maximum length in the mini-batch?
04-03-2019 23:07:41
04-03-2019 23:07:41
Hi @zijwang , the package doesn't implement any specific batching logic, only tokenizers and models. You are supposed to take care of this yourself in your scripts.
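A minimal sketch of how this could be handled in user code, with a collate function that pads each mini-batch to its own longest example. The dataset item format (a list of token ids plus a label) and the use of id 0 as padding are assumptions:

```python
import torch

def collate_dynamic_padding(batch):
    """Pad each mini-batch to the length of its longest example.
    `batch` is a list of (token_ids, label) pairs with token_ids as plain Python lists."""
    max_len = max(len(ids) for ids, _ in batch)
    input_ids, attention_mask, labels = [], [], []
    for ids, label in batch:
        pad = max_len - len(ids)
        input_ids.append(ids + [0] * pad)                # 0 = [PAD] id in the BERT vocabularies
        attention_mask.append([1] * len(ids) + [0] * pad)
        labels.append(label)
    return (torch.tensor(input_ids, dtype=torch.long),
            torch.tensor(attention_mask, dtype=torch.long),
            torch.tensor(labels, dtype=torch.long))

# loader = DataLoader(dataset, batch_size=32, collate_fn=collate_dynamic_padding)
```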
transformers
446
closed
How to select a certain layer as token's representation?
My understanding from the paper is that each token is represented by a 768-dim vector from the last hidden layer. Is that correct? If so, how can I get the second-to-the-last layer's output as the token representation? `model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(tag2idx))` and `list(model.named_parameters())` gives all the weights, but how can I know which parameters are from the second-to-the-last layer? For example, if I have a word 'test', how can I get the BERT features from the second to the last hidden layer for the word 'test' in a given sentence?
04-03-2019 20:57:32
04-03-2019 20:57:32
Hi @yexing99, What you need is the `hidden_state`, not the `weights` of the model. Don't use `model.named_parameters()`; just use the output of the model. Here is an example: https://github.com/huggingface/pytorch-pretrained-BERT#bert And more details here: https://github.com/huggingface/pytorch-pretrained-BERT#1-bertmodel You should do something like this to get the BERT features from the second to the last hidden layer if the word `test` is for example the *fifth* token in your sentence (index 4 in the sequence): ```python
encoded_layers, _ = model(tokens_tensor, segments_tensors)
# each encoded_layer has shape (batch_size, seq_len, hidden_size)
features = [encoded_layer[0, 4] for encoded_layer in encoded_layers[1:]]
```
transformers
445
closed
Learning rate schedules improvement + extension
re: [PR#389](https://github.com/huggingface/pytorch-pretrained-BERT/pull/389) - refactored learning rate schedules into objects - added `WarmupCosineWithHardRestartsSchedule` for cosine schedule with hard restarts - added `WarmupCosineWithWarmupRestartsSchedule` for cosine schedule with restarts where each restart uses the same warmup slope
04-03-2019 15:36:29
04-03-2019 15:36:29
Sorry for the delay in reviewing this. This is a great PR and it looks good to me. Thanks for adding some tests also. I agree with the (mostly cosmetic) comments from @marpaia. Do you think you can fix them and then we can merge?<|||||>Fixed @marpaia 's comments.<|||||>Awesome @lukovnikov, I think it looks great! Thanks for making this already awesome library even more awesome 💯 <|||||>Thanks @lukovnikov!
transformers
444
closed
Is a CRF needed when doing NER?
Is a CRF needed when doing NER? In BertForTokenClassification, just a Linear layer is used to predict the tags. If a CRF is not needed, why not?
04-03-2019 08:49:24
04-03-2019 08:49:24
A CRF gives better NER F1 scores in some cases, but not necessarily in all cases. In the BERT paper, no CRF is used and hence also no CRF in this repository. I'd presume the BERT authors tested both with and without a CRF and found that a CRF layer gives no improvement, since using a CRF is kind of the default setting nowadays.<|||||>Issue #64 is a good reference for discussion on NER.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This repository has shown how to add a CRF layer on top of transformers to get better performance on the token classification task. https://github.com/shushanxingzhe/transformers_ner<|||||>> github.com/shushanxingzhe/transformers_ner your code does not work
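A sketch of what a CRF head over BERT token logits could look like, assuming the third-party `pytorch-crf` package (`pip install pytorch-crf`); the class and its constructor arguments are illustrative, and the `.bool()` mask conversion assumes a recent PyTorch version:

```python
import torch.nn as nn
from torchcrf import CRF
from pytorch_pretrained_bert.modeling import BertPreTrainedModel, BertModel

class BertCrfForTokenClassification(BertPreTrainedModel):
    def __init__(self, config, num_labels):
        super(BertCrfForTokenClassification, self).__init__(config)
        self.bert = BertModel(config)
        self.classifier = nn.Linear(config.hidden_size, num_labels)  # emission scores
        self.crf = CRF(num_labels, batch_first=True)
        self.apply(self.init_bert_weights)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
        sequence_output, _ = self.bert(input_ids, token_type_ids, attention_mask,
                                       output_all_encoded_layers=False)
        emissions = self.classifier(sequence_output)
        mask = attention_mask.bool() if attention_mask is not None else None
        if labels is not None:
            return -self.crf(emissions, labels, mask=mask)  # negative log-likelihood loss
        return self.crf.decode(emissions, mask=mask)        # best tag sequence per example
```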
transformers
443
closed
How do you train custom corpus with bert?
I am using a domain-specific dataset for text classification, but many of my data points are mapped to the [UNK] token by BERT. Can I please get help on how to keep my custom corpus tokens?
04-03-2019 05:24:05
04-03-2019 05:24:05
Maybe it's a problem of not setting `do_lower_case=False` and using a `cased` model like in #436. Which pretrained model are you using?<|||||>@thomwolf sorry, but this is completely not what I am asking for. Suppose I have medical data and the names of the medicines and diseases are not in the BERT pre-trained corpus. So BERT gives me the [UNK] token for important medical terms. How can I train it on my custom data? I am using the lower-cased multilingual pre-trained model. **PS: Medical data was just an example.** <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi, I'm also facing a similar issue; please let me know if you have found any solution @shadylpstan <|||||>> Hi, I'm also facing a similar issue; please let me know if you have found any solution @shadylpstan I was successful in modifying a couple of BERT files and training it using my corpus. It was really a mess doing so, though, because by default BERT has a classification model with a certain number of classes. The accuracy wasn't good enough. So I have no idea whether it is worth giving it a shot from your end.
transformers
442
closed
Unable to incrementally train BERT with custom training
I have trained BERT with a small custom training set. However, I am unable to train the same model on QQP first and then on my custom training data. Any discussion will be appreciated.
04-02-2019 20:07:32
04-02-2019 20:07:32
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
441
closed
Fix bug in run_squad.py
I filed an issue in Google's repo: https://github.com/google-research/bert/issues/540#issue-428344784 After testing, I found it is indeed a bug. We should not put these chunks with `start_position==0` and `end_position==0` into the training set. Thanks for the code review.
04-02-2019 18:10:29
04-02-2019 18:10:29
Hi @MottoX, sorry for the delay on reviewing this. It seems to make sense to me. What kind of testing are you referring to? Could you share a bit more about them?<|||||>Hi, @thomwolf I was trying to train a BERT-based model on NewsQA, a document-level QA dataset, using `run_squad.py`. I used 384 for max_seq_length and 128 for doc_stride. The model got an extremely bad result. I found there were numerous chunks with start_position and end_position as zeros. Then I checked the implementation of `run_squad.py` as well as `BertForQuestionAnswering`, which is basically BERT + one fully connected layer. I suspected that such out-of-span chunks are harmful for model to converge. So I modified the script to drop all these out-of-span chunks during training and got an acceptable result. I am sorry that I do not remember the exact numbers before and after doing this. I think the reason why the problem is not identified before is that context length (token-level) in SQuAD is not that long, which means the model can still easily converge anyway, with or without out-of-span chunks during training as they are really rare. However, when we have numerous out-of-span chunks in the training set, there will be a problem for model to converge.<|||||>I looked at the code again, and found it is quite confusing, since chunks with `start_position==0` and `end_position==0` are also used for the cases where we have examples that do not have answer (for version_2_with_negative). So the current fix seems not compatible with this. Now I am not exactly sure whether we need to train the model with those out-of-span chunks. However, from my last experience, there should be a problem when we have too many such out-of-span chunks in the training set and the model is biased to predicting the result on the first token (CLS), despite the fact that `write_predictions` filters out those invalid predictions.
transformers
440
closed
How can i use bert for finding word embeddings
Hi all, Can I use pre-trained BERT for obtaining fixed-size word embeddings, like 300D GloVe or word2vec word embeddings?
04-02-2019 16:18:34
04-02-2019 16:18:34
I extract features like the examples in extract_features.py. But when I used these features (the last encoded_layers) as word embeddings, I got a worse result than using 300D GloVe. I also used these features to compute the cosine similarity for each word in the sentences, and I found that all values were around 0.6<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi. I really need someone's help regarding using the BERT model to extract word embeddings: this script, https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/extract_features.py, doesn't exist anymore. Any suggestions for what replaced this script?
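A minimal sketch of extracting contextual token vectors directly with `BertModel`, without the removed example script. The "average the last four layers" recipe is just one common choice, and the example sentence is hypothetical:

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

text = "[CLS] the bank raised interest rates [SEP]"
tokens = tokenizer.tokenize(text)
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    encoded_layers, _ = model(input_ids)   # list of 12 tensors, each (1, seq_len, 768)

# Average the last four layers to get one 768-dim vector per token.
token_vectors = torch.stack(encoded_layers[-4:]).mean(0)[0]   # (seq_len, 768)
```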
transformers
439
closed
DistributedDataParallel Not Working
Hi, I've been stuck on this for days, so I decided to make an issue. When I run DistributedDataParallel with the PyTorch launch module, I see that one machine will start training without waiting for the other one to start; this is different from what happens if I run it without the launch module. Without the launch module, I am also letting one process have access to multiple GPUs, rather than just one. I'm following your implementation of DistributedDataParallel in your Medium article: https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255
04-02-2019 16:18:27
04-02-2019 16:18:27
Hi @moinnadeem, What is the hardware you are using and what is the exact command you are using to run DistributedDataParallel with the PyTorch launch module? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
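For reference, a minimal sketch of the per-process setup used with the launch module, in the spirit of the Medium post referenced above. Argument parsing is reduced to the essentials and `model` stands for an already-built BERT model (an assumption); the launch module is expected to be started on each node with something like `python -m torch.distributed.launch --nproc_per_node=4 train.py`:

```python
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # filled in by the launch module
args = parser.parse_args()

# One process per GPU: pin this process to its GPU, then join the process group.
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl")

model = model.cuda(args.local_rank)
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[args.local_rank], output_device=args.local_rank)
```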
transformers
438
closed
convert_tf_checkpoint_to_pytorch 'BertPreTrainingHeads' object has no attribute 'squad'
Trying to convert BERT checkpoints to PyTorch checkpoints. It worked for the default uncased bert_model.ckpt. However, after we did a custom training with the TensorFlow version and then tried to convert the TF checkpoints to PyTorch, it gives the error: 'BertPreTrainingHeads' object has no attribute 'squad'. When we added the following prints:
```python
        elif l[0] == 'output_bias' or l[0] == 'beta':
            pointer = getattr(pointer, 'bias')
        elif l[0] == 'output_weights':
            pointer = getattr(pointer, 'weight')
        else:
            print("--> ", str(l))        ############### printed this
            print("==> ", str(pointer))  ################# printed this
            pointer = getattr(pointer, l[0])
```
the output was:
```
-->  ['squad']
==>  BertPreTrainingHeads(
  (predictions): BertLMPredictionHead(
    (transform): BertPredictionHeadTransform(
      (dense): Linear(in_features=768, out_features=768, bias=True)
      (LayerNorm): BertLayerNorm()
    )
    (decoder): Linear(in_features=768, out_features=30522, bias=False)
  )
  (seq_relationship): Linear(in_features=768, out_features=2, bias=True)
)
```
- Can you please tell us what is happening? Does TensorFlow add something during fine-tuning? We are not sure where the 'squad' name in the TensorFlow ckpt file comes from.
- And, what needs to be done to fix this?
- Are you planning to fix this and release updated code?
04-02-2019 12:31:25
04-02-2019 12:31:25
Hi @SandeepBhutani, Can you point me to the script you use for finetuning in Tensorflow?<|||||>Hi @thomwolf : Thanks for reply. Fine Tuning is done by mentioning do_train=True on run_squad.py (From google bert release github page: [https://github.com/google-research/bert](https://github.com/google-research/bert)) Internally, it calls `estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)` Finetuning file was also same train-v1.1.json.. [https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json) Sample header of train file is : `{"data": [{"title": "University_of_Notre_Dame", "paragraphs": [{"context": "Architecturally` Following observation in case it is useful: While converting checkpoint of origional uncased bert_model.ckpt following log is printed: ``` Loading TF weight bert/encoder/layer_9/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_9/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_9/output/dense/kernel with shape [3072, 768] Loading TF weight bert/pooler/dense/bias with shape [768] Loading TF weight bert/pooler/dense/kernel with shape [768, 768] Loading TF weight cls/predictions/output_bias with shape [30522] Loading TF weight cls/predictions/transform/LayerNorm/beta with shape [768] Loading TF weight cls/predictions/transform/LayerNorm/gamma with shape [768] Loading TF weight cls/predictions/transform/dense/bias with shape [768] Loading TF weight cls/predictions/transform/dense/kernel with shape [768, 768] Loading TF weight cls/seq_relationship/output_bias with shape [2] Loading TF weight cls/seq_relationship/output_weights with shape [2, 768] Building PyTorch model from configuration: { ``` While converting checkpoint after finetuning is done (model.ckpt-9000) following log is printed: ``` Loading TF weight bert/encoder/layer_9/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_9/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/pooler/dense/bias with shape [768] Loading TF weight bert/pooler/dense/kernel with shape [768, 768] Loading TF weight cls/squad/output_bias with shape [2] Loading TF weight cls/squad/output_bias/adam_m with shape [2] Loading TF weight cls/squad/output_bias/adam_v with shape [2] Loading TF weight cls/squad/output_weights with shape [2, 768] Loading TF weight cls/squad/output_weights/adam_m with shape [2, 768] Loading TF weight cls/squad/output_weights/adam_v with shape [2, 768] Loading TF weight global_step with shape [] Building PyTorch model from configuration: { ``` _cls/predictions_ is gone and _cls/squad_ appeared<|||||>After reading the code of both tensorflow and pytorch version, figured out that tensorflow version is referring squad in create_model, like below (**cls/squad/output_weights**): ``` def create_model(bert_config, is_training, input_ids, input_mask, segment_ids, use_one_hot_embeddings): """Creates a classification model.""" model = modeling.BertModel( config=bert_config, is_training=is_training, input_ids=input_ids, input_mask=input_mask, token_type_ids=segment_ids, use_one_hot_embeddings=use_one_hot_embeddings) final_hidden = model.get_sequence_output() final_hidden_shape = modeling.get_shape_list(final_hidden, expected_rank=3) batch_size = final_hidden_shape[0] seq_length = final_hidden_shape[1] hidden_size = final_hidden_shape[2] output_weights = tf.get_variable( "cls/squad/output_weights", [2, hidden_size], 
initializer=tf.truncated_normal_initializer(stddev=0.02)) output_bias = tf.get_variable( "cls/squad/output_bias", [2], initializer=tf.zeros_initializer()) ``` Any suggestion, what should be tweaked? And where (create_model in tensorflow version should be changed or convert_tf_checkpoint_to_pytorch in pytorch version should be changed?) Looks like the definition of pytorch model (BertForPreTraining mentioned in conversion script) is different from tensorflow version, when fine tuned. That is why cls -> squad -> output_bias is not found. Is my understanding correct? If yes, is correct class already available which we can refer while conversion?<|||||>Hi @thomwolf , To make the conversion work, in modeling.py of pytorch version, I have added the class and 1 line of code in BertPreTrainingHeads below. After this conversion is happening. But I am not sure if I have done correct thing (_being a beginner in both tf and pytorch_). Would you like to validate/correct please. ``` class SandeepSquadClass(nn.Module): ########this class sandeep added def __init__(self, config, bert_model_embedding_weights): super(SandeepSquadClass, self).__init__() self.weight = Variable(torch.ones(2, config.hidden_size), requires_grad=True) self.bias = Variable(torch.ones(2), requires_grad=True) def forward(self): print("What to do?") class BertPreTrainingHeads(nn.Module): def __init__(self, config, bert_model_embedding_weights): super(BertPreTrainingHeads, self).__init__() self.predictions = BertLMPredictionHead(config, bert_model_embedding_weights) #sandeep code below 3 apr self.squad = SandeepSquadClass(config, bert_model_embedding_weights) ###this line sandeep added self.seq_relationship = nn.Linear(config.hidden_size, 2) ```<|||||>Hi @SandeepBhutani, I pushed a commit to master which should help you do this kind of thing. First, switch to master by cloning the repo and then follow the following instructions: The `convert_tf_checkpoint_to_pytorch` conversion script is made to create `BertForPretraining` model which is not your use case but you can load another type of model by reproducing the behavior of this script as follows: ```python from pytorch_pretrained_bert import BertConfig, BertForTokenClassification, load_tf_weights_in_bert # Initialise a configuration according to your model config = BertConfig.from_pretrained('bert-XXX-XXX') # You will need to load a BertForTokenClassification model model = BertForTokenClassification(config) # Load weights from tf checkpoint load_tf_weights_in_bert(model, tf_checkpoint_path) # Save pytorch-model print("Save PyTorch model to {}".format(pytorch_dump_path)) torch.save(model.state_dict(), pytorch_dump_path) ```<|||||>Thanks @thomwolf ... After following change the checkpoint generated smoothly. ``` #model = BertForPreTraining(config) ##commented this model = BertForTokenClassification(config, 2) ## Added this ``` Let us give a try to run prediction using new .bin file. Hope the results would be same as using tensorflow version with .ckpt file. 
Appreciate 👍 <|||||>I downloaded tensorflow checkpoints for domain specific bert model and extracted the zip file into the folder pretrained_bert which contains the following the three files - model.ckpt.data-00000-of-00001 - model.ckpt.index - model.ckpt.meta I used the following code to convert tensorflow checkpoints to pytorch ``` import torch from pytorch_transformers.modeling_bert import BertConfig, BertForPreTraining, load_tf_weights_in_bert tf_checkpoint_path="pretrained_bert/model.ckpt" bert_config_file = "bert-base-cased-config.json" pytorch_dump_path="pytorch_bert" config = BertConfig.from_json_file(bert_config_file) print("Building PyTorch model from configuration: {}".format(str(config))) model = BertForPreTraining(config) # Load weights from tf checkpoint load_tf_weights_in_bert(model, config, tf_checkpoint_path) # Save pytorch-model print("Save PyTorch model to {}".format(pytorch_dump_path)) torch.save(model.state_dict(), pytorch_dump_path) ``` I got this error when I ran the above code **NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for pretrained_bert/model.ckpt** Any help is really appreciated............<|||||>Seems like the script cannot find your checkpoint. Try giving it the full absolute path to the file.<|||||>@thomwolf Thanks, I didn't get any error when I gave absolute path of the file. <|||||>I was trying to convert my fine tuned model to pytorch using the following command. ` tf_checkpoint_path='models/model.ckpt-21' bert_config_file='PRETRAINED_MODELS/uncased_L-12_H-768_A-12/bert_config.json' pytorch_dump_path='pytorch_models/pytorch_model.bin' python convert_bert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=$tf_checkpoint_path --bert_config_file=$bert_config_file --pytorch_dump_path=$pytorch_dump_path ` The issue that I face is given below. Any help would be appreciated. Traceback (most recent call last): File "convert_bert_original_tf_checkpoint_to_pytorch.py", line 65, in args.pytorch_dump_path) File "convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_bert(model, config, tf_checkpoint_path) File "/home/cibin/virtual_envs/pytorch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 98, in load_tf_weights_in_bert pointer = getattr(pointer, 'classifier') File "/home/cibin/virtual_envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 585, in getattr type(self).name, name)) AttributeError: 'BertPreTrainingHeads' object has no attribute 'classifier'<|||||>**A possible solution if you're copying a SQuAD-fine-tuned Bert from TF to PT** Issue: `AttributeError: 'BertPreTrainingHeads' object has no attribute 'classifier'` It works for me by doing the following steps: Step 1. In the script `convert_tf_checkpoint_to_pytorch.py` (or `convert_bert_original_tf_checkpoint_to_pytorch.py`): - Replace all `BertForPreTraining `with `BertForQuestionAnswering`. Step 2. Open the source code file `modeling_bert.py` in your package `site-packages\transformers`: - In the function `load_tf_weights_in_bert`, replace `elif l[0] == 'squad':` `pointer = getattr(pointer, 'classifier')` with `elif l[0] == 'squad':` `pointer = getattr(pointer, 'qa_outputs')` It should work since `qa_outputs` is the attribute name for the output layer of `BertForQuestionAnswering` instead of `classifier`. Step 3. 
After copying, check your pytorch model by evaluating the `dev-v2.0.json` with a script like this: `python run_squad.py --model_type bert --model_name_or_path MODEL_PATH --do_eval --train_file None --predict_file dev-v2.0.json --max_seq_length 384 --doc_stride 128 --output_dir ./output/ --version_2_with_negative` where `output_dir` should contain a copy of the pytorch model. This will result in an evaluation like this: `{ "exact": 72.99755748336563, "f1": 76.24686988414918, "total": 11873, "HasAns_exact": 72.82388663967612, "HasAns_f1": 79.33182964482165, "HasAns_total": 5928, "NoAns_exact": 73.17073170731707, "NoAns_f1": 73.17073170731707, "NoAns_total": 5945, "best_exact": 74.3619978101575, "best_exact_thresh": -3.6369030475616455, "best_f1": 77.12234803941384, "best_f1_thresh": -3.6369030475616455 }` for a `BERT-Base` model. However, if using `BertForTokenClassification` instead, the model will not be correctly copied since the structures for the classification layer are different. I tried this and got a model that had a f1 score of 10%.<|||||>AttributeError: 'BertForTokenClassification' object has no attribute 'predict' How do I use BERT trained model for prediction?<|||||>@rashibudati, please take a look at the docs, namely the [Usage](https://huggingface.co/transformers/usage.html#named-entity-recognition) section which shows how to use token classification models.<|||||>@Hya-cinthus Thank you so much! This saved me a lot of headache!
transformers
437
closed
Fix links in README
Fixed two broken links, i.e., _**convert_tf_checkpoint_to_pytorch.py**_ and _**run_squad.py**_.
04-02-2019 09:24:16
04-02-2019 09:24:16
transformers
436
closed
BertTokenizer.from_pretrained('bert-base-multilingual-cased') does not recognize Korean
I'm using the pre-trained multilingual tokenizer to tokenize some Korean texts, but it seems like this tokenizer is unable to recognize any Korean text at all. For example, when running on all the Universal Dependency Korean treebanks, this tokenizer fails to tokenize (it produces '[UNK]') the following characters. I know some of them are not Korean, but most of them are. More confusingly, when I check the `vocab.txt` file for the [BERT-Base, Multilingual Cased](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) in the original repo (https://github.com/google-research/bert), it shows that Korean characters (I manually checked for '렬') are in the vocabulary. This makes me wonder whether there is any bug in the tokenizer provided in this PyTorch repo. I'm considering switching back to the tokenizer in the original code base. Will this create any compatibility issues? {'燄', '렬', '툇', '촌', '내', '윙', '쌈', '꿇', '톨', '葚', '솟', '힌', '래', '銎', '凊', '컸', '톤', '🙏', '졍', '의', '슭', '옷', '챠', '囒', '뷘', '멤', '싱', '츠', '령', '겸', '댄', '휙', '醗', '돕', '톳', '던', '페', '띔', '짚', '락', '앙', '왜', '핼', '컫', '쿵', '왠', '기', '냇', '칼', '런', '험', '드', '궐', '얄', '찜', '렝', '빈', '후', '싫', '雎', '덕', '틀', '딴', '굶', '뜸', 'ㅑ', '🌹', '손', '펴', '쏠', '튀', '화', '슘', '대', '왕', '잣', '딨', '듈', '뎀', '鑛', '칭', '젝', '뉴', '✋', '쑤', '작', '묘', '飮', '새', '쑥', '嘈', '캐', '笳', '랩', '💖', '히', '맵', '냈', '츰', '꿨', '딤', '밑', '낼', '訇', '뀌', '뻑', '혔', '겝', '얹', '랍', '😘', '옅', '징', '권', '욱', '탭', '🙈', '흩', '벳', '밈', '굴', '연', '엑', '綉', '숯', '무', '셋', '벌', '텃', '클', '쫑', '덟', '쳬', '멎', '칵', '팅', '뜨', '땄', '㎡', '끽', '뿍', '담', '펭', '쉰', '있', '쏘', '탤', '濞', '치', '옵', '잃', '언', '빽', '짠', '넙', '깆', '콤', '러', '픽', '푸', '회', '낍', '스', '쩨', '진', '풋', '횡', '탑', '솝', '딕', '영', '볐', '씹', '뒷', '퀼', '렸', '별', '옇', '얘', '謔', 'Ↄ', '팎', '긋', '뽐', '현', '켠', '룻', '소', '몇', '쨌', '벨', '쩐', '붐', '트', '떴', '😊', '줌', '꽤', '밭', '쓴', '매', '쳤', '뱃', '콰', '먕', '봤', '😜', '베', '첩', '년', '짐', '뻤', '괭', '함', '鍝', '도', '폭', '꾸', '뜩', '괞', '닦', '몹', '👶', '팰', '콕', '떻', '킬', '강', '찡', '궂', '꿉', '빴', '즈', '데', '껑', '엘', '약', '천', '씻', '💚', '㈜', '롯', '털', '야', '혈', '펀', '👑', '😁', '뺐', '繽', '닙', '쪽', '행', '넉', '휩', '퀀', '쇠', '셑', '댑', '얽', '질', '뻬', '🌀', '덥', '骯', '뀔', '룩', '팀', '愙', '괜', '몽', '냐', '씬', '서', '또', '썬', '절', '갱', 'ㅊ', '왼', '펠', '놈', '쯩', '름', '토', '받', '璣', '咧', '츄', '깃', '헬', '잇', '덜', '곡', '것', '겹', '지', '檣', '억', '셀', '픔', '륙', '돈', 'ㅜ', '믈', '멈', '빗', '을', '둘', '탱', '얇', '식', '💙', '틱', '똑', '줄', '궤', '잼', '착', '廝', '枸', '볏', '샴', '疙', '飈', '💟', '댁', '날', '킵', '동', '룸', '😱', '헐', '부', '웬', '램', '윗', '숍', '齲', '빼', '빡', '혐', '뮌', '너', '란', '땠', '개', '이', '끔', '麩', '표', '훈', '간', '바', '옆', '틸', '탄', '살', 'ʈ', '꽂', '벙', '뱅', '닭', '멀', '줍', '✌', '짊', '葯', '쨍', '쥔', '않', '술', '육', '햇', '텁', '뵈', '쿄', '😏', '웅', '막', '낮', '브', '펼', '색', '갰', '께', '음', '쐬', '숱', '빛', '텝', '젠', '준', '키', '왓', '팝', '순', '떳', '번', '팡', '핫', '팬', '畹', '깍', '슷', '겋', '위', '꺽', '껀', '낸', '숴', '잠', '휴', '訐', '貮', '티', '늪', '깔', '涮', '촐', '좀', '쭐', '턴', '같', '먼', '꾹', '쎌', '충', '땅', '희', '敉', '캠', '넋', '맑', '갇', '凈', '를', '쌌', '잿', '똥', '핵', '올', '윈', '접', '뮤', '귤', '갖', '촘', '웍', '속', '댕', '砒', '謇', '郫', '낄', '죽', '가', '깁', '뷸', '는', '쏜', '뉜', '릉', '쓰', '텼', '환', '皰', '띠', '즙', '자', '톱', '봄', '갠', '쉽', '항', '답', '넣', '옐', '머', '빔', '쬐', '듄', '썰', '로', '칩', '마', '뽂', '숀', '침', '찍', '캘', '노', '뼈', '😦', '놔', '특', '곁', '챙', '밥', '형', '팠', '섬', '焗', '닉', '관', '쉐', '잡', '슐', '뗏', '획', '추', '욘', '꾼', '좌', '쫀', '나', '킹', '뼛', '은', '랴', '깻', '눈', '욕', '하', '🌓', '찢', '젊', '궁', 
'총', '놋', '둬', '뮬', '詝', '낭', '얌', '셨', '첫', '柢', '셉', '🎂', '닥', '결', '겪', '😕', '⅛', '뾰', '뢰', '륨', '좇', '꼐', '옌', '둔', '💜', '금', '평', '면', '癤', '암', '허', '뱉', '銠', '꿔', '✔', '멘', '겟', '싯', '씁', '림', '볕', '넨', '휠', '☝', '仚', '쟈', '덤', '벅', '즘', '숙', '철', 'ㄹ', '재', '낚', '쉬', '롬', '휘', '렀', '荑', '득', '옮', '렙', '신', '랏', '헉', '푼', '났', '퐁', '쌥', '볶', '밝', '샀', '뽑', '춘', '앓', '곧', '찮', '믿', '늙', '녘', '쌤', '예', '밍', '요', '엣', '딱', '못', '뺨', '렉', '모', '쟁', '퍽', '鬪', '팜', 'ɟ', '틈', '체', '벚', '뛴', '훅', '윤', '짤', '샐', '궈', '댐', '룡', '덧', '깊', '賬', '택', '넬', '흰', '髖', '쁠', '투', '만', '창', '묽', '썼', '🍰', '셩', '僊', '수', '豢', '癥', '삼', '춰', '렁', '퀴', '슛', '쌉', '😪', '묶', '정', '탓', '얏', '💏', '버', '쯔', '밉', '챈', '♩', '리', '떨', '료', '했', '잘', '뤄', '붕', '쩌', '탁', '혀', '繙', '껴', '박', '᾽', '뜀', '홉', '팔', '컨', '饃', '釩', '웰', '꼿', '薺', '縕', '깜', '케', '세', '鎰', '褂', '슌', '벵', '꼈', '카', '甍', '檨', '寗', '빵', '찔', '엊', '오', '통', '✨', '꽝', '니', '첨', '맺', '랑', '헙', '믄', '더', '쳐', '🏼', '옳', '쁨', '鈇', '취', '玕', '럽', '푹', '눠', '제', '디', '벡', '메', '걔', '訄', '滹', '眈', '☎', '물', '갓', '없', '😅', '송', '뭇', '멸', '빅', '늉', '난', '휼', '흙', '깥', '옥', '쫙', '례', '폰', '맏', '되', '훨', '헝', '짧', '쪼', '▶', '끓', '톡', '흥', '홈', '및', '콘', '빠', '등', '르', '깝', '밟', '펑', '끼', '잭', '와', '넷', '율', '찼', '酃', '균', '뿜', '악', '튈', '률', '😋', '걱', '과', '퀄', '튜', '끈', '늑', '렐', '쫄', '멋', '―', '탈', '憮', '잉', '醺', '粦', '맣', '큘', '曧', '쿼', '파', '둡', '탕', '抔', '솥', '척', '고', '🎵', '숭', '람', '벗', '잤', 'ௌ', '옻', '叨', '듀', '늘', '염', '늦', '돌', '껏', '귄', '🎄', '봉', '움', '뭄', '낯', '템', '멍', '랜', '💪', '헌', '샹', '압', '몰', '㎢', '큼', '윽', '🌒', '퀵', '덩', '잊', '眯', '피', '학', '넴', '갬', '녹', '출', '꼴', '🌑', '퍈', '쎄', '굳', '외', '붓', '犂', '슴', '늬', '글', '룬', '탠', '듣', '엿', '칸', '뿌', '갸', '냄', '깡', '괴', '든', '불', '툰', '甪', '箬', '빳', '한', '晳', '킷', '쭉', '둑', '꿍', '쭈', '맙', '鷄', '뜯', '폄', '酩', '퓨', '조', '롤', '프', '뻔', '교', '쿨', '펜', '닯', '🍃', '크', '뀐', '紈', '집', '땡', '놓', '훔', '즌', '헨', '녀', '켤', '따', '🍫', '포', '땐', '랙', '콩', '합', '농', '😍', '전', '엥', '텄', '💬', '긍', '😌', '뮐', '슝', '꽃', '븐', '🌼', '岦', '빤', '찐', '☺', '둥', '랬', '뚤', '깽', '뺏', '목', '슨', '웨', '굵', '칠', '뜻', '려', '산', '끕', '커', '튿', '단', '쌀', '햄', '牾', '켰', '륜', '밤', '골', '익', '흉', '뷔', '랄', '뱀', '숨', '돔', '붙', '찧', '둠', '갔', '럿', '져', 'ʂ', '할', '존', '챌', '늠', '픈', '뷰', '댈', '갑', '컹', '쪄', '맜', '騏', '칡', '餮', '앎', '누', '굿', '幪', '멕', '백', '곤', '싸', '꼬', '헷', '펄', '럴', '饕', '뚫', '芘', 'ᆢ', '젯', '릴', '북', '💛', '룹', '핀', '옛', '병', '낟', '품', '짱', '겼', '쉴', '앳', '닳', '쾌', '힐', '경', '짓', '델', '酊', '턱', '끌', '횃', '호', '씀', '켯', '‒', '컬', '밋', '인', '향', '렷', '蚺', '몸', '❄', '놀', '탯', '쏭', '왈', '점', '팟', '😡', '써', '념', '뒬', '혹', '폐', '격', '앨', '본', '듭', '엷', '헴', '루', '콜', '묏', '😒', '큰', '샛', '았', '돗', '퓌', '장', '깎', '훼', '엾', '맞', '윌', '윷', '졔', '꿰', '쩍', '양', '킨', '👏', '줬', '왔', '볍', '엌', '샤', '£', '삶', '힘', '꽁', '길', '급', '럭', '엮', '瘩', '땀', '구', '쏟', '몫', '쫌', '✓', '으', '딛', '룰', '놉', '력', '齬', '뒀', '플', '꿩', '확', '졌', 'ㅇ', '㏊', 'ɖ', '넝', '빌', '여', '믹', '삭', '뽕', '뽈', '楂', '킴', '었', '캄', '사', '웠', '청', '뀨', '셈', '말', '넜', '📚', '맬', '삐', '紜', '챔', '🙊', '논', '쥬', '딧', '쁘', '윔', '들', '뛸', '춧', '좋', '풍', '퓰', '뙤', '흽', '튼', '뮈', '끗', '공', '짖', '볼', '웃', '블', '랫', '뚜', '굉', '묵', '꿀', '톰', '객', '떤', '낙', '찾', '코', '셸', '녔', '원', '넥', '뗀', '펫', '셧', '袥', '羱', '심', '즉', 'ㅎ', '앤', '뜬', '냥', '塍', '쿠', '캔', '숲', '핏', '뭐', '댓', '먹', '얻', '엠', '됨', '띄', '뗄', '봇', '듐', '칫', '두', '롱', '능', '측', '崞', '융', '켓', '딩', '효', '㎝', '거', '샘', '변', '립', '맹', '괄', '쿰', '팸', '판', '앞', 'ㄷ', '밖', '켜', '밌', '💋', '맸', '👪', '임', '습', 'Ⓣ', '팽', '뻥', '릭', '높', 
'성', '짭', '첼', '덫', '슬', '迢', '월', '견', '미', '값', '터', '촉', '근', '옴', '김', '끝', '㎞', '센', '쏴', '呔', '엇', '떡', '뿔', '넛', '승', '섰', '곳', '직', '텅', '륭', '법', '잖', '😲', '겠', '곽', '용', '죠', '닿', '뭍', '🌔', '생', '액', 'ㅁ', '탬', '뉘', '흡', '썽', '쟝', '♬', '낫', '폼', '층', '곬', '😔', '칙', '홀', '얕', '낡', '퇴', '臿', '춤', '꿋', '알', '잎', '략', '맡', '👼', '엔', '뫼', '우', '편', '팥', '였', '蛰', '괘', '鵪', '틴', '뇨', '녕', 'ৌ', '규', '튤', '흘', '👍', '췄', '딪', '郪', '유', '썹', '럼', '워', '선', '그', '씩', '셜', '떠', '캥', '갚', '뭡', '찰', '겁', '샌', '😳', '좁', '뇌', '끊', '계', '렘', '협', '밸', '멜', '안', '배', '솜', '귐', '챗', '셔', '꼼', '앗', '친', '꼽', '삿', '시', '각', '잽', '뎠', '울', '필', '😀', '찬', '쭙', '띤', '솔', '타', '꽹', '축', '밧', '騁', '텨', '완', '봐', '종', '僴', '箚', '발', '딸', '뗐', '깅', '摑', '십', '껄', '큽', '🐼', '욤', '앉', '섯', '놨', '낳', '춥', '굼', '鬢', '독', '첸', '젼', '때', '堈', '錛', 'ɳ', '됩', '테', '죕', '잴', '보', '녁', '뺀', '쯤', '업', '렴', '릿', '懊', '잔', '숫', '웹', '뭉', '컴', '싶', '훌', '흑', '툴', '귓', '닻', '최', '❤', '빚', '겅', '뒹', '벽', '주', '쨋', '읽', '넌', '닮', '싼', '톈', '겨', '쐰', '냅', '듬', '째', '혼', '빻', '꺾', '므', '뚝', '꼭', '琺', '긁', '앵', '웸', '삽', '쥘', '납', '걷', '짰', '넓', '쓸', '걀', '헹', '쾰', '맘', '돋', '콸', '뛰', '걸', '분', 'ㅐ', '챘', '설', '砝', '갯', '룽', '온', '켄', '섭', '쫓', '쌩', '틋', '😂', '열', '퍼', 'ㅅ', '뒤', '채', '맨', '牘', '샵', '헤', '봅', '곯', '잌', '처', '굽', '극', '촨', '렌', '뎅', '폈', '응', '됐', '비', '차', '랭', '닐', '玷', '증', '섞', '컵', '량', '멩', '瓠', '졸', '녜', '빙', '딘', '썩', '눌', '잰', '뤼', '흠', '梃', '겐', '쑹', '남', '坮', '쩡', '묻', '롭', '췌', '겜', '국', '씨', '씽', '즐', '땟', '많', '덴', '氚', '쇼', '韆', '맴', '깬', '횟', '꺼', '렵', '얼', '반', '낀', '감', '榫', '줘', '죄', '얗', '린', '슈', 'ㅍ', '석', '님', '겉', '방', '벤', '☀', '팍', '꽉', '튕', '뻗', '쩔', '힙', '갈', '역', '듯', '邗', '컥', '牴', '돼', '셍', '붉', '흄', '₴', '💎', '젖', '달', '황', '홋', '눅', '퀘', '躓', '낱', '찌', '군', '촬', '😽', '느', '뻘', '눔', '갤', '헛', '입', '혁', '🌕', '엎', '젓', '컷', '몬', '록', '냑', '아', '🐩', '嚨', '애', '릅', '짝', '裋', '짜', '큐', '씌', '덮', '얀', '뜰', '론', '쌍', '앱', '쩝', '눕', '팩', '찹', '혜', '며', '텔', '뮨', '네', '싣', '샷', '곱', '😉', '빨', '헥', '딜', '옹', '흔', '킥', '태', '뿐', '에', '꾀', '껍', '겔', '끄', '퉁', '렇', '㎍', '닌', '샅', '싹', '📌', '륵', '縈', '흐', '짙', '뀜', '명', '핸', '💕', '뻐', '패', '짬', '범', '류', '쿤', '‐', '🌸', '뎌', '될', '뚱', '셰', '쁜', '벼', '밀', '꿈', 'ㅋ', '다', '상', '라', '엽', '鞦', '舀', '닫', '쫘', '운', 'ㄴ', '어', '츌', '긴', '젤', '맥', '캉', '풀', '댔', '읍', '氘', '련', '훗', '쌓', '해', '곰', '뭘', '망', '저', '깨', '건', '당', '뽀', '섹', '활', '샨', '랗', '밴', '릎', '캡', '嫘', '널', '핍', '嘮', '앴', '酆', '🍷', '중', '詼', '책', '엉', '된', 'ㅓ', '켈', '툼', '까', '실', '젭', '랐', '봣', '일', '닷', '초', '콧', '쥐', '렛', '櫾', '핑', '漶', '썸', '광', '넘', '떼', '탐', '곶', '쩜', '적', '쇄', '폴', '맛', '링', '레', '복', '띈', '늄', '텍', '뭔', '닝', '愨', '콥', '꿎', '참', '족', '른', '훑', '읊', '엄', '😃', '섀', '게', '펙', '삯', '맷', '烺', '검', '귀', '텐', '껌', '멧', '잦', '갗', '민', '찻', '긔', '뽁', '겊', '갉', '😭', '션', '츨', '릇', '눴', '깐', '껭', '탔', '닛', '문', '홍', '섣', '낌', '냉', '쉼', '撘', '띨', '🌈', '숟'}
04-01-2019 22:47:06
04-01-2019 22:47:06
Hi @icewing1996, This comes from the fact that your tokenizer has `do_lower_case=True` but you load an uncased model. Try loading the tokenizer like this `tok = BertTokenizer.from_pretrained('bert-base-multilingual-uncased', do_lower_case=False)`. This is actually a common issue and I see Jacob has added [a test](https://github.com/google-research/bert/blob/master/tokenization.py#L28) in the google repo to check the coherence between the lower casing option and the model checkpoint name. I will add a similar test.<|||||>I used the Korean wiki corpus with the "do_lower_case = True" option to create pretrain data. Most of the data is generated as [UNK] token as followings: INFO:tensorflow:next_sentence_labels: 1 INFO:tensorflow:*** Example *** INFO:tensorflow:tokens: [CLS] gr ##ace [UNK] [MASK] [UNK] [UNK] [UNK] 2 ᄀ ##ᅢ [UNK] [UNK] [UNK] [UNK] gr ##ace - 1 ᄀ ##ᅪ gr ##ace - 2 ᄀ ##ᅡ ( [UNK] e ##ss ##p - 2 a , e ##ss ##p - 2 [MASK] 유전적 [UNK] 220 km [UNK] ᄀ ##ᅥ ##ᄅ ##ᅵ [UNK] [UNK] ᄆ ##ᅧ [UNK] [UNK] [UNK] ᄆ ##ᅧ [UNK] , [UNK] , [UNK] [UNK] [UNK] ##댜 [MASK] [MASK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] . [UNK] [UNK] [UNK] , n ##as ##a , j ##pl , d ##l ##r [UNK] [UNK] [UNK] [MASK] [MASK] [UNK] [MASK] [MASK] [MASK] [MASK] [MASK] [UNK] j ##pl [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] . 2002 [UNK] 3 [UNK] [UNK] [UNK] ᄀ ##ᅵ ##ᄌ ##ᅵ [UNK] [UNK] [UNK] [UNK] [UNK] 2017 [UNK] 10 [UNK] [MASK] ᄀ ##ᅡ [UNK] [UNK] [UNK] . [UNK] [UNK] [UNK] [UNK] [UNK] 5 [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] 15 [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] ᄇ ##ᅩ ##ᄋ ##ᅵ ##ᄌ ##ᅥ [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] . [SEP] [UNK] [MASK] [UNK] 1956 [UNK] [UNK] [UNK] [MASK] [UNK] [UNK] [UNK] [UNK] [UNK] [MASK] ᄀ ##ᅪ [UNK] [UNK] [UNK] [UNK] [UNK] ᄆ ##ᅩ ##ᄃ ##ᅮ 11 [UNK] [UNK] [UNK] [UNK] [MASK] ##졌 [MASK] [MASK] ᄀ ##ᅩ [UNK] [UNK] . [UNK] [UNK] [UNK] 3 [UNK] · [UNK] [MASK] [MASK] ᄀ ##ᅲ ##ᄆ ##ᅩ ᄅ ##ᅩ [UNK] [UNK] [UNK] [UNK] ᄇ ##ᅩ [UNK] [UNK] [UNK] [UNK] ( 人 ) [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] . ᄀ ##ᅳ [MASK] [MASK] ᄀ ##ᅪ [UNK] , [MASK] [MASK] [MASK] [UNK] , [UNK] , ##ᅢ [UNK] [UNK] 착취 [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [MASK] [UNK] . [UNK] ( 任 忠 [UNK] , ? - ? ) [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [MASK] [UNK] [UNK] , [UNK] 토지 [UNK] . [UNK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [UNK] [UNK] [UNK] [MASK] [MASK] [UNK] . [UNK] [MASK] [MASK] [UNK] [UNK] ) [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [MASK] [MASK] [UNK] [UNK] [UNK] [UNK] [UNK] , [UNK] , [MASK] [MASK] [UNK] [UNK] [UNK] [UNK] ᄀ ##ᅩ , [UNK] [UNK] [UNK] [UNK] ( [UNK] 1 [MASK] ) [UNK] [UNK] [UNK] [MASK] [UNK] [UNK] [UNK] [UNK] [UNK] [MASK] [UNK] [UNK] . [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] 크라이 경성 ##렘 [MASK] [UNK] [UNK] [UNK] [UNK] 12 [UNK] ᄇ ##ᅮ ##ᄐ ##ᅥ [UNK] [UNK] [UNK] ##ᅡ ##ᄐ ##ᅡ ##ᄂ ##ᅡ [UNK] [UNK] [UNK] [UNK] . [UNK] 1914 [UNK] [UNK] [UNK] ᄀ ##ᅪ [MASK] [MASK] 올리 [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [MASK] [MASK] [MASK] [MASK] [UNK] 1 [UNK] ᄇ ##ᅮ ##ᄐ ##ᅥ [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [MASK] [UNK] . [MASK] [MASK] [UNK] ᄀ ##ᅩ ##ᄌ ##ᅵ ##ᄃ ##ᅩ [UNK] [UNK] [MASK] ᄇ ##ᅡ , [UNK] [UNK] [UNK] [UNK] ᄇ ##ᅩ ##ᄃ ##ᅡ [UNK] [UNK] [UNK] [UNK] ᄅ ##ᅩ ᄂ ##ᅡ ##ᄐ ##ᅡ ##ᄂ ##ᅡ [UNK] [UNK] [MASK] . [MASK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] . [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] [SEP]<|||||>> I'm using the pre-trained multilingual tokenizer to tokenize some Korean texts, but it seems like this tokenizer is unable to recognize any Korean text at all. 
Could you resolve your problem? I have the same problem with 'bert-base-multilingual-cased' on Korean corpus.
transformers
435
closed
Fixes to the TensorFlow conversion tool
This PR contains a small fix to the script which converts TensorFlow weights to PyTorch weights. The related issues are #50, #306, etc. Thanks for all of the open source code you've been putting out in this domain, it has been incredibly helpful to me and my team.
04-01-2019 19:20:50
04-01-2019 19:20:50
Yes! Thanks @marpaia!
transformers
434
closed
Model not training at all in Google Colab
Hi! Thanks for your help. I am initiating training in the following way in a Colab notebook with GPU acceleration (with very small train batch size and max seq length to prove I'm not getting out of memory problems!): !pip install pytorch-pretrained-bert !rm -rf bert_output !mkdir bert_output !python ./pytorch-pretrained-BERT/examples/run_squad.py --bert_model bert-base-multilingual-cased --output_dir bert_output --train_file squad_20_train.json --predict_file squad_20_dev.json --do_train --do_predict --train_batch_size 64 --max_seq_length 64 --version_2_with_negative --fp16 I have cloned the pytorch-pretrained-bert repository and installed the library. I have also installed apex. However, training doesn't even get started - the last thing the verbose logging produces is: 04/01/2019 14:36:45 - INFO - __main__ - tokens: [CLS] When did Bey ##once take a hi ##atus in her career and take control of her management ? [SEP] . Her critically acclaimed fifth studio album , Beyoncé ( 2013 ) , was distinguished from previous releases by its experimental production and exploration of dark ##er themes . [SEP] 04/01/2019 14:36:45 - INFO - __main__ - token_to_orig_map: 20:137 21:138 22:139 23:140 24:141 25:142 26:143 27:143 28:144 29:145 30:145 31:145 32:145 33:146 34:147 35:148 36:149 37:150 38:151 39:152 40:153 41:154 42:155 43:156 44:157 45:158 46:158 47:159 48:159 04/01/2019 14:36:45 - INFO - __main__ - token_is_max_context: 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 04/01/2019 14:36:45 - INFO - __main__ - input_ids: 101 12242 12172 40344 104316 13574 169 11520 26311 10106 10485 13021 10111 13574 12608 10108 10485 17150 136 102 119 13229 108889 87680 22237 13093 10606 117 54106 113 10207 114 117 10134 45233 10188 16741 45906 10155 10474 34176 12116 10111 61326 10108 25100 10165 48462 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 04/01/2019 14:36:45 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 04/01/2019 14:36:45 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 04/01/2019 14:36:45 - INFO - __main__ - start_position: 0 04/01/2019 14:36:45 - INFO - __main__ - end_position: 0 04/01/2019 14:36:45 - INFO - __main__ - answer: [CLS] So, it seems as though the notebook is preprocessing the data OK, but then it hangs here and ultimately terminates the process with a ^C (not from me, but something internal). Any idea what the problem might be here? Thanks again! Nicholas
04-01-2019 14:39:11
04-01-2019 14:39:11
Try train_batch_size=1. Alternatively, I propose fine-tuning with the TensorFlow model using Colab's TPUs, as these have far more memory.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
433
closed
How to do the pre-training of the model from scratch?
for example, use sample_text.txt
04-01-2019 13:57:23
04-01-2019 13:57:23
``` python3 examples/lm_finetuning/simple_lm_finetuning.py --train_corpus sample_text.txt --bert_model bert-base-uncased --do_lower_case --output_dir finetuned_lm/ ``` In addition, you can refer to #385 <|||||>Yes let's keep a single issue on this. Closing in favor of #385.
transformers
432
closed
Predictions from BertForSequenceClassification model keep changing across runs
I used the code in run_classifier.py to train a model for intent detection, which is a multi-class classification problem. After training the model, when I used it for prediction, I found the predictions changing from one run to another. I'm trying to understand the reason for this and how I can avoid this behavior. Let's say that `model1` is the fine-tuned sequence classification model. I get different results (logits) every time I run the following: `logits = model1(input_ids, segment_ids, input_mask, labels=None)` Why is that so?
04-01-2019 11:40:58
04-01-2019 11:40:58
You probably forgot to deactivate the DropOut modules with `model.eval()`<|||||>Oh ok, I get it. That's helpful. Thanks!<|||||>I tried that and now I seem to be getting the same predictions for any input. In other words, the logits don't change with change in input. Not sure why this is happening?<|||||>Maybe a bug in your code? If you can share a simple self-contained example exhibiting the behavior we can have a look.<|||||>That's unlikely as I tried the same code with other problems and got reasonable results. The problems on which I tried this code include MRPC task, sentiment prediction on IMDB dataset and intent detection on smalltalk data. The results are logical and reasonable. The dataset on which I get this behaviour reported above has about 20 examples for each of the 4 intents viz. restaurant_search(0), booking_table(1), greet(2) and thanks(3). So, I was thinking if this could be due to the data having less number of examples. The sequence classifier class is basically bert model + a single hidden layer neural network with output layer as the number of labels. I feel that there's not enough data to train the last (classifier) layer. Here's my test data: test_df = pd.DataFrame({'text': ["hey", "indian hotels"], 'label': [2, 0]}) I print the logits for each of the 2 test samples which are as follows: tensor([[ 28.9354, 28.3292, 20.1560, -20.7804]]) tensor([[ 28.9354, 28.3292, 20.1560, -20.7804]]) The logits tensor doesn't change for any input text. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi @varun-nathan, were you ever able to solve this? I'm having a very similar issue with ReformerForSequenceClassification.<|||||>Hi @varun-nathan and @jstremme, were either of you able to find the issue? I'm experiencing the same thing with ElectraForSequenceClassification!<|||||>> Hi @varun-nathan and @jstremme, were either of you able to find the issue? I'm experiencing the same thing with ElectraForSequenceClassification! Hi @mollha, I managed to solve this but can't remember exactly what I did. Did you try `model.eval()`? Also, make sure you are training on more than just a few sample records and using a large enough model in terms of neurons per layer etc. <|||||>Thanks! I was already using model.eval(), but my dataset size was too small (around 1000). After increasing to 15000 I am getting much better results.<|||||>Excellent!
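A minimal sketch of deterministic inference with a fine-tuned classifier, as suggested in this thread; the checkpoint path and the toy inputs are placeholders:
```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=4)
model.load_state_dict(torch.load('finetuned_model.bin'))  # placeholder path to your fine-tuned weights
model.eval()  # switches the DropOut modules to inference mode

# Toy inputs: [CLS] hello [SEP] for bert-base-uncased, padded nowhere
input_ids = torch.tensor([[101, 7592, 102]])
segment_ids = torch.zeros_like(input_ids)
input_mask = torch.ones_like(input_ids)

with torch.no_grad():  # no gradients needed for prediction
    logits = model(input_ids, segment_ids, input_mask, labels=None)
preds = logits.argmax(dim=-1)
```
With `model.eval()` the forward pass is deterministic, so repeated runs on the same inputs give identical logits.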
transformers
431
closed
How to fine tune Transformer-XL on own dataset?
I have my own dataset. What data format do I need to fine-tune Transformer-XL for text generation?
04-01-2019 11:11:52
04-01-2019 11:11:52
Hi, You can tokenize your data with the Transformer-XL tokenizer and use it to train your model.<|||||>I understand it. But I want to fine tune pretrained model<|||||>What is the difference? Can you start by trying to do what you would do normally to fine-tune a pytorch model and then if you encounter an issue give the error messages back here so we can help?<|||||>I found example in readme And my question - can I fine tune model using this script with my own data? ` python run_openai_gpt.py \ --model_name openai-gpt \ --do_train \ --do_eval \ --train_dataset $ROC_STORIES_DIR/cloze_test_val__spring2016\ -\ cloze_test_ALL_val.csv \ --eval_dataset $ROC_STORIES_DIR/cloze_test_test__spring2016\ -\ cloze_test_ALL_test.csv \ --output_dir ../log \ --train_batch_size 16 \ ` Example for gpt, but I guess it is the same for TransformerXL<|||||>Hi, you will likely need to adapt this example since Transformer-XL uses memory cells but there is no ready to use example for fine-tuning Transformer-XL in the repo unfortunately (and I don't plan to add one in the near future). If you want to give it a try feel free to ask more specific questions here. I would advise to start by reading carefully the author's paper to be sure you have a good understanding of the model. You can also probably start from the authors' own PyTorch training script (which I simplified to make the evaluation script in the present repo).<|||||>Is it possible to use the transformer with non-tokenized data @thomwolf? I know the Transformer is built for language modeling, but I would like to take advantage of the self-attention aspect of the Transformer's self-attention mechanism to model continuous data, and wonder how doable that sounds to you? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
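For anyone attempting this, here is a rough sketch of what a Transformer-XL fine-tuning loop could look like, with the memory cells carried across consecutive segments of a long corpus. The forward signature used here (`target=...` returning a per-token loss together with the new memory) is an assumption about this version of the library, so check the model docstring of your version; the corpus file, segment length, and optimizer settings are placeholders:
```python
import torch
from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Placeholder corpus: one long stream of token ids, split into consecutive segments
text = open('my_corpus.txt').read()  # assumption: a plain-text training file
ids = torch.tensor(tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text)))

seg_len = 128
mems = None
for start in range(0, ids.size(0) - seg_len - 1, seg_len):
    data = ids[start:start + seg_len].unsqueeze(0)            # [1, seg_len]
    target = ids[start + 1:start + seg_len + 1].unsqueeze(0)  # next-token targets
    # assumed to return the per-token negative log-likelihood and the updated memory
    loss, mems = model(data, target=target, mems=mems)
    loss = loss.mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    mems = [m.detach() for m in mems]  # do not backprop through the memory across segments
```
The key difference from the GPT example is that the memory returned for one segment is fed back in for the next one, so segments of the same document have to be processed in order.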
transformers
430
closed
Fix typo in example code
Modify 'unambigiously' to 'unambiguously'
03-30-2019 17:21:39
03-30-2019 17:21:39
transformers
429
closed
GPT2Tokenizer <|endoftext|>
I am confused as to how the GPT2Tokenizer is intended to be used. It looks like `GPT2Tokenizer.encode` doesn't always take the byte pair encoding into account -- is this intentional?
```
from pytorch_pretrained_bert import GPT2Tokenizer
tok = GPT2Tokenizer.from_pretrained("gpt2")
print(tok.encode("<|endoftext|>"))
print(tok.encoder["<|endoftext|>"])
```
Running the above gives:
```
[27, 91, 437, 1659, 5239, 91, 29]
50256
```
I would have (perhaps naively) expected that `tok.encode` gives `[50256]`.
03-30-2019 14:56:28
03-30-2019 14:56:28
`"<|endoftext|>"` is a *special token* that is not intended to be feed through the tokenizer but added to the indices list after the tokenization process (see for example the way it is used in the short example [`run_gpt2.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_gpt2.py#L98)). It's common practice to distinguish the way we write *special tokens* for the constrains/behavior of the tokenizers as they (i) become unnecessary complex when we add exception rules for special tokens and (ii) are often designed externally (ex when you use SpaCy's tokenizer). The simplest way to do that is just to add the special tokens after the encoding process like it's done for GPT-2.<|||||>That makes sense! Is this in the documentation anywhere? I didn't come across it. Regardless, thanks for the clarification.<|||||>@thomwolf I think after the new repo migration, you guys made '<|endoftext|>' as first class citizen of GPT2 tokenizer. So now we can just have base inputs with this token and it'd work fine with tokenizer and model, right?<|||||>Note that in the new version of Transformer, the behavior of the GPT2 tokenizer changed. ```python text = "<|endoftext|> machine learning using PyTorch and TensorFlow" tokenizer_folder = "gpt2" tokenizer = GPT2Tokenizer.from_pretrained(tokenizer_folder) encoded_ids = tokenizer.encode(text) print(encoded_ids) print("################### when tokenizing the whole sentence ######################") print(encoded_ids) for id in encoded_ids: print( f"""id: {id:<7}, original_token: {"'" + tokenizer.decoder[id] + "'":<10}, ignore_special_token: '{tokenizer.decode(id)}'""") # endfor ``` output: ```text ################### when tokenizing the whole sentence ###################### [50256, 4572, 4673, 1262, 9485, 15884, 354, 290, 309, 22854, 37535] id: 50256 , original_token: '<|endoftext|>', ignore_special_token: '<|endoftext|>' id: 4572 , original_token: 'Ġmachine', ignore_special_token: ' machine' id: 4673 , original_token: 'Ġlearning', ignore_special_token: ' learning' id: 1262 , original_token: 'Ġusing' , ignore_special_token: ' using' id: 9485 , original_token: 'ĠPy' , ignore_special_token: ' Py' id: 15884 , original_token: 'Tor' , ignore_special_token: 'Tor' id: 354 , original_token: 'ch' , ignore_special_token: 'ch' id: 290 , original_token: 'Ġand' , ignore_special_token: ' and' id: 309 , original_token: 'ĠT' , ignore_special_token: ' T' id: 22854 , original_token: 'ensor' , ignore_special_token: 'ensor' id: 37535 , original_token: 'Flow' , ignore_special_token: 'Flow' ```
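A small sketch of the pattern described above for this old version of the tokenizer: tokenize the text normally, then splice in the special token's id yourself (the prompt string is just an example):
```python
from pytorch_pretrained_bert import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")

# Encode normal text with the BPE tokenizer...
context = tok.encode("A quick example sentence.")

# ...and add the special token by id, outside of the tokenization process
end_of_text_id = tok.encoder["<|endoftext|>"]
input_ids = [end_of_text_id] + context  # e.g. as a start token for unconditional generation

print(input_ids[:1])  # [50256]
```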
transformers
428
closed
Cannot find Synthetic self-training in this repository.
The SQuAD leaderboard's (https://rajpurkar.github.io/SQuAD-explorer/) 3rd-highest-scoring model uses 'synthetic self-training'. There is a PDF explaining it: https://nlp.stanford.edu/seminar/details/jdevlin.pdf?fbclid=IwAR2TBFCJOeZ9cGhxB-z5cJJ17vHN4W25oWsjI8NqJoTEmlYIYEKG7oh4tlY but I have found no such model within this repository. On the leaderboard website, it shows me a link to this repository. Is the synthetic self-training in this repository? (If so, please show me the link to the code.) Is it private? Will it be released? Thanks, Dogy06 By the way: I checked both the PyTorch and TensorFlow versions of the code. I posted the same issue here: https://github.com/google-research/bert/issues/532
03-30-2019 11:12:04
03-30-2019 11:12:04
Hi, indeed there is no Synthetic self-training in this repository, and the SQuAD leaderboard website actually refers to the Tensorflow repository so I'll close this issue.<|||||>Will you be adding synthetic self training though?
transformers
427
closed
fix sample_doc
If randint returns the value of rand_end itself (randint is inclusive of its upper bound), searchsorted returns a sampled_doc_index equal to current_idx.
Example:
cumsum_max = {int64} 30
doc_cumsum = {ndarray} [ 5  7 11 19 30]
doc_lengths = {list} <class 'list'>: [5, 2, 4, 8, 11]
If current_idx = 1, then rand_start = 7 and rand_end = 35, so sentence_index = randint(7, 35) % cumsum_max. If randint returns 35, sentence_index becomes 5, and for sentence_index = 5, np.searchsorted returns 1, which equals current_idx.
03-30-2019 05:58:09
03-30-2019 05:58:09
Good catch, thanks!<|||||>Gah. I meant to use `randrange()`, but this fix is equivalent!
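A sketch of the sampling step after this fix, using the variable names from the PR description; the surrounding function in the script is assumed rather than reproduced:
```python
from random import randrange
import numpy as np

def sample_other_doc(current_idx, doc_cumsum, doc_lengths):
    """Pick a document index different from current_idx, weighted by document length."""
    cumsum_max = int(doc_cumsum[-1])
    rand_start = int(doc_cumsum[current_idx])
    rand_end = rand_start + cumsum_max - doc_lengths[current_idx]
    # randrange excludes rand_end, so sentence_index can never wrap back into the
    # current document; randint(rand_start, rand_end) includes it and occasionally could.
    sentence_index = randrange(rand_start, rand_end) % cumsum_max
    return int(np.searchsorted(doc_cumsum, sentence_index, side='right'))

doc_lengths = [5, 2, 4, 8, 11]
doc_cumsum = np.cumsum(doc_lengths)
print(sample_other_doc(1, doc_cumsum, doc_lengths))  # never returns 1
```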
transformers
426
closed
instantiate loss_fct once
03-29-2019 13:26:34
03-29-2019 13:26:34
Thanks for the PR but I think it's fine like it is now (slightly easier to read and debug).
transformers
425
closed
fix lm_finetuning's link
03-29-2019 08:09:46
03-29-2019 08:09:46
Thanks!
transformers
424
closed
Difference between base and large tokenizer?
I understand that a cased tokenizer and an uncased one are surely different because their vocabs differ in casing, but how does a base tokenizer differ from a large tokenizer? Does a large tokenizer have a larger vocab?
03-28-2019 18:41:19
03-28-2019 18:41:19
I haven't looked in the details of the vocabularies for each model. If you investigate this question, be sure to share the results here, it may interest others as well!<|||||>I did a diff on the two vocabulary files and there is no difference. As long as you use the uncased version at least. I haven't investigated others.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I can validate @mhattingpete 's research. I tokenized a big collection of text with the uncased tokenizer from both the base and the large model and both tokenizations are identical.
transformers
423
closed
making unconditional generation work
The unconditional generation works now but if the seed is fixed, the sample is the same every time. n_samples > 1 will give different samples though. I am giving the start token as '<|endoftext|>' for the unconditional generation.
03-28-2019 17:18:40
03-28-2019 17:18:40
Hi thanks for the PR. I think we still need to clean up the example a little more indeed. These lines should be taken care off: ```python while not args.unconditional: if not args.unconditional: ``` I will see if I can find time to refactor it next week or you can update your PR if you want to fix this too.<|||||>I have further cleaned the redundant lines and some conditions.<|||||>Great, thanks @dhanajitb!
transformers
422
closed
BertForTokenClassification for NER, mask labels
I'm trying to do Named Entity Recognition with BertForTokenClassification. Say I have 10 words with 10 labels; after WordPiece tokenization I get 15 tokens and I assign them labels, with "X" for word pieces like (##ing). In the original paper https://arxiv.org/pdf/1810.04805.pdf, Section 4.3 says that we don't make predictions for such pieces, or we can simply ignore them and not take derivatives for them. Does this functionality currently exist in BertForTokenClassification or BertModel?
03-28-2019 17:11:21
03-28-2019 17:11:21
Sequence tagging is explained here: https://github.com/huggingface/pytorch-pretrained-BERT/issues/64#issuecomment-443703063<|||||>Yes, this is the relevant issue on this topic. I'll close this issue in favor of #64.
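A minimal sketch of one way to ignore word-piece positions in the loss, by labelling them with an index that CrossEntropyLoss skips. The label scheme and inputs are toy assumptions, and BertForTokenClassification in this version does not do this for you:
```python
import torch
from torch.nn import CrossEntropyLoss
from pytorch_pretrained_bert import BertTokenizer, BertForTokenClassification

IGNORE_ID = -1   # label id skipped by the loss; newer library versions use -100 by default
num_labels = 3   # toy tag set

tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
model = BertForTokenClassification.from_pretrained('bert-base-cased', num_labels=num_labels)

words = ["Swimming", "is", "fun"]
word_labels = [1, 0, 0]  # one label per original word

tokens, labels = ['[CLS]'], [IGNORE_ID]
for word, label in zip(words, word_labels):
    pieces = tokenizer.tokenize(word)
    tokens.extend(pieces)
    labels.extend([label] + [IGNORE_ID] * (len(pieces) - 1))  # only the first piece keeps the label
tokens.append('[SEP]')
labels.append(IGNORE_ID)

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
label_ids = torch.tensor([labels])

logits = model(input_ids)                      # [1, seq_len, num_labels]
loss_fct = CrossEntropyLoss(ignore_index=IGNORE_ID)
loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))
```
Positions labelled with IGNORE_ID contribute nothing to the loss or its gradients, which matches the paper's "no prediction for word pieces" recipe.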
transformers
421
closed
pytorch model to tensorflow checkpoint
How to convert a pytorch_model.bin to tensorflow checkpoint?
03-28-2019 15:13:52
03-28-2019 15:13:52
Hi, there is no script to do that currently. I don't plan to add this feature in the short term but I would be happy to welcome a PR on that.<|||||>A PR for this would be great. It would allow a simple deployment via Han's bert-as-service https://github.com/hanxiao/bert-as-service/<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
420
closed
Advantage of BertAdam over Adam?
I have implemented BERT, taking the output of [CLS] and feeding that to a linear layer on top to do regression. I froze the embedding layers of BERT, though. I was using the standard Adam optimizer and did not run into any issues. When and/or why should one use BERTAdam? And, in a set-up like mine, would you use BERTAdam for BERT, and regular Adam for the rest of the whole model?
03-28-2019 15:03:16
03-28-2019 15:03:16
Some explanation is given here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/694e2117f33d752ae89542e70b84533c52cb9142/README.md#optimizers `BertAdam` is a `torch.optimizer` adapted to be closer to the optimizer used in the TensorFlow implementation of Bert. The differences with PyTorch `Adam` optimizer are the following: * `BertAdam` implements weight decay fix, * `BertAdam` doesn't compensate for bias as in the regular `Adam` optimizer. <|||||>@stefan-it Thanks for the link. These improvements are not the same as the suggested AdamW improvements, I assume?<|||||>Yes they are the same. `BertAdam` implements AdamW and in addition doesn't compensate for the bias (I don't know why the Google team decided to do that but that's what they did). In most case we have been using standard Adam with good performances (example by using NVIDIA's apex fusedAdam as optimizer) so you probably shouldn't worry too much about the differences between the two. We've incorporated `BertAdam`mostly to be able to exactly reproduce the behavior of the TensorFlow implementation.<|||||>@thomwolf Thanks, that explanation really helps. I have been using standard Adam with good results and BertAdam didn't improve that. So in my particular case it may not have been useful. Closing this, as my question has been answered.<|||||>For reference for future visits, recent research suggests that the omission of the bias compensation in BERTAdam is one of the sources of instability in finetuning: * [On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines](https://www.lsv.uni-saarland.de/wp-content/publications/2020/On_the_Stability_of_Fine-tuning_BERT_preprint.pdf) * [Revisiting Few-sample BERT Fine-tuning](https://arxiv.org/abs/2006.05987)
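For reference, a small sketch of the usual setup around these optimizers: parameter groups that exempt biases and LayerNorm weights from weight decay, then either BertAdam or plain Adam. The learning rate, warmup, and step count are illustrative values only:
```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification
from pytorch_pretrained_bert.optimization import BertAdam

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
num_train_steps = 1000  # placeholder: len(dataloader) * num_epochs

no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
grouped_parameters = [
    {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
     'weight_decay': 0.01},
    {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0},
]

# BertAdam: decoupled (AdamW-style) weight decay, warmup schedule, no bias correction
optimizer = BertAdam(grouped_parameters, lr=2e-5, warmup=0.1, t_total=num_train_steps)

# Plain Adam works too, as discussed above; note its weight_decay is L2-style, not decoupled
# optimizer = torch.optim.Adam(grouped_parameters, lr=2e-5)
```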
transformers
419
closed
bug in examples/run_squad.py line 88 & 90
if self.start_position:
    s += ", end_position: %d" % (self.end_position)
if self.start_position:
    s += ", is_impossible: %r" % (self.is_impossible)
03-28-2019 09:08:20
03-28-2019 09:08:20
Indeed, thanks
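Presumably the two conditions were meant to test the attributes they print; a corrected version of the reported lines would read as follows (this mirrors the report above and is not necessarily the exact patch that landed):
```python
if self.end_position:
    s += ", end_position: %d" % (self.end_position)
if self.is_impossible:
    s += ", is_impossible: %r" % (self.is_impossible)
```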
transformers
418
closed
Can I fine-tune a pretrained GPT-2 model on my corpus?
03-28-2019 07:58:42
03-28-2019 07:58:42
transformers
417
closed
Is there any pre-training example code?
I want to pre-train BERT with my own data. [#124](https://github.com/huggingface/pytorch-pretrained-BERT/pull/124) and [#170](https://github.com/huggingface/pytorch-pretrained-BERT/issues/170) say the model can be pre-trained, but I can't find pre-training example code. Is there any pre-training example code? If there is no example code, I wonder whether [modeling.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py)'s BertForPreTraining is enough for pre-training.
03-28-2019 06:54:15
03-28-2019 06:54:15
Hi @tmchojo there are now several detailed examples [here](https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning) thanks to @Rocketknight1 and #392 <|||||>Thanks. But it still needs pre-trained model. My data is another domain (not English or Chinese), so I can't use pre-trained model. Do I have to use [Google's](https://github.com/google-research/bert) to pre-train?<|||||>I'm also interested in pre-training. Did you make any progress on this, @tmchojo?<|||||>There is an example provided [here](https://huggingface.co/blog/how-to-train) with colab (this is for pre-training from scratch), however, I'm still trying to figure the continual pre-training and not pretraining from scratch (using a pre-trained model as initial checkpoint). <|||||>Hope https://github.com/huggingface/transformers/tree/master/examples/pytorch helps! The language modelling notebook trains a gpt model. This notebook also gives an idea on labelling the dataset for the LM task.
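For readers landing here, a bare-bones sketch of a single pre-training step with `BertForPreTraining` (masked LM plus next-sentence objectives), using toy tensors in place of a real data pipeline; the masked inputs and labels would normally come from a corpus preparation step such as the lm_finetuning scripts mentioned above:
```python
import torch
from pytorch_pretrained_bert import BertForPreTraining

model = BertForPreTraining.from_pretrained('bert-base-uncased')  # or BertForPreTraining(config) from scratch
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

# Toy batch: these tensors stand in for the output of a masked-LM data generator
input_ids = torch.randint(0, 30522, (2, 16))
segment_ids = torch.zeros_like(input_ids)
input_mask = torch.ones_like(input_ids)
lm_label_ids = torch.full_like(input_ids, -1)  # -1 marks positions that were not masked (ignored in the loss)
lm_label_ids[:, 3] = input_ids[:, 3]           # pretend position 3 was masked out
is_next = torch.tensor([0, 1])                 # next-sentence labels for the two sequences

loss = model(input_ids, segment_ids, input_mask,
             masked_lm_labels=lm_label_ids, next_sentence_label=is_next)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```
Training from scratch (a fresh `BertConfig` instead of `from_pretrained`) uses exactly the same forward call; only the initial weights differ.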
transformers
416
closed
Distributed Training Gets Stuck
Hi, When I freeze the BERT layers, distributed training works just fine. However, when I unfreeze the BERT layers, the first node continues training, and all other nodes wait on the training step with 100% GPU utilization on the first GPU. Is this expected behavior, or am I doing something wrong? I'm using PyTorch 1.0.1post2, and 8 GTX 1080 GPUs per machine. Best, Moin
03-28-2019 00:40:50
03-28-2019 00:40:50
Hi, One possible cause of this behavior may be the way you are freezing your parameters. PyTorch's `DistributedDataParallel` is a rather sensitive beast, as you can judge by the number of warnings in [its doc](https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel). Two features are especially important to keep in mind:
- *You should never try to change your model's parameters after wrapping up your model with DistributedDataParallel*, or unexpected behaviors can happen, since some parameters' gradient reduction functions might not get called.
- Another point that is probably not important for you but that you should be aware of: the constructor, the forward method, and differentiation of the output (backward pass) are *distributed synchronization points*.
So maybe you are freezing your parameters after having wrapped up your model with DistributedDataParallel?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
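A hedged sketch of the ordering that avoids the hang described above; it assumes the process group is already initialised and that `local_rank` normally comes from the launcher, so it is illustrative rather than a complete launch script:

```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

local_rank = 0  # placeholder; normally provided by torch.distributed.launch

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
model.to(torch.device('cuda', local_rank))

# Freeze (or otherwise modify) parameters BEFORE wrapping the model.
for param in model.bert.parameters():
    param.requires_grad = False

# Only now wrap it; changing requires_grad after this point can leave some gradient
# reduction hooks uncalled, which is one way ranks end up waiting on each other forever.
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```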
transformers
415
closed
For sequence classification, is this model using the wrong token?
According to the BERT paper, we want to use the weights for the `[CLS]` token, which – as far as I understand – would be the first hidden output here, not the last? i.e. shouldn't this be `encoded_layers[0]` below? https://github.com/huggingface/pytorch-pretrained-BERT/blob/f7c9dc8c998395d2ad9edbf0fd6fa072f03cc667/pytorch_pretrained_bert/modeling.py#L715-L719 @thomwolf Let me know if I'm missing something and if you're concerned about this breaking anything else and I can PR.
03-27-2019 19:32:56
03-27-2019 19:32:56
Never mind, I think I got confused. Is it that the pooler already takes care of that and encoded_layers represents the pooler output for each attention layer?<|||||>Hi Catalin, the content of the outputs (`encoded_layers` and `pooled_output`) is detailed in the readme [here](https://github.com/huggingface/pytorch-pretrained-BERT#1-bertmodel) and in the model docstring. `encoded_layers` contains the encoded hidden states at the end of each attention block (i.e. 12 full sequences for BERT-base, 24 for BERT-large); each encoded hidden state is a torch.FloatTensor of size [batch_size, sequence_length, hidden_size]. The Pooler takes care of extracting the hidden state of the first token from the encoded hidden state of the full sequence (see [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/f7c9dc8c998395d2ad9edbf0fd6fa072f03cc667/pytorch_pretrained_bert/modeling.py#L413))
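To make the two outputs concrete, here is a hedged sketch (the input text is arbitrary) showing that `pooled_output` is derived from the first ([CLS]) position of the last encoded layer via the pooler's dense + tanh:

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

tokens = ['[CLS]'] + tokenizer.tokenize("an example sentence") + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    encoded_layers, pooled_output = model(input_ids)   # 12 layers for BERT-base
    cls_hidden = encoded_layers[-1][:, 0]               # raw [CLS] state of the last layer
    repooled = torch.tanh(model.pooler.dense(cls_hidden))

print(torch.allclose(repooled, pooled_output, atol=1e-6))  # True: pooler = dense + tanh on [CLS]
```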
transformers
414
closed
Help with implementing strides into features for multi-label classifier
As you might know, BERT has a maximum wordpiece token sequence length of 512. The SQuAD example actually uses strides to account for this: https://github.com/google-research/bert/issues/27 I want to implement something like what Jacob Devlin described in that post for Kaushal's multi-label BERT classifier: https://medium.com/huggingface/multi-label-text-classification-using-bert-the-mighty-transformer-69714fa3fb3d What I did was basically copy the doc_stride functions from the huggingface SQuAD example and lightly edit them into Kaushal's code: https://colab.research.google.com/drive/1aqcIdm2Pn2rvmWHvOSEMUTzw_CNmoseA However, this is excruciatingly slow, and I don't know whether it's an issue with the stride length. Also, I'm not sure how to implement the part about reshaping the combined minibatches and getting predictions from there. Anyone who has tried to adapt BERT to longer texts, could you please help? Thanks
03-27-2019 06:36:34
03-27-2019 06:36:34
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@alvin-leong did this make any progress? I am interested in doing something similar for the sequence classifier.<|||||>@alvin-leong @maxzzze, just wondering if there is any progress on this? Thanks.<|||||>No 😞 I ended up implementing several methods to grab different spans of length X (a hyperparameter) from all over my documents, which have lengths > 512.<|||||>Thanks @maxzzze. I am doing the same now and am just concerned about the answers from different spans: you may have the right answer with low scores and a wrong answer with high scores. Fingers crossed that's not the case. I'm also not sure whether scores from different spans are comparable.<|||||>I modified https://github.com/kyzhouhzau/BERT-NER to implement strides: https://github.com/anupamsingh610/bert_ner_stride Hope it helps :)<|||||>@alvin-leong Hi, have you made any progress on this issue? I checked your notebook. What is the problem with it? Is it just that training is very slow?<|||||>Hi, we have implemented a BERT binary classifier for longer texts [here](https://github.com/mim-solutions/bert_for_longer_texts).
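For reference, a minimal hedged sketch of the sliding-window idea several people in this thread ended up using (the function name and defaults are made up for illustration): split the wordpiece token list into overlapping chunks, run the classifier on each chunk, then aggregate the per-chunk predictions (e.g. max or mean over chunks).

```python
def chunk_with_stride(tokens, max_len=512, stride=256):
    """Return overlapping windows of at most max_len tokens, advancing by stride."""
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return chunks

print([len(c) for c in chunk_with_stride(list(range(1200)))])  # [512, 512, 512, 432]
```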
transformers
413
closed
Bert Pretrained model has no modules nor parameters
"from pytorch_pretrained_bert.modeling import BertPreTrainedModel bert_model = BertPreTrainedModel.from_pretrained(pretrained_model_name_or_path='bert-base-uncased') bert_model.to(device)" returns: 2019-03-26 21:38:07,404 pytorch_pretrained_bert.modeling INFO loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/ubuntu/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 2019-03-26 21:38:07,405 pytorch_pretrained_bert.modeling INFO extracting archive file /home/ubuntu/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpkki3a7er 2019-03-26 21:38:11,106 pytorch_pretrained_bert.modeling INFO Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } 2019-03-26 21:38:11,348 pytorch_pretrained_bert.modeling INFO Weights from pretrained model not used in BertPreTrainedModel: ['bert.embeddings.word_embeddings.weight', 'bert.embeddings.position_embeddings.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.0.output.dense.bias', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.1.attention.self.query.bias', 'bert.encoder.layer.1.attention.self.key.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.1.attention.self.value.bias', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.1.attention.output.dense.bias', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.1.output.dense.bias', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.2.attention.self.key.bias', 'bert.encoder.layer.2.attention.self.value.weight', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.2.intermediate.dense.bias', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.3.attention.self.query.weight', 'bert.encoder.layer.3.attention.self.query.bias', 'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.3.attention.self.value.weight', 'bert.encoder.layer.3.attention.self.value.bias', 
'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.4.attention.self.query.weight', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.4.attention.self.key.bias', 'bert.encoder.layer.4.attention.self.value.weight', 'bert.encoder.layer.4.attention.self.value.bias', 'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.4.output.dense.weight', 'bert.encoder.layer.4.output.dense.bias', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.5.attention.self.query.bias', 'bert.encoder.layer.5.attention.self.key.weight', 'bert.encoder.layer.5.attention.self.key.bias', 'bert.encoder.layer.5.attention.self.value.weight', 'bert.encoder.layer.5.attention.self.value.bias', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.5.output.dense.bias', 'bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.6.attention.self.query.bias', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.6.attention.self.key.bias', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.6.attention.self.value.bias', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.6.output.dense.weight', 'bert.encoder.layer.6.output.dense.bias', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.7.attention.self.query.bias', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.7.attention.self.key.bias', 'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.7.attention.self.value.bias', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.7.attention.output.dense.bias', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.7.output.dense.weight', 'bert.encoder.layer.7.output.dense.bias', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.8.attention.self.query.bias', 'bert.encoder.layer.8.attention.self.key.weight', 'bert.encoder.layer.8.attention.self.key.bias', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.8.attention.self.value.bias', 'bert.encoder.layer.8.attention.output.dense.weight', 'bert.encoder.layer.8.attention.output.dense.bias', 'bert.encoder.layer.8.intermediate.dense.weight', 'bert.encoder.layer.8.intermediate.dense.bias', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.8.output.dense.bias', 'bert.encoder.layer.9.attention.self.query.weight', 'bert.encoder.layer.9.attention.self.query.bias', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.encoder.layer.9.attention.self.key.bias', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.9.attention.self.value.bias', 
'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.9.attention.output.dense.bias', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.9.output.dense.bias', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.10.attention.self.query.bias', 'bert.encoder.layer.10.attention.self.key.weight', 'bert.encoder.layer.10.attention.self.key.bias', 'bert.encoder.layer.10.attention.self.value.weight', 'bert.encoder.layer.10.attention.self.value.bias', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.10.output.dense.weight', 'bert.encoder.layer.10.output.dense.bias', 'bert.encoder.layer.11.attention.self.query.weight', 'bert.encoder.layer.11.attention.self.query.bias', 'bert.encoder.layer.11.attention.self.key.weight', 'bert.encoder.layer.11.attention.self.key.bias', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.11.attention.self.value.bias', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.11.attention.output.dense.bias', 'bert.encoder.layer.11.intermediate.dense.weight', 'bert.encoder.layer.11.intermediate.dense.bias', 'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.11.output.dense.bias', 'bert.pooler.dense.weight', 'bert.pooler.dense.bias', 'bert.embeddings.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 
'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.layer.11.output.LayerNorm.bias'] BertPreTrainedModel() ---- " #%% for key, val in bert_model._modules.items(): print(key) print(val) #%% for key, val in bert_model._parameters.items(): print(key) print(val) " returns: nothing
03-26-2019 21:51:14
03-26-2019 21:51:14
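A likely explanation, offered here with some hedging: `BertPreTrainedModel` is the abstract base class that only handles configs and weight loading, so instantiating it directly yields no submodules and discards every checkpoint tensor (hence the long "weights not used" list above and the empty `_modules` / `_parameters`). A minimal sketch of loading a concrete model class instead:

```python
from pytorch_pretrained_bert import BertModel

bert_model = BertModel.from_pretrained('bert-base-uncased')
for name, _ in bert_model.named_children():
    print(name)                                          # embeddings, encoder, pooler
print(sum(p.numel() for p in bert_model.parameters()))   # roughly 110M parameters
```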
transformers
412
closed
Possible error in "pytorch-pretrained-BERT/examples/run_gpt2.py" unconditional
Hello, First of all, thanks for offering us these great NLP implementations. I think there may be an error in the file pytorch-pretrained-BERT/examples/run_gpt2.py ![image](https://user-images.githubusercontent.com/1786870/55035530-94df8c80-4fee-11e9-90eb-bde5bcba7832.png) The way it is implemented, if we set unconditional=True nothing happens. If you fix this line, don't forget to look at this one too ![image](https://user-images.githubusercontent.com/1786870/55035656-dd974580-4fee-11e9-85e4-b0aa050fa274.png) Thank you.
03-26-2019 21:45:31
03-26-2019 21:45:31
This should be fixed by #462.
transformers
411
closed
Why average the loss when training on multi-GPUs
Could anyone help to explain the motivation for this operation? https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L573-L574
03-26-2019 13:23:18
03-26-2019 13:23:18
Multi-GPU loss returns a tuple of losses with one loss for each GPU. We average them to get the full loss. See [this blog post](https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255) for more details.
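A tiny hedged sketch of what those two lines amount to: under `nn.DataParallel` the wrapped forward returns one loss per GPU, so they are averaged into a single scalar before calling backward.

```python
import torch

n_gpu = 2
loss = torch.tensor([0.7, 0.9])   # stand-in for the per-GPU losses gathered by DataParallel
if n_gpu > 1:
    loss = loss.mean()            # single scalar so that loss.backward() is well defined
print(loss)                       # tensor(0.8000)
```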
transformers
410
closed
something wrong in example
![image](https://user-images.githubusercontent.com/42565075/54999939-5789f780-500c-11e9-958f-1c7b0b92a257.png) The segmentation is wrong.
03-26-2019 13:15:19
03-26-2019 13:15:19
Do you have the latest `pytorch-pretrained-bert`?
```python
import pytorch_pretrained_bert
pytorch_pretrained_bert.__version__
```
<|||||>Thank you so much!!!!!!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
409
closed
Remove padding_idx from position_embeddings and token_type_embeddings
Because embedding vectors at 0th position of `position_embeddings` and `token_type_embeddings` have roles in the model (i.e., representing the first token and the token in the first sentence), these vectors should not be treated as padding vectors.
03-26-2019 13:04:38
03-26-2019 13:04:38
Indeed, thanks @ikuyamada!
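A small hedged illustration of why the change matters: with `padding_idx=0`, PyTorch pins row 0 of the embedding table to zeros and never updates it, yet index 0 here is a real position (the first token) and a real segment id (the first sentence), not padding.

```python
import torch
import torch.nn as nn

with_padding_idx = nn.Embedding(512, 8, padding_idx=0)
without_padding_idx = nn.Embedding(512, 8)

print(with_padding_idx(torch.tensor([0])))     # all zeros, and its gradient stays zero
print(without_padding_idx(torch.tensor([0])))  # a normal learnable vector, as after this PR
```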
transformers
408
closed
slow training speed even 20 steps
Hi, I am running the run_lm_finetuning.py code on 1 million sentences. It's taking more than 20 hours per epoch, whereas the run_pretraining.py code from google-research/bert takes much less time. What could be the reason, and how can I resolve this?
03-26-2019 09:05:07
03-26-2019 09:05:07
We have new lm_finetuning scripts thanks to @Rocketknight1's PR #392. Maybe you can try those; they are in the `./examples/lm_finetuning/` folder.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
407
closed
AllenNLP TransformerXL
Has anyone done any work on wrapping up TransformerXL for AllenNLP?
03-25-2019 23:33:58
03-25-2019 23:33:58
Maybe you should ask in the AllenNLP repo as well?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
406
closed
error when trying to get embeddings after fine tuning
I have used run_lm_finetuning.py code on my domain specific corpus. Now I want to use the fine tuned model to get better embeddings. >>> import torch >>> config = modeling.BertConfig(attention_probs_dropout_prob=0.1, hidden_dropout_prob=0.1, hidden_size=768, initializer_range=0.02, intermediate_size=3072, max_position_embeddings=512, num_attention_heads=12, num_hidden_layers=12, vocab_size_or_config_json_file=30522) >>> model = modeling.BertEmbeddings(config) >>> model_state_dict = "/home/cloud/pytorch_bert/pytorch-pretrained-BERT-master/examples/models/pytorch_model.bin" >>> model.load_state_dict(torch.load(model_state_dict)) I got an error like Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/cloud/miniconda3/envs/tensorflow_gpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for BertEmbeddings: Missing key(s) in state_dict: "word_embeddings.weight", "position_embeddings.weight", "token_type_embeddings.weight", "LayerNorm.weight", "LayerNorm.bias". Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.intermediate.dense.weight", "bert.encoder.layer.1.intermediate.dense.bias", "bert.encoder.layer.1.output.dense.weight", "bert.encoder.layer.1.output.dense.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer.2.attention.self.query.weight", "bert.encoder.layer.2.attention.self.query.bias", "bert.encoder.layer.2.attention.self.key.weight", "bert.encoder.layer.2.attention.self.key.bias", "bert.encoder.layer.2.attention.self.value.weight", "bert.encoder.layer.2.attention.self.value.bias", "bert.encoder.layer.2.attention.output.dense.weight", "bert.encoder.layer.2.attention.output.dense.bias", "bert.encoder.layer.2.attention.output.LayerNorm.weight", "bert.encoder.layer.2.attention.output.LayerNorm.bias", "bert.encoder.layer.2.intermediate.dense.weight", 
"bert.encoder.layer.2.intermediate.dense.bias", "bert.encoder.layer.2.output.dense.weight", "bert.encoder.layer.2.output.dense.bias", "bert.encoder.layer.2.output.LayerNorm.weight", "bert.encoder.layer.2.output.LayerNorm.bias", "bert.encoder.layer.3.attention.self.query.weight", "bert.encoder.layer.3.attention.self.query.bias", "bert.encoder.layer.3.attention.self.key.weight", "bert.encoder.layer.3.attention.self.key.bias", "bert.encoder.layer.3.attention.self.value.weight", "bert.encoder.layer.3.attention.self.value.bias", "bert.encoder.layer.3.attention.output.dense.weight", "bert.encoder.layer.3.attention.output.dense.bias", "bert.encoder.layer.3.attention.output.LayerNorm.weight", "bert.encoder.layer.3.attention.output.LayerNorm.bias", "bert.encoder.layer.3.intermediate.dense.weight", "bert.encoder.layer.3.intermediate.dense.bias", "bert.encoder.layer.3.output.dense.weight", "bert.encoder.layer.3.output.dense.bias", "bert.encoder.layer.3.output.LayerNorm.weight", "bert.encoder.layer.3.output.LayerNorm.bias", "bert.encoder.layer.4.attention.self.query.weight", "bert.encoder.layer.4.attention.self.query.bias", "bert.encoder.layer.4.attention.self.key.weight", "bert.encoder.layer.4.attention.self.key.bias", "bert.encoder.layer.4.attention.self.value.weight", "bert.encoder.layer.4.attention.self.value.bias", "bert.encoder.layer.4.attention.output.dense.weight", "bert.encoder.layer.4.attention.output.dense.bias", "bert.encoder.layer.4.attention.output.LayerNorm.weight", "bert.encoder.layer.4.attention.output.LayerNorm.bias", "bert.encoder.layer.4.intermediate.dense.weight", "bert.encoder.layer.4.intermediate.dense.bias", "bert.encoder.layer.4.output.dense.weight", "bert.encoder.layer.4.output.dense.bias", "bert.encoder.layer.4.output.LayerNorm.weight", "bert.encoder.layer.4.output.LayerNorm.bias", "bert.encoder.layer.5.attention.self.query.weight", "bert.encoder.layer.5.attention.self.query.bias", "bert.encoder.layer.5.attention.self.key.weight", "bert.encoder.layer.5.attention.self.key.bias", "bert.encoder.layer.5.attention.self.value.weight", "bert.encoder.layer.5.attention.self.value.bias", "bert.encoder.layer.5.attention.output.dense.weight", "bert.encoder.layer.5.attention.output.dense.bias", "bert.encoder.layer.5.attention.output.LayerNorm.weight", "bert.encoder.layer.5.attention.output.LayerNorm.bias", "bert.encoder.layer.5.intermediate.dense.weight", "bert.encoder.layer.5.intermediate.dense.bias", "bert.encoder.layer.5.output.dense.weight", "bert.encoder.layer.5.output.dense.bias", "bert.encoder.layer.5.output.LayerNorm.weight", "bert.encoder.layer.5.output.LayerNorm.bias", "bert.encoder.layer.6.attention.self.query.weight", "bert.encoder.layer.6.attention.self.query.bias", "bert.encoder.layer.6.attention.self.key.weight", "bert.encoder.layer.6.attention.self.key.bias", "bert.encoder.layer.6.attention.self.value.weight", "bert.encoder.layer.6.attention.self.value.bias", "bert.encoder.layer.6.attention.output.dense.weight", "bert.encoder.layer.6.attention.output.dense.bias", "bert.encoder.layer.6.attention.output.LayerNorm.weight", "bert.encoder.layer.6.attention.output.LayerNorm.bias", "bert.encoder.layer.6.intermediate.dense.weight", "bert.encoder.layer.6.intermediate.dense.bias", "bert.encoder.layer.6.output.dense.weight", "bert.encoder.layer.6.output.dense.bias", "bert.encoder.layer.6.output.LayerNorm.weight", "bert.encoder.layer.6.output.LayerNorm.bias", "bert.encoder.layer.7.attention.self.query.weight", "bert.encoder.layer.7.attention.self.query.bias", 
"bert.encoder.layer.7.attention.self.key.weight", "bert.encoder.layer.7.attention.self.key.bias", "bert.encoder.layer.7.attention.self.value...... Any idea where I am doing it wrong?
03-25-2019 18:11:36
03-25-2019 18:11:36
I realised that the way I was loading the model was wrong. I used this code:
```python
# Save a trained model
model_to_save = model.module if hasattr(model, 'module') else model  # Only save the model itself
output_model_file = os.path.join(args.output_dir, "pytorch_model.bin")
torch.save(model_to_save.state_dict(), output_model_file)

# Load a trained model that you have fine-tuned
model_state_dict = torch.load(output_model_file)
model = BertForQuestionAnswering.from_pretrained(args.bert_model, state_dict=model_state_dict)
model.to(device)
```
transformers
405
closed
embeddings after fine tuning
Hi, I have fine-tuned 'bert-base-uncased' using the run_lm_finetuning.py script on my domain-specific text corpus, and I got a pytorch_model.bin file after fine-tuning. How do I load that model and get embeddings? And if I can do that, do the embeddings change because of the fine-tuning? Can someone please guide me on this?
03-25-2019 17:46:58
03-25-2019 17:46:58
I used this code and it worked:
```python
output_model_file = "/path/to/pytorch_model.bin"
model_state_dict = torch.load(output_model_file)
model = BertModel.from_pretrained(bert_model, state_dict=model_state_dict)
```
<|||||>Hi @KavyaGujjala I was fine-tuning the 'Bert base uncased' model as per the [Google Bert Repository](https://github.com/google-research/bert) on my domain-specific data. I used the existing WordPiece vocab and ran pre-training for 50000 steps on the in-domain text to learn the compositionality. I realized that the updated embeddings were improved, judging by manual evaluation. But I really want to add my domain words to the vocab, as they carry importance for my downstream tasks. Can you please tell me how you handled the domain vocab while fine-tuning? Did you add your domain words to the vocab file? If yes, did you append the words to the vocab file or just replace the [unusedXX] words in the file? Did you see any improvements after fine-tuning your models? How did you evaluate the embedding quality for your domain, if not evaluated manually? <|||||>Hi @harmanpreet93 I haven't used the fine-tuning code from the actual google bert repo but used the pretraining code. I used the finetuning code from the pytorch repo, but the embeddings were getting changed every time I loaded the model, so I am just pretraining on domain-specific data. My first question: are you doing fine-tuning or pretraining? I have pretrained using the bert base uncased model for 10000 steps. After that I got the sentence representation using the [CLS] token from the final hidden layer and compared the cosine similarity between sentences. I didn't get good results though, and I don't know what I am missing. Maybe, as you have mentioned, I need to add words to the vocab file.
I ran into following error: `ValueError: Shape of variable bert/embeddings/word_embeddings:0 ((33297, 768)) doesn't match with shape of tensor bert/embeddings/word_embeddings ([30522, 768]) from checkpoint reader.` **Q:** How were the sentence embeddings calculated? By averaging the word embeddings or any other approach? <|||||>Hi @harmanpreet93 > Yes, I too used pre-training code from bert repo. After pretraining on domain data for 50k steps, the embeddings got updated. What is the size of dataset you have used? My dataset has like 1 million sentences domain specific. > Q: How did you evaluate the quality of sentence embeddings? By not getting good results, do you mean that the similar sentences weren't scored at the top or the cosine scores weren't good? yeah cosine scores aren't good enough. Even dissimilar sentences are getting better similarity scores than the similar ones. Does training for more steps solve this issue? > Q: Did you try any of these approaches for the bert repo? I tried approach (b), by adding my domain words to the vocab file and updated the vocab_size as per repo. I ran into following error: > ValueError: Shape of variable bert/embeddings/word_embeddings:0 ((33297, 768)) doesn't match with shape of tensor bert/embeddings/word_embeddings ([30522, 768]) from checkpoint reader. I haven't tried adding domain specific words to vocab but are you using the pretrained model bert_config.json file along with the domain specific trained model checkpoint. If so have you changed vocab size? And did you do it the way jacobdevlin has mentioned like writing script to initialize random weights? I didnt really understand that part, I mean what exactly are we supposed to do to initialize weights. Also why didnt you try replacing unusedX tokens? > Q: How were the sentence embeddings calculated? By averaging the word embeddings or any other approach? I used the [CLS] token embeddings from the hidden layer output ( as they have mentioned in the paper that [CLS] token can be used for sequence level classification ). I initially tried max pooling and mean pooling of word embeddings but got really bad results ( every cosine similarity score was above 0.7 ). Which approach did you follow? <|||||>To add to this, in their paper they mention they get the best results by concatenating the last four layers. ```python #! In your model setup # Indices of layers to concatenate self.bert_layers = [-1, -2, -3, -4] self.bert = BertModel.from_pretrained('your-checkpoint.pth') #! In your forward method all_bert_layers, _ = self.bert_layer(bert_ids, attention_mask=bert_mask) bert_concat = torch.cat(tuple([all_bert_layers[i] for i in self.bert_layers]), dim=-1) # If you use a mask: ## Pooling by also setting masked items to zero bert_mask = torch.FloatTensor(bert_mask).unsqueeze(2) ## Multiply output with mask to only retain non-paddding tokens bert_pooled = torch.mul(bert_concat, bert_mask) # First item ['CLS'] is sentence representation. # Use bert_concat instead of bert_pooled if you didn't use a mask final_bert = bert_pooled[:, 0, :] ```<|||||>> To add to this, in their paper they mention they get the best results by concatenating the last four layers. > > ```python > #! In your model setup > # Indices of layers to concatenate > self.bert_layers = [-1, -2, -3, -4] > self.bert = BertModel.from_pretrained('your-checkpoint.pth') > > #! 
In your forward method > all_bert_layers, _ = self.bert_layer(bert_ids, attention_mask=bert_mask) > bert_concat = torch.cat(tuple([all_bert_layers[i] for i in self.bert_layers]), dim=-1) > > # If you use a mask: > ## Pooling by also setting masked items to zero > bert_mask = torch.FloatTensor(bert_mask).unsqueeze(2) > ## Multiply output with mask to only retain non-paddding tokens > bert_pooled = torch.mul(bert_concat, bert_mask) > > # First item ['CLS'] is sentence representation. > # Use bert_concat instead of bert_pooled if you didn't use a mask > final_bert = bert_pooled[:, 0, :] > ``` @BramVanroy Thanks for the info. I am using original BERT [repo](https://github.com/google-research/bert) pretraining code and used extract_features.py code to get last four layers output for all the tokens in a sequence. By concatenating the last four layers does it mean adding all four layers embeddings of each token?<|||||>@KavyaGujjala If you look at my code, you can see that I mean concatenating across the hidden dim axis. So let's say the output of a single bert layer is batch_size * seq_len * hidden_dim, then concatenating the last four ones ends up with batch_size * seq_len * (hidden_dim*4).<|||||>Hi @harmanpreet93 , You ran 50k epochs for fine tuning? Thanks Mahesh <|||||>@search4mahesh Yes, I ran 50k epochs for fine-tuning!<|||||>@harmanpreet93 that must be lot of compute time, example only says about 3 epochs. <|||||>I'm assuming that @harmanpreet93 means 50k steps.<|||||>> @harmanpreet93 that must be a lot of compute time, example only says about 3 epochs. You are right @BramVanroy I meant 50k steps. <|||||>@harmanpreet93 Have you solve the problem of "ValueError: Shape of variable bert/embeddings/word_embeddings:0 ((33297, 768)) doesn't match with shape of tensor bert/embeddings/word_embeddings ([30522, 768]) from checkpoint reader."?<|||||>Hi @Firmiana1220 I was following solutions mentioned [here](https://github.com/google-research/bert/issues/9) from bert repo. It suggests the following two approaches: > But if you want to add more vocab you can either: > (a) Just replace the "[unusedX]" tokens with your vocabulary. Since these were not used they are effectively randomly initialized. > (b) Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls. I was going for option (b). But I couldn't find any progress. Therefore, I decided to pre-train the model just on my domain data, and not leveraging the already pre-trained models. I'm still looking for a better approach. <|||||>@KavyaGujjala Thanks for the code sample. Is the masking in it redundant? Unless I'm misunderstanding, you mask out all the unused tokens, but then simply grab the 'CLS' token. The masking would be required if you use all the values, but seems redundant if you're just using the sentence representation.<|||||>> @KavyaGujjala Thanks for the code sample. Is the masking in it redundant? Unless I'm misunderstanding, you mask out all the unused tokens, but then simply grab the 'CLS' token. The masking would be required if you use all the values, but seems redundant if you're just using the sentence representation. 
Hi @snard6 , I didnt use that masking code as such, for now I got a finetuned model for my domain specific data and using the cls token for representation which gave comparatively better results. I used pytorch bert codes for getting a finetuned model and then extract features code to get the cls token as sentence representation.<|||||>> Hi @harmanpreet93 I tried the option (a), it works. I replaced the word in vocab.txt with the word in my own domain, other words are [unusedX]. If you don't change the size of vocab.txt, it works. But if your domain words are larger than 30522, maybe you should try the option (b). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
404
closed
Fix Language Modeling Loss
This fixes the language modeling loss setup for GPT and GPT-2. Minimizing the loss would previously destroy the language model within a few steps. I believe that both loss computations were incorrect, since they computed the cross-entropy without shifting the logits. Given the masking setup here, we want the ith logits to act as predictors of the (i+1)st token label (not the ith). When I was debugging this, to be sure that there's no other issue, I checked that the logits coming out of the OpenAI tensorflow and the pytorch implementation appeared to be the same (kind of a blackbox test). We can import the original model as `tf_model` and compare: ``` # Construct input sample of two 512 token 1's batch_size = 1 batch = [[1,2]*512]*batch_size batch = np.array(batch) # Attempt to seed everything seed = 42 seed_all(seed) np.random.seed(seed) tf.set_random_seed(seed) # PyTorch Implementation model = GPT2LMHeadModel.from_pretrained('gpt2') torch_batch = torch.tensor(batch) loss = model(torch_batch, lm_labels=torch_batch) print(loss.detach().numpy()) logits, presents = model(torch_batch) print(logits.detach().numpy()) # TensorFlow implementation with tf.Session() as sess: hparams = tf_model.default_hparams() hparams.override_from_dict({ "n_vocab": 50257, "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12 }) # Set up graph and loss as I believe is correct similar to https://github.com/nshepperd/gpt-2 context = tf.placeholder(tf.int32, [batch_size, None]) output = tf_model.model(hparams=hparams, X=context) logits = output['logits'] loss = tf.reduce_mean( tf.nn.sparse_softmax_cross_entropy_with_logits( labels=context[:, 1:], logits=logits[:, :-1])) train_vars = [v for v in tf.trainable_variables() if 'model' in v.name] # Load checkpoint ckpt = tf.train.latest_checkpoint('gpt_tf/models/117M') saver = tf.train.Saver(var_list=train_vars) saver.restore(sess, ckpt) # Run! tf_logits, tf_loss = sess.run((logits, loss), feed_dict={context: batch}) print(tf_loss) print(tf_logits) # Other differences param_optimizer = list(model.named_parameters()) print("torch:") print([n for n, p in param_optimizer]) print("tensorflow:") print([v.name for v in tf.trainable_variables()]) # They look like the same 147 params... ``` *This gave us for torch:* - Loss: `11.067907` - Logits: ``` [[[ -32.901043 -31.20237 -34.66221 ... -39.486702 -39.87312 -32.238667] [ -55.52076 -53.428535 -56.4767 ... -68.153885 -66.77085 -58.600616] [ -59.22766 -58.769135 -54.14502 ... -64.58172 -65.165565 -57.34685 ] ... [-261.01627 -245.20702 -258.9687 ... -285.7149 -292.39874 -260.91663 ] [-256.27637 -251.34431 -242.03348 ... -280.47235 -287.56 -256.33374 ] [-261.22495 -245.68788 -258.8527 ... -286.35617 -292.90662 -261.41626 ]]] ``` *...and for TensorFlow:* - Loss: `0.019036641 ` - Logits: ``` [[[ -32.9011 -31.202427 -34.662262 ... -39.486755 -39.87316 -32.238716] [ -55.52075 -53.42854 -56.476707 ... -68.153885 -66.770874 -58.60062 ] [ -59.227722 -58.76918 -54.14507 ... -64.58176 -65.165596 -57.346878] ... [-261.01633 -245.20723 -258.96875 ... -285.71472 -292.3988 -260.91663 ] [-256.27637 -251.3442 -242.03352 ... -280.4723 -287.56 -256.33374 ] [-261.22498 -245.68774 -258.85263 ... -286.3562 -292.90668 -261.41626 ]]] ``` Shifting the logits so that tokens < n predict the nth token like in the TF example above should fix this.
03-24-2019 21:20:33
03-24-2019 21:20:33
I fixed the test failure above in `472857c`. Note that this occurred because I was running torch > 1.0. They fixed the contiguous view issue in https://github.com/pytorch/pytorch/issues/3653. In my view, it would probably make sense to update torch to >1.0 and remove `contiguous()` calls where possible throughout this codebase.<|||||>Hi @CatalinVoss, This looks good, thanks! We'll try to keep backward compatibility up to pytorch 0.4.1 for the moment since quite a lot of downstream libraries use this package (like AllenNLP, fair, ParlAI...). So it's good to have the `contiguous()` calls, thanks for that. Do you want to add a simple script showcasing finetuning GPT-2 on some dataset (open question, you don't need to)?<|||||>OK, thanks! That all makes sense. Yes, I can add a demo finetuning script, but I probably won't get to it immediately, so probably best to do it in another PR, if that's OK.<|||||>Ok good to merge, thanks @CatalinVoss!
transformers
403
closed
[Question]Embedding Generate Problem
When I use the 'bert-base-uncased' model to get English embeddings, I get an accurate and unchanging tensor. But when I do the same thing with 'bert-base-chinese' and input the same sequence every time, I get different tensors. My input sequence looks like "[CLS] + sequence + [SEP]".
03-24-2019 16:13:03
03-24-2019 16:13:03
Hi, how did you get the English embeddings using the bert-base-uncased model? I have fine-tuned the model on a domain-specific corpus and am trying to get embeddings now using the pytorch_model.bin I got after fine-tuning. Any idea how to do this?<|||||>Are your models in evaluation mode (`model.eval()`)? Usually reproducibility issues come from forgetting to disable the DropOut modules using `model.eval()`.<|||||>> Are your models in evaluation mode (`model.eval()`)? > Usually reproducibility issues come from forgetting to disable the DropOut modules using `model.eval()`. Oh, that's really helpful. My ridiculous mistake. Thank you for your answer!!!<|||||>@SupUnicorn Following the above comment, could you please let me know how to disable the DropOut modules during evaluation when reproducing results with a fine-tuned model?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
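A hedged sketch of the reproducibility check discussed above (the Chinese input text is arbitrary): with `model.eval()` dropout is disabled, so two forward passes over the same input give identical embeddings.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')
model.eval()   # without this, dropout makes every forward pass slightly different

tokens = ['[CLS]'] + tokenizer.tokenize("你好世界") + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    layers_a, _ = model(input_ids)
    layers_b, _ = model(input_ids)

print(torch.allclose(layers_a[-1], layers_b[-1]))  # True once the model is in eval mode
```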
transformers
402
closed
gpt2 tokenizer issue with ValueError: chr() arg not in range(256) in Python 2.X
See [the code](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_gpt2.py#L50) Here's a solution for Python 2.X:
```python
@lru_cache()
def bytes_to_unicode():
    """
    Returns list of utf-8 byte and a corresponding list of unicode strings.
    The reversible bpe codes work on unicode strings.
    This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
    When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
    This is a signficant percentage of your normal, say, 32K bpe vocab.
    To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
    And avoids mapping to whitespace/control characters the bpe code barfs on.
    """
    bs = list(range(ord("!"), ord("~")+1)) + list(range(ord("¡"), ord("¬")+1)) + list(range(ord("®"), ord("ÿ")+1))
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8+n)
            n += 1
    cs = [unichr(n) for n in cs]
    return dict(zip(bs, cs))
```
And change the encode method to be:
```python
def encode(self, text):
    bpe_tokens = []
    for token in re.findall(self.pat, text):
        token = ''.join(self.byte_encoder[ord(b)] for b in token.encode('utf-8'))
        bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
    if len(bpe_tokens) > self.max_len:
        raise ValueError(
            "Token indices sequence length is longer than the specified maximum "
            " sequence length for this OpenAI GPT-2 model ({} > {}). Running this"
            " sequence through the model will result in indexing errors".format(len(bpe_tokens), self.max_len)
        )
    return bpe_tokens
```
03-24-2019 07:57:09
03-24-2019 07:57:09
Indeed, I have added backward compatibility to python 2 for GPT-2. Do you want to submit a PR on this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
401
closed
How can I generate new text after having fine-tuned BERT on a custom dataset ?
Hey, Once I've fine-tuned the Language Model, how can I get it to generate new text ? Is there any example available ? Thanks !
03-23-2019 16:20:21
03-23-2019 16:20:21
Also interested in this!<|||||>Hi, it's quite difficult to use BERT to generate text, as BERT is not a causal language model per se. Here is an example: https://github.com/nyu-dl/bert-gen by @W4ngatang and @kyunghyuncho.<|||||>BERT was not trained for text generation since it's not trained in the classical LM setting. However, there are some new approaches that don't rely on next-word prediction in the classical LM way. Have a look at the [Insertion Transformer](https://arxiv.org/abs/1902.03249) and [Insertion-based Decoding](https://arxiv.org/abs/1902.01370).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
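Since BERT is a masked LM rather than a causal LM, the closest thing to "generation" with this repo is iterative mask filling in the spirit of the bert-gen approach linked above. A minimal hedged sketch of a single fill step (the sentence and the predicted token are only illustrative; a fine-tuned checkpoint could be loaded the same way via `from_pretrained` on a saved state dict):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

tokens = ['[CLS]', 'the', 'cat', 'sat', 'on', 'the', '[MASK]', '.', '[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    predictions = model(input_ids)                 # [1, seq_len, vocab_size]

mask_index = tokens.index('[MASK]')
best_id = predictions[0, mask_index].argmax().item()
print(tokenizer.convert_ids_to_tokens([best_id]))  # e.g. ['mat']
```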
transformers
400
closed
how to freeze bert model and just train a classifier?
How can I freeze all layers of BERT and just train a task-based classifier on top?
03-23-2019 13:56:18
03-23-2019 13:56:18
Hi the BERT models are regular PyTorch models, you can just use the usual way we freeze layers in PyTorch. For example you can have a look at the [Transfer Learning tutorial of PyTorch](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#convnet-as-fixed-feature-extractor). In our case freezing the pretrained part of a `BertForSequenceClassification` model would look like this ```python model = BertForSequenceClassification.from_pretrained('bert-base-uncased') for param in model.bert.parameters(): param.requires_grad = False ``` Then only the classification layer should be have `requires_grad=True`.<|||||>Wouldn't this solution freeze all layers, including the classifier layer?<|||||>@nuno-carneiro @thomwolf I think, this will freeze all the layers including the classifier layer. (Correct me, if I'm wrong) ``` model = BertForSequenceClassification.from_pretrained('bert-base-uncased') for param in model.bert.parameters(): param.requires_grad = False ``` **I think the below code will freeze only the BERT layers (Correct me, if I'm wrong)** ``` model = BertForSequenceClassification.from_pretrained('bert-base-uncased') for param in model.bert.bert.parameters(): param.requires_grad = False ```<|||||>You are right @kalyanks0611 <|||||>AttributeError: 'BertModel' object has no attribute 'bert' I got the above error when doing for param in model.bert.bert.parameters(): param.requires_grad = False Any clue? Thanks :)<|||||>@yhl48 Hi, please open a new issue with the problem you're facing. Thank you!<|||||>does anyone know how to do this for the TF 2.0 version too?<|||||>> AttributeError: 'BertModel' object has no attribute 'bert' > > I got the above error when doing > > for param in model.bert.bert.parameters(): > param.requires_grad = False > > Any clue? Thanks :) I got the same error. This is my workaround following the one in #1431 : for name, param in model.named_parameters(): if 'classifier' not in name: # classifier layer param.requires_grad = False<|||||>You're probably instantiating your model as a `BertModel` whereas @kalyanks0611's example uses `BertForSequenceClassification`.<|||||>> does anyone know how to do this for the TF 2.0 version too? This is the real question<|||||>> does anyone know how to do this for the TF 2.0 version too? Actually in tf2 framework, pre-trained is saved in `model.bert.weights`, which is a list. So you need to do: ``` for w in model.bert.weights(): w._trainable= False ``` <|||||>> > does anyone know how to do this for the TF 2.0 version too? > > This is the real question see [my post](https://github.com/huggingface/transformers/issues/400#issuecomment-571354368)<|||||>> does anyone know how to do this for the TF 2.0 version too? model.bert.trainable = False Is this ok? But I found that the training speed has not accelerated<|||||>> > does anyone know how to do this for the TF 2.0 version too? > > Actually in tf2 framework, pre-trained is saved in `model.bert.weights`, which is a list. So you need to do: > > ``` > for w in model.bert.weights(): > w._trainable= False > ``` Thanks! I needed a very slight amendment for it to work. ``` for w in model.get_layer('tf_distil_bert_model').weights: w._trainable = False ``` Works for me. 
And training time is halved from 9 hours to 4 hours.<|||||>I am trying to let bert model layer 10 and layer 11 are trainable, like below: ```shell bert_model = TFBertModel.from_pretrained("bert-base-uncased") for w in bert_model.bert.weights: if w.name.find('layer_._10') == -1 and w.name.find('layer_._11') == -1: print(w.name) w._trainable = False ``` But the result is as below: ```shell Total params: 109,483,776 Trainable params: 109,483,776 Non-trainable params: 0 ``` How can i change the specific layers to be trainable? Thanks<|||||>> > > I am trying to let bert model layer 10 and layer 11 are trainable, like below: > > ```shell > bert_model = TFBertModel.from_pretrained("bert-base-uncased") > for w in bert_model.bert.weights: > if w.name.find('layer_._10') == -1 and w.name.find('layer_._11') == -1: > print(w.name) > w._trainable = False > ``` > > But the result is as below: > > ```shell > Total params: 109,483,776 > Trainable params: 109,483,776 > Non-trainable params: 0 > ``` > > How can i change the specific layers to be trainable? > > Thanks The codes below would successfully freezes layers in a model: ``` from transformers import TFXLNetModel as transformerModel transformer_model_name = 'xlnet-base-cased' transformer_model = transformerModel.from_pretrained(transformer_model_name) for layer in transformer_model.layers: layer.trainable = False ```<|||||>> @nuno-carneiro @thomwolf > I think, this will freeze all the layers including the classifier layer. (Correct me, if I'm wrong) > > ``` > model = BertForSequenceClassification.from_pretrained('bert-base-uncased') > > for param in model.bert.parameters(): > param.requires_grad = False > ``` > > **I think the below code will freeze only the BERT layers (Correct me, if I'm wrong)** > > ``` > model = BertForSequenceClassification.from_pretrained('bert-base-uncased') > > for param in model.bert.bert.parameters(): > param.requires_grad = False > ``` Hi, I have > @nuno-carneiro @thomwolf > I think, this will freeze all the layers including the classifier layer. (Correct me, if I'm wrong) > > ``` > model = BertForSequenceClassification.from_pretrained('bert-base-uncased') > > for param in model.bert.parameters(): > param.requires_grad = False > ``` > > **I think the below code will freeze only the BERT layers (Correct me, if I'm wrong)** > > ``` > model = BertForSequenceClassification.from_pretrained('bert-base-uncased') > > for param in model.bert.bert.parameters(): > param.requires_grad = False > ``` I think there are 2 elements in BertForSequenceClassification. One is bert and another one is classifier?or? So I think we dont need model.bert.bert.parameter(). We just need model.bert.parameters()? or?<|||||>`model = BertForSequenceClassification.from_pretrained('bert-base-uncased')` When you run `model` you will get the following architecture (I skipped many intermediate modules) ``` BertForSequenceClassification( (bert): BertModel( (embeddings): BertEmbeddings( .... (encoder): BertEncoder( (layer): ModuleList( (0): BertLayer( ... (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ... 
(pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (dropout): Dropout(p=0.1, inplace=False) (classifier): Linear(in_features=768, out_features=2, bias=True) ) ``` So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them). Therefore, the following code ``` for param in model.bert.bert.parameters(): param.requires_grad = False ``` should be ``` for param in model.bert.parameters(): param.requires_grad = False ```<|||||>> `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')` > > When you run `model` you will get the following architecture (I skipped many intermediate modules) > > ``` > BertForSequenceClassification( > (bert): BertModel( > (embeddings): BertEmbeddings( > .... > (encoder): BertEncoder( > (layer): ModuleList( > (0): BertLayer( > ... > (output): BertSelfOutput( > (dense): Linear(in_features=768, out_features=768, bias=True) > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) > (dropout): Dropout(p=0.1, inplace=False) > ) > ) > ... > (pooler): BertPooler( > (dense): Linear(in_features=768, out_features=768, bias=True) > (activation): Tanh() > ) > ) > (dropout): Dropout(p=0.1, inplace=False) > (classifier): Linear(in_features=768, out_features=2, bias=True) > ) > ``` > > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them). > > Therefore, the following code > > ``` > for param in model.bert.bert.parameters(): > param.requires_grad = False > ``` > > should be > > ``` > for param in model.bert.parameters(): > param.requires_grad = False > ``` I think you are right. And I think there are no model.bert.bert? Or?<|||||>> > `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')` > > When you run `model` you will get the following architecture (I skipped many intermediate modules) > > ``` > > BertForSequenceClassification( > > (bert): BertModel( > > (embeddings): BertEmbeddings( > > .... > > (encoder): BertEncoder( > > (layer): ModuleList( > > (0): BertLayer( > > ... > > (output): BertSelfOutput( > > (dense): Linear(in_features=768, out_features=768, bias=True) > > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) > > (dropout): Dropout(p=0.1, inplace=False) > > ) > > ) > > ... > > (pooler): BertPooler( > > (dense): Linear(in_features=768, out_features=768, bias=True) > > (activation): Tanh() > > ) > > ) > > (dropout): Dropout(p=0.1, inplace=False) > > (classifier): Linear(in_features=768, out_features=2, bias=True) > > ) > > ``` > > > > > > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them). > > Therefore, the following code > > ``` > > for param in model.bert.bert.parameters(): > > param.requires_grad = False > > ``` > > > > > > should be > > ``` > > for param in model.bert.parameters(): > > param.requires_grad = False > > ``` > > I think you are right. And I think there are no model.bert.bert? 
Or? I think so, if you want to freeze bert part in bert model, u just say for param in model.bert.parameters(): param.requires_grad = False but how are about optizmer ? is look like in this way ? optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)<|||||>> > > `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')` > > > When you run `model` you will get the following architecture (I skipped many intermediate modules) > > > ``` > > > BertForSequenceClassification( > > > (bert): BertModel( > > > (embeddings): BertEmbeddings( > > > .... > > > (encoder): BertEncoder( > > > (layer): ModuleList( > > > (0): BertLayer( > > > ... > > > (output): BertSelfOutput( > > > (dense): Linear(in_features=768, out_features=768, bias=True) > > > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) > > > (dropout): Dropout(p=0.1, inplace=False) > > > ) > > > ) > > > ... > > > (pooler): BertPooler( > > > (dense): Linear(in_features=768, out_features=768, bias=True) > > > (activation): Tanh() > > > ) > > > ) > > > (dropout): Dropout(p=0.1, inplace=False) > > > (classifier): Linear(in_features=768, out_features=2, bias=True) > > > ) > > > ``` > > > > > > > > > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them). > > > Therefore, the following code > > > ``` > > > for param in model.bert.bert.parameters(): > > > param.requires_grad = False > > > ``` > > > > > > > > > should be > > > ``` > > > for param in model.bert.parameters(): > > > param.requires_grad = False > > > ``` > > > > > > I think you are right. And I think there are no model.bert.bert? Or? > > I think so, if you want to freeze bert part in bert model, u just say > for param in model.bert.parameters(): > param.requires_grad = False > but how are about optizmer ? > > is look like in this way ? > optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001) I think you have written right code. But we should write usually 2 parts together. I mean: for param in model.bert.parameters(): param.requires_grad = False optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001) Cuz we need to know which parameters are frozen. <|||||>> > > > `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')` > > > > When you run `model` you will get the following architecture (I skipped many intermediate modules) > > > > ``` > > > > BertForSequenceClassification( > > > > (bert): BertModel( > > > > (embeddings): BertEmbeddings( > > > > .... > > > > (encoder): BertEncoder( > > > > (layer): ModuleList( > > > > (0): BertLayer( > > > > ... > > > > (output): BertSelfOutput( > > > > (dense): Linear(in_features=768, out_features=768, bias=True) > > > > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) > > > > (dropout): Dropout(p=0.1, inplace=False) > > > > ) > > > > ) > > > > ... 
> > > > (pooler): BertPooler( > > > > (dense): Linear(in_features=768, out_features=768, bias=True) > > > > (activation): Tanh() > > > > ) > > > > ) > > > > (dropout): Dropout(p=0.1, inplace=False) > > > > (classifier): Linear(in_features=768, out_features=2, bias=True) > > > > ) > > > > ``` > > > > > > > > > > > > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them). > > > > Therefore, the following code > > > > ``` > > > > for param in model.bert.bert.parameters(): > > > > param.requires_grad = False > > > > ``` > > > > > > > > > > > > should be > > > > ``` > > > > for param in model.bert.parameters(): > > > > param.requires_grad = False > > > > ``` > > > > > > > > > I think you are right. And I think there are no model.bert.bert? Or? > > > > > > I think so, if you want to freeze bert part in bert model, u just say > > for param in model.bert.parameters(): > > param.requires_grad = False > > but how are about optizmer ? > > is look like in this way ? > > optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001) > > I think you have written right code. But we should write usually 2 parts together. I mean: > for param in model.bert.parameters(): > param.requires_grad = False > optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001) > Cuz we need to know which parameters are frozen. sorry, I do not know what do you mean? so let say, if we just want to freeze the input part ( input embedding). how can I write the code? bert = BertModel.from_pretrained('bert-base-uncased') for name, param in bert.named_parameters(): if name.startswith('embeddings'): param.requires_grad = False this above three line, is tell model do not train the embedding right? but how to tell the optimizer to do not change the embedding? <|||||>> > > > > `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')` > > > > > When you run `model` you will get the following architecture (I skipped many intermediate modules) > > > > > ``` > > > > > BertForSequenceClassification( > > > > > (bert): BertModel( > > > > > (embeddings): BertEmbeddings( > > > > > .... > > > > > (encoder): BertEncoder( > > > > > (layer): ModuleList( > > > > > (0): BertLayer( > > > > > ... > > > > > (output): BertSelfOutput( > > > > > (dense): Linear(in_features=768, out_features=768, bias=True) > > > > > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) > > > > > (dropout): Dropout(p=0.1, inplace=False) > > > > > ) > > > > > ) > > > > > ... > > > > > (pooler): BertPooler( > > > > > (dense): Linear(in_features=768, out_features=768, bias=True) > > > > > (activation): Tanh() > > > > > ) > > > > > ) > > > > > (dropout): Dropout(p=0.1, inplace=False) > > > > > (classifier): Linear(in_features=768, out_features=2, bias=True) > > > > > ) > > > > > ``` > > > > > > > > > > > > > > > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them). 
> > > > > Therefore, the following code > > > > > ``` > > > > > for param in model.bert.bert.parameters(): > > > > > param.requires_grad = False > > > > > ``` > > > > > > > > > > > > > > > should be > > > > > ``` > > > > > for param in model.bert.parameters(): > > > > > param.requires_grad = False > > > > > ``` > > > > > > > > > > > > I think you are right. And I think there are no model.bert.bert? Or? > > > > > > > > > I think so, if you want to freeze bert part in bert model, u just say > > > for param in model.bert.parameters(): > > > param.requires_grad = False > > > but how are about optizmer ? > > > is look like in this way ? > > > optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001) > > > > > > I think you have written right code. But we should write usually 2 parts together. I mean: > > for param in model.bert.parameters(): > > param.requires_grad = False > > optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001) > > Cuz we need to know which parameters are frozen. > > sorry, I do not know what do you mean? > so let say, if we just want to freeze the input part ( input embedding). how can I write the code? > bert = BertModel.from_pretrained('bert-base-uncased') > > for name, param in bert.named_parameters(): > if name.startswith('embeddings'): > param.requires_grad = False > this above three line, is tell model do not train the embedding right? > > but how to tell the optimizer to do not change the embedding? these three lines have set False in 'Embedding' right? Then we need just to write what you have written: optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)<|||||>Are any of the layers in a pre-trained BERT model originally frozen? If not are there any supporting articles or research papers that I can look to? <|||||>> Are any of the layers in a pre-trained BERT model originally frozen? If not are there any supporting articles or research papers that I can look to? I dont think there are any of layers originally frozen. You can read some Papers about Bert firstly, then in Pytoch forum or Tensorflow you could learn how to freeze the layers.<|||||>In tensorflow, other than the classifier layers, to freeze some layers in a underlying base transformer model something like this maybe needed(corrections are welcome): ``` bertmodel = TFAutoModelForSequenceClassification.from_pretrained(bertModelName, num_labels=output_num_classes, from_pt=True) for i,v in enumerate(bertmodel.layers[0].variables): if i>=19: bertmodel.layers[0].variables[i] = tf.stop_gradient(v) print("do freeze {} {}".format(i, v.name)) else: print("not freeze {} {}".format(i, v.name)) ```<|||||>If I want to freeze the entire model, I can use model.trainable = False and vice vs, however, if I want to freeze part of it, I am setting that up in the loop. It's still not updating the trainable flag. I am using TensorFlow. Any thoughts? ``` # Adjust the trainable layer weights based on retrain_layer_count # If retrain_layer_count is 0, then base model is frozen. # If retrain_layer_count is 12, then the entire base model is trainable. # And that implies that all the pretrained weights are lost and it relearns # from the input data. # If retrain_layer_count is between 1 and 11, then the last n layers of # the pretrained model retrained. 
if retrain_layer_count == 0: # The pretained model is frozen model.trainable = False elif retrain_layer_count == 12: # The pretrained model is retrained thru all layers. model.trainable = True else: # Restrict training to the num_train_layers outer transformer layers retrain_layer_list = [] #model.trainable = False for retrain_layer_number in range(retrain_layer_count): layer_code = '_' + str(11 - retrain_layer_number) retrain_layer_list.append(layer_code) print('Retrain layers: \n', retrain_layer_list) print("After adjusting...") for layer in model.weights: layer._trainable = False print("***", layer.name, layer._trainable) if 'layer_' in layer.name and layer.name.split(".")[1].split("/")[0] in retrain_layer_list: layer._trainable = True print("$$$", layer.name, layer._trainable) elif 'layer_' not in layer.name : layer._trainable = True print("###", layer.name, layer._trainable) #for weight_details in model.weights: # print(weight_details.name, weight_details._trainable) print(f"Number of trainable parameters : {count_params(model.trainable_weights)}") print(f"Number of non-trainable parameters : {count_params(model.non_trainable_variables)}") ``` <|||||>> If I want to freeze the entire model, I can use model.trainable = False and vice vs, however, if I want to freeze part of it, I am setting that up in the loop. It's still not updating the trainable flag. I am using TensorFlow. Any thoughts? > > ``` > # Adjust the trainable layer weights based on retrain_layer_count > # If retrain_layer_count is 0, then base model is frozen. > # If retrain_layer_count is 12, then the entire base model is trainable. > # And that implies that all the pretrained weights are lost and it relearns > # from the input data. > # If retrain_layer_count is between 1 and 11, then the last n layers of > # the pretrained model retrained. > if retrain_layer_count == 0: > # The pretained model is frozen > model.trainable = False > > elif retrain_layer_count == 12: > # The pretrained model is retrained thru all layers. > model.trainable = True > > else: > # Restrict training to the num_train_layers outer transformer layers > retrain_layer_list = [] > #model.trainable = False > for retrain_layer_number in range(retrain_layer_count): > > layer_code = '_' + str(11 - retrain_layer_number) > retrain_layer_list.append(layer_code) > > print('Retrain layers: \n', retrain_layer_list) > print("After adjusting...") > for layer in model.weights: > layer._trainable = False > print("***", layer.name, layer._trainable) > if 'layer_' in layer.name and layer.name.split(".")[1].split("/")[0] in retrain_layer_list: > layer._trainable = True > print("$$$", layer.name, layer._trainable) > elif 'layer_' not in layer.name : > layer._trainable = True > print("###", layer.name, layer._trainable) > > #for weight_details in model.weights: > # print(weight_details.name, weight_details._trainable) > print(f"Number of trainable parameters : {count_params(model.trainable_weights)}") > print(f"Number of non-trainable parameters : {count_params(model.non_trainable_variables)}") > ``` I ran into this issue too, where if I first freeze all layers, and then selectively make some trainable, it does not work. Do it the other way round: set everything trainable and then freeze the layers you want!
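Pulling the thread above together, here is a minimal PyTorch sketch of selective freezing, assuming the `transformers` `BertForSequenceClassification` layout discussed in the thread; the layer split point and learning rate are illustrative choices, not prescriptions from the thread:

```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Freeze the embeddings and the first 8 encoder blocks; leave the last 4
# blocks and the classifier head trainable (the split point is arbitrary).
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False

# Only hand the still-trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)
```

Filtering on `requires_grad` when building the optimizer is not strictly required (frozen parameters receive no gradient anyway), but it keeps the optimizer state small and makes the intent explicit.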
transformers
399
closed
Is the GPT-2 pretrained model language agnostic?
I'm trying to build a language model that trains on a Polish corpus, and I'm wondering whether the pretrained GPT-2 model you provide supports that, or if it's English-only. Thank you.
03-22-2019 20:45:26
03-22-2019 20:45:26
Hi Aly, GPT-2 is pretrained on an English only corpus.<|||||>Hi @thomwolf , thank you for the clarification.
transformers
398
closed
Multi GPU
This is an incomplete proof-of-concept of how to run BERT across multiple GPUs. It will take advantage of multiple GPUs' memory, but not of their compute cores. Do not merge.
03-22-2019 00:41:21
03-22-2019 00:41:21
Hi @dirkgr, thanks for this PR. I think we will keep all the GPU/multi-GPU logic outside of the main library for now. It makes it easier to integrate the module in downstream libraries and integrating such modifications at the current stage would cause too many breaking changes for the users unfortunately. So I'm closing this PR for now.<|||||>Oh, this was only meant for internal consumption. That said, I thought a bit about how to make it general enough to properly integrate it. I thought the setting for multi-GPU could be part of the config, so it wouldn't affect normal operation. How would you want to do it? I didn't get that much interest in this patch internally though, so this is not the most important project right now.
transformers
397
closed
Allow do_lower_case regardless of do_basic_tokenize
It would make sense to me that you could set `do_lower_case=True` even when `do_basic_tokenize=False`. If your input has already been tokenized (but not lower-cased), you still want to lowercase it. As the code is currently written, that does not seem possible, since [the BasicTokenizer is responsible for the lowercasing](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization.py#L101-L105): if do_basic_tokenize: self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case, never_split=never_split) self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab) self.max_len = max_len if max_len is not None else int(1e12) I would therefore propose to move lowercasing into its own method, or to add a do_lower_case option to WordpieceTokenizer.
03-21-2019 14:56:09
03-21-2019 14:56:09
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@thomwolf Please re-open. If I have the time, I can try to work on this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Bump so I don't forget about this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
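Until such an option exists, one possible workaround is to lowercase the pre-tokenized input yourself before handing it to the wordpiece tokenizer. This is only a sketch, assuming the `BertTokenizer`/`WordpieceTokenizer` interface quoted in the issue body; note it also skips the accent stripping that `BasicTokenizer` would normally perform:

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_basic_tokenize=False)

pretokenized = ["Judging", "From", "Previous", "Posts"]  # already-tokenized input
wordpieces = []
for token in pretokenized:
    # Lowercase manually, since BasicTokenizer (and its lowercasing) is skipped.
    wordpieces.extend(tokenizer.wordpiece_tokenizer.tokenize(token.lower()))
```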
transformers
396
closed
add tqdm to the process of eval in examples/run_swag.py
Maybe better.
03-21-2019 13:04:31
03-21-2019 13:04:31
tests/tokenization_openai_test.py::OpenAIGPTTokenizationTest::test_full_tokenizer FAILED<|||||>Ok, thanks!
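For context, the change amounts to wrapping the evaluation dataloader in `tqdm`, roughly as below. This is a sketch only: `model` and `eval_dataloader` are assumed to be set up as in the example script, and device placement and metric accumulation are omitted.

```python
import torch
from tqdm import tqdm

model.eval()
for input_ids, input_mask, segment_ids, label_ids in tqdm(eval_dataloader, desc="Evaluating"):
    with torch.no_grad():
        logits = model(input_ids, segment_ids, input_mask)
    # ... accumulate eval loss / accuracy as before ...
```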
transformers
395
closed
AttributeError: 'BertOnlyMLMHead' object has no attribute 'seq_relationship'
Is there way to fix it? ```bash Skipping cls/predictions/transform/dense/kernel/adam_m Skipping cls/predictions/transform/dense/kernel/adam_v --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-11-5d7ca59bd98d> in <module>() 7 model = BertForMaskedLM.from_pretrained( 8 pretrained_model_name_or_path=model_folder, ----> 9 from_tf=True, cache_dir=None) 10 #model = BertForMaskedLM.from_pretrained(model_version) 11 /content/my_pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in from_pretrained(cls, pretrained_model_name_or_path, state_dict, cache_dir, from_tf, *inputs, **kwargs) 605 # Directly load from a TensorFlow checkpoint 606 weights_path = os.path.join(serialization_dir, TF_WEIGHTS_NAME) --> 607 return load_tf_weights_in_bert(model, weights_path) 608 # Load from a PyTorch state_dict 609 old_keys = [] /content/my_pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path) 100 # 101 else: --> 102 pointer = getattr(pointer, l[0]) 103 ''' 104 #J /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 533 return modules[name] 534 raise AttributeError("'{}' object has no attribute '{}'".format( --> 535 type(self).__name__, name)) 536 537 def __setattr__(self, name, value): AttributeError: 'BertOnlyMLMHead' object has no attribute 'seq_relationship' ```
03-21-2019 05:57:02
03-21-2019 05:57:02
Hi, What command are you running to load the model? Are you loading your own model or one of our pre-trained ones? It's normal that there is no `seq_relationship` attribute in a `BertOnlyMLMHead`, but our pre-trained model should load without error.<|||||> I was loading my own pre-trained model with "convert_tf_checkpoint_to_pytorch.py". Would it be OK if I skipped "seq_relationship" in the source code? <|||||>Maybe, I don't have enough information to really be able to help you at this stage.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
394
closed
Minor change in README
Spelling fix of: weigths to weights
03-21-2019 05:04:13
03-21-2019 05:04:13
Thanks!
transformers
393
closed
AttributeError: 'BertForPreTraining' object has no attribute 'shape'
Is there any suggestion for fixing the following? I was trying "convert_tf_checkpoint_to_pytorch.py" to convert a model trained from scratch but the conversion didn't work out.... ```bash Skipping cls/seq_relationship/output_weights/adam_v Traceback (most recent call last): File "pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 66, in <module> args.pytorch_dump_path) File "pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 37, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_bert(model, tf_checkpoint_path) File "/content/my_pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 117, in load_tf_weights_in_bert assert pointer.shape == array.shape File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 535, in __getattr__ type(self).__name__, name)) AttributeError: 'BertForPreTraining' object has no attribute 'shape' ```
03-20-2019 23:48:16
03-20-2019 23:48:16
Hi, Is it a model trained from the original Google BERT Tensorflow implementation?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I get the same error as @leejason. I used the pre-trained BERT base uncased model from the original TF implementation and fine-tuned it on my own training set. I am now trying to use the fine-tuned model to do masked LM. <|||||>I also have similar issue. I pretrained a bert **from scratch** using **nvidia implementation** with **customized config file and vocab**. Then I use ``` convert_tf_checkpoint_to_pytorch.convert_tf_checkpoint_to_pytorch(BERT_MODEL_PATH + 'model.ckpt', BERT_MODEL_PATH + 'bert_config.json', BERT_MODEL_PATH + 'pytorch_model.bin') ``` ``` ... Loading TF weight cls/predictions/transform/dense/bias with shape [512] Loading TF weight cls/predictions/transform/dense/bias/adam_m with shape [512] Loading TF weight cls/predictions/transform/dense/bias/adam_v with shape [512] Loading TF weight cls/predictions/transform/dense/kernel with shape [512, 512] Loading TF weight cls/predictions/transform/dense/kernel/adam_m with shape [512, 512] Loading TF weight cls/predictions/transform/dense/kernel/adam_v with shape [512, 512] Loading TF weight cls/seq_relationship/output_bias with shape [2] Loading TF weight cls/seq_relationship/output_weights with shape [2, 512] Loading TF weight global_step with shape [] Loading TF weight good_steps with shape [] Loading TF weight loss_scale with shape [] Skipping bad_steps --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-20-f28c5910adfc> in <module> ----> 1 bert = BertModel.from_pretrained(BERT_MODEL_PATH, from_tf=True).bert() ~/InEx/input/huggingface/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 611 # Directly load from a TensorFlow checkpoint 612 weights_path = os.path.join(serialization_dir, TF_WEIGHTS_NAME) --> 613 return load_tf_weights_in_bert(model, weights_path) 614 # Load from a PyTorch state_dict 615 old_keys = [] ~/InEx/input/huggingface/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path) 107 array = np.transpose(array) 108 try: --> 109 assert pointer.shape == array.shape 110 except AssertionError as e: 111 e.args += (pointer.shape, array.shape) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name) 537 return modules[name] 538 raise AttributeError("'{}' object has no attribute '{}'".format( --> 539 type(self).__name__, name)) 540 541 def __setattr__(self, name, value): AttributeError: 'BertModel' object has no attribute 'shape'```<|||||>I solve this problem by bypass some variables in the model, such as "bad_steps", “global_step", "good_steps", "loss_scale". They don't have attribute 'shape‘ and I don't need them when fineturning the model. In modeling.py, line 121, replace it with if any(n in ["adam_v", "adam_m", "global_step", "bad_steps", "global_step", "good_steps", "loss_scale"] for n in name): and delete line 151-156.<|||||>> I solve this problem by bypass some variables in the model, such as "bad_steps", “global_step", "good_steps", "loss_scale". They don't have attribute 'shape‘ and I don't need them when fineturning the model. 
> > In modeling.py, line 121, replace it with > if any(n in ["adam_v", "adam_m", "global_step", "bad_steps", "global_step", "good_steps", "loss_scale"] for n in name): > and delete line 151-156. It works. Thanks very much !<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I'm getting a similar error when trying to convert the newer BERT models released at [tensorflow/models/tree/master/official/nlp/](https://github.com/tensorflow/models/tree/master/official/nlp/bert#pre-trained-models). These models are either BERT models trained with Keras or else checkpoints converted from the original [google-research/bert](https://github.com/google-research/bert) repository. I also get the same error when I convert the TF1 to TF2 checkpoints myself using the [tf2_encoder_checkpoint_converter.py](https://github.com/tensorflow/models/blob/master/official/nlp/bert/tf2_encoder_checkpoint_converter.py) script: What I have tried: First, I have downloaded a model: ``` wget https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/cased_L-12_H-768_A-12.tar.gz # or wget https://storage.googleapis.com/cloud-tpu-checkpoints/bert/tf_20/cased_L-12_H-768_A-12.tar.gz ``` After unpacking: ``` export BERT_BASE_DIR=cased_L-12_H-768_A-12 transformers-cli convert --model_type bert \ --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \ --config $BERT_BASE_DIR/bert_config.json \ --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin ``` The command prints the configuration but throws the following error: ``` INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer/_value_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [768, 12, 64] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE with shape [768] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE with shape [768] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [768] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [12, 64, 768] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_intermediate_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [3072] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_intermediate_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [768, 3072] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [768] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [3072, 768] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE with shape [768] INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE with shape [768] INFO:transformers.modeling_bert:Loading TF weight save_counter/.ATTRIBUTES/VARIABLE_VALUE with shape [] INFO:transformers.modeling_bert:Skipping _CHECKPOINTABLE_OBJECT_GRAPH Traceback (most recent call last): File 
"/home/jbarry/anaconda3/envs/transformers/bin/transformers-cli", line 30, in <module> service.run() File "/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/commands/convert.py", line 62, in run convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output) File "/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_bert(model, config, tf_checkpoint_path) File "/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/modeling_bert.py", line 118, in load_tf_weights_in_bert assert pointer.shape == array.shape File "/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/torch/nn/modules/module.py", line 585, in __getattr__ type(self).__name__, name)) AttributeError: 'BertForPreTraining' object has no attribute 'shape' ``` This is happening in a fresh environment with PyTorch 1.3 installed in Anaconda (Linux), as well as pip-installing `tf-nightly` and `transformers` (2.3.0). Has anyone else been able to successfully convert the TF 2.0 version models to PyTorch or know where I'm going wrong? Thanks!<|||||>> I'm getting a similar error when trying to convert the newer BERT models released at > [tensorflow/models/tree/master/official/nlp/](https://github.com/tensorflow/models/tree/master/official/nlp/bert#pre-trained-models). > > These models are either BERT models trained with Keras or else checkpoints converted from > the original [google-research/bert](https://github.com/google-research/bert) repository. I also get the same error when I convert the TF1 to TF2 checkpoints myself using the [tf2_encoder_checkpoint_converter.py](https://github.com/tensorflow/models/blob/master/official/nlp/bert/tf2_encoder_checkpoint_converter.py) script: > > What I have tried: > > First, I have downloaded a model: > > ``` > wget https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/cased_L-12_H-768_A-12.tar.gz > # or > wget https://storage.googleapis.com/cloud-tpu-checkpoints/bert/tf_20/cased_L-12_H-768_A-12.tar.gz > ``` > > After unpacking: > > ``` > export BERT_BASE_DIR=cased_L-12_H-768_A-12 > > transformers-cli convert --model_type bert \ > --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \ > --config $BERT_BASE_DIR/bert_config.json \ > --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin > ``` > > The command prints the configuration but throws the following error: > > ``` > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer/_value_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [768, 12, 64] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE with shape [768] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE with shape [768] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [768] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [12, 64, 768] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_intermediate_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [3072] > INFO:transformers.modeling_bert:Loading TF weight 
model/layer_with_weights-9/_intermediate_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [768, 3072] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [768] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [3072, 768] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE with shape [768] > INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE with shape [768] > INFO:transformers.modeling_bert:Loading TF weight save_counter/.ATTRIBUTES/VARIABLE_VALUE with shape [] > INFO:transformers.modeling_bert:Skipping _CHECKPOINTABLE_OBJECT_GRAPH > Traceback (most recent call last): > File "/home/jbarry/anaconda3/envs/transformers/bin/transformers-cli", line 30, in <module> > service.run() > File "/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/commands/convert.py", line 62, in run > convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output) > File "/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch > load_tf_weights_in_bert(model, config, tf_checkpoint_path) > File "/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/modeling_bert.py", line 118, in load_tf_weights_in_bert > assert pointer.shape == array.shape > File "/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/torch/nn/modules/module.py", line 585, in __getattr__ > type(self).__name__, name)) > AttributeError: 'BertForPreTraining' object has no attribute 'shape' > ``` > > This is happening in a fresh environment with PyTorch 1.3 installed in Anaconda (Linux), as well as pip-installing `tf-nightly` and `transformers` (2.3.0). > > Has anyone else been able to successfully convert the TF 2.0 version models to PyTorch or know where I'm going wrong? Thanks! ignore those lines causing erros by changing https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L123-L127 to try: assert pointer.shape == array.shape except: pass same thing for https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L129 <|||||>> > > I solve this problem by bypass some variables in the model, such as "bad_steps", “global_step", "good_steps", "loss_scale". They don't have attribute 'shape‘ and I don't need them when fineturning the model. > > In modeling.py, line 121, replace it with > if any(n in ["adam_v", "adam_m", "global_step", "bad_steps", "global_step", "good_steps", "loss_scale"] for n in name): > and delete line 151-156. It helps! 
Thx u so much <3<|||||>I'm running into the same problem, I tried the solution proposed by @yzhang123 but it only makes me run in another error ``` Traceback (most recent call last): File "C:\Users\lrizzello\AppData\Local\JetBrains\PyCharm 2019.3.4\plugins\python\helpers\pydev\pydevd.py", line 1434, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Users\lrizzello\AppData\Local\JetBrains\PyCharm 2019.3.4\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/source/repos/DeduplicationSiamese/PythonDeduper/eval/eval_matcher_tf.py", line 91, in <module> load_tf_weights_in_bert(model, config, tf_path) File "C:\Users\lrizzello\Anaconda3\envs\dedupe_transformer\lib\site-packages\transformers\modeling_bert.py", line 129, in load_tf_weights_in_bert pointer.data = torch.from_numpy(array) TypeError: expected np.ndarray (got bytes) Process finished with exit code -1 ``` I got those checkpoints by training an existing huggingface model (namely 'google/bert_uncased_L-12_H-256_A-4') via the TFTrainer/TFTrainingArguments method I have tried many other hacks to get this working, such as [this one](https://stackoverflow.com/questions/60539758/biobert-for-keras-version-of-huggingface-transformers) or [this one](https://github.com/huggingface/transformers/issues/676#issuecomment-502101078) but unsuccesfully but nothing worked. I keep running into error after error. Has anyone managed to get this working any other way?<|||||>I managed to get it working by going through the pointers in debug mode and checking what variable name corresponded to what. This is the function I ended up using. ``` def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path): config_path = os.path.abspath(bert_config_file) tf_path = os.path.abspath(tf_checkpoint_path) print("Converting TensorFlow checkpoint from {} with config at {}".format(tf_path, config_path)) # Load weights from TF model init_vars = tf.train.list_variables(tf_path) excluded = ["BERTAdam", "_power", "global_step", "_CHECKPOINTABLE_OBJECT_GRAPH"] init_vars = list(filter(lambda x: all([True if e not in x[0] else False for e in excluded]), init_vars)) names = [] arrays = [] for name, shape in init_vars: print("Loading TF weight {} with shape {}".format(name, shape)) array = tf.train.load_variable(tf_path, name) names.append(name) arrays.append(array) config = BertConfig.from_json_file(bert_config_file) print("Building PyTorch model from configuration: {}".format(str(config))) # Initialise PyTorch model model = BertForSequenceClassification(config) for name, array in zip(names, arrays): name = name.split("/") # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v # which are not required for using pretrained model if any(n in ["adam_v", "adam_m", "global_step", "bad_steps", "global_step", "good_steps", "loss_scale", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "save_counter", ".OPTIMIZER_SLOT"] for n in name) or \ name[0] == "optimizer": print("Skipping {}".format("/".join(name))) continue if ".OPTIMIZER_SLOT" in name: idx = name.index(".OPTIMIZER_SLOT") name = name[:idx] elif ".ATTRIBUTES" in name: idx = name.index(".ATTRIBUTES") name = name[:idx] print(name) pointer = model for m_name in name: if re.fullmatch(r"[A-Za-z]+_\d+", m_name): scope_names = re.split(r"_(\d+)", m_name) else: scope_names = [m_name] if scope_names[0] == "kernel" or scope_names[0] == 
"gamma": pointer = getattr(pointer, "weight") elif scope_names[0] == "output_bias" or scope_names[0] == "beta": pointer = getattr(pointer, "bias") elif scope_names[0] == "output_weights": pointer = getattr(pointer, "weight") elif scope_names[0] == "squad": pointer = getattr(pointer, "classifier") elif scope_names[0] == "dense_output" or scope_names[0] == "bert_output": pointer = getattr(pointer, "output") elif scope_names[0] == "self_attention": pointer = getattr(pointer, "self") else: try: pointer = getattr(pointer, scope_names[0]) except AttributeError: logger.info("Skipping {}".format("/".join(name))) continue if len(scope_names) >= 2: num = int(scope_names[1]) pointer = pointer[num] if m_name[-11:] == "_embeddings": pointer = getattr(pointer, "weight") elif m_name == "kernel" or m_name == "gamma" or m_name == "output_weights": array = np.transpose(array) # print("Initialize PyTorch weight {}".format(name)) pointer.data = torch.from_numpy(array) # Save pytorch-model print("Save PyTorch model to {}".format(pytorch_dump_path)) torch.save(model.state_dict(), pytorch_dump_path) convert_tf_checkpoint_to_pytorch(tf_path, config_path, pytorch_dump_path) ``` <|||||>Hi, this is an actual programming error in modeling_bert.py. If you look at line 145 it's pretty obvious that the code should be continuing to the next iteration of the *outer* loop (over name, array) rather than the inner one (over the path components of name) - otherwise why would the error messages say "skipping {name}": https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L145 To fix this, simply extract the try/except block so that it wraps the entire loop (lines 127-148). I would supply a patch but I have to work with transformers 3.5.1 for the moment since I'm using sentence-transformers which hasn't been updated to the latest version.<|||||>@thomwolf If the above fix will be added to the master branch this will be great https://github.com/smartshark/transformers/pull/1 <|||||>> I revised `modeling_bert.by` following @lrizzello 's code and could save tf1 checkpoint I personally trained into pytorch. I first changed tf1 checkpoint to tf2, and then used the below code. Here is the code I revised in `modeling_bert.py` ``` def load_tf_weights_in_bert(model, config, tf_checkpoint_path): try: import re import numpy as np import tensorflow as tf except ImportError: logger.error( "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " "https://www.tensorflow.org/install/ for installation instructions." 
) raise tf_path = os.path.abspath(tf_checkpoint_path) logger.info(f"Converting TensorFlow checkpoint from {tf_path}") # Load weights from TF model init_vars = tf.train.list_variables(tf_path) names = [] arrays = [] for name, shape in init_vars: logger.info(f"Loading TF weight {name} with shape {shape}") array = tf.train.load_variable(tf_path, name) names.append(name) arrays.append(array) for name, array in zip(names, arrays): name = name.split("/") # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v # which are not required for using pretrained model if any( ["adam_v", "adam_m", "global_step", "bad_steps", "global_step", "good_steps", "loss_scale", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "save_counter", ".OPTIMIZER_SLOT"] for n in name) or \ name[0] == "optimizer": # n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"] # for n in name # ): logger.info(f"Skipping {'/'.join(name)}") continue if ".OPTIMIZER_SLOT" in name: idx = name.index(".OPTIMIZER_SLOT") name = name[:idx] elif ".ATTRIBUTES" in name: idx = name.index(".ATTRIBUTES") name = name[:idx] print(name) pointer = model for m_name in name: if re.fullmatch(r"[A-Za-z]+_\d+", m_name): scope_names = re.split(r"_(\d+)", m_name) else: scope_names = [m_name] if scope_names[0] == "kernel" or scope_names[0] == "gamma": pointer = getattr(pointer, "weight") elif scope_names[0] == "output_bias" or scope_names[0] == "beta": pointer = getattr(pointer, "bias") elif scope_names[0] == "output_weights": pointer = getattr(pointer, "weight") elif scope_names[0] == "squad": pointer = getattr(pointer, "classifier") elif scope_names[0] == "dense_output" or scope_names[0] == "bert_output": pointer = getattr(pointer, "output") elif scope_names[0] == "self_attention": pointer = getattr(pointer, "self") else: try: pointer = getattr(pointer, scope_names[0]) except AttributeError: logger.info("Skipping {}".format("/".join(name))) continue if len(scope_names) >= 2: num = int(scope_names[1]) pointer = pointer[num] if m_name[-11:] == "_embeddings": pointer = getattr(pointer, "weight") elif m_name == "kernel" or m_name == "gamma" or m_name == "output_weights": array = np.transpose(array) # try: # if pointer.shape != array.shape: # raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") # except AssertionError as e: # e.args += (pointer.shape, array.shape) # raise logger.info(f"Initialize PyTorch weight {name}") pointer.data = torch.from_numpy(array) return model ``` For convert_tf_to_pytorch function, I used below. 
``` import argparse import os import torch from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert from transformers.utils import logging logging.set_verbosity_info() def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path): # Initialise PyTorch model config = BertConfig.from_json_file(bert_config_file) print(f"Building PyTorch model from configuration: {config}") model = BertForPreTraining(config) # Load weights from tf checkpoint load_tf_weights_in_bert(model, config, tf_checkpoint_path) # Save pytorch-model os.makedirs(pytorch_dump_path) pytorch_dump_path = os.path.join(pytorch_dump_path, '0') print(f"Save PyTorch model to {pytorch_dump_path}") torch.save(model.state_dict(), pytorch_dump_path) if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--tf_checkpoint_path", default=None, type=str, required=True, help="Path to the TensorFlow checkpoint path." ) parser.add_argument( "--bert_config_file", default=None, type=str, required=True, help="The config json file corresponding to the pre-trained BERT model. \n" "This specifies the model architecture.", ) parser.add_argument( "--pytorch_dump_path", default=None, type=str, required=True, help="Path to the output PyTorch model." ) args = parser.parse_args() convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.bert_config_file, args.pytorch_dump_path) ``` Hope this help!<|||||>I use @Jwmc999 's method and successfully converted tensoeflow 2.x ckpt file to Pytorch.bin. Check the below jupyterbook if needed. [Converting-TF-ckpt-to-Pytorch-model](https://github.com/suchunxie/Converting-TF-ckpt-to-Pytorch-model)
transformers
392
closed
Add full language model fine-tuning
These scripts add language model fine-tuning that closely mirrors the training process in the original BERT repo. The old fine-tuning example has been renamed `simple_lm_finetuning.py`. The key difference is that the old script did not merge sentences when creating training examples, and so tended to create short training examples with lots of padding tokens - this is explained more fully in the README. The new scripts follow the BERT repo approach, which concatenates sentences from both documents to pack the training example up to `max_seq_len`. All the scripts for LM fine-tuning have been moved to a subfolder of `examples/` with an included README to explain what LM fine-tuning is and how to use them.
03-20-2019 17:48:29
03-20-2019 17:48:29
Also, build_py2 has failed because I deliberately did not include Py2 compatibility - it's only a few months from end-of-life now. If you really want me to, I can go back and include it, but we should be trying to let it go by now!<|||||>This is really great @Rocketknight1! Thanks for taking the time to make a very clear and informative README. I've fixed the formatting issues in the README, slightly re-worded it and added a link to it in the main README. I'm merging it now. Great job!
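The core idea of the new scripts - packing sentences into a training example up to `max_seq_len` instead of padding each short pair - can be sketched as follows. This is a simplified illustration, not the PR's actual code; `tokenizer` stands for any BERT tokenizer, and overly long single sentences are not truncated here.

```python
def pack_examples(sentences, tokenizer, max_seq_len=128):
    """Greedily concatenate tokenized sentences into examples of up to
    max_seq_len - 3 tokens, leaving room for [CLS] and two [SEP] tokens."""
    budget = max_seq_len - 3
    examples, current = [], []
    for sentence in sentences:
        tokens = tokenizer.tokenize(sentence)
        if current and len(current) + len(tokens) > budget:
            examples.append(current)
            current = []
        current.extend(tokens)
    if current:
        examples.append(current)
    return examples
```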
transformers
391
closed
Reproduce the results on CoLA
I am trying to reproduce the CoLA results reported in the BERT paper, but my numbers are far from the reported ones. My best MCC (BERT large) on dev is 64.79% and the test result is 56.9%, while the reported test result is 60.5%. The learning rate is 2e-5 and the total number of epochs is 5. For BERT base, the result is also lower by 3-5%. As the paper says, `for BERTLARGE we found that fine-tuning was sometimes unstable on small data sets (i.e., some runs would produce degenerate results), so we ran several random restarts and selected the model that performed best on the Dev set.` I also tried several restarts with different learning rates and random seeds, but there seems to be no improvement. I'm quite confused about this reproduction. Any suggestions would be greatly appreciated.
03-20-2019 04:22:10
03-20-2019 04:22:10
CoLA is probably one of the most unstable tasks for BERT. For us it mostly boiled down to running many seeds. If all you care about is a good pre-trained model checkpoint, we have a 65 / 61 run at https://github.com/zphang/bert_on_stilts <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@cooelf hi, can you reproduce the CoLA score of BERT now? I have been working on this recently but still can't reach the reported score on the test set, even after restarting with different seeds multiple times, and I also noticed that improvement on the dev set is not consistent with improvement on the test set at all.<|||||>Thanks for all the feedback! @scissorsy The results on dev and test are indeed inconsistent. I have changed the seeds, learning rate, warmup rates and max epochs, and the test result finally reached about 60% for some runs. Using multi-tasking or transferring from a bigger dataset such as MNLI seems to be more stable. <|||||>> Thanks for all the feedback! > @scissorsy The results on dev and test are indeed inconsistent. I have changed the seeds, learning rate, warmup rates and max epochs, and the test result finally reached about 60% for some runs. Using multi-tasking or transferring from a bigger dataset such as MNLI seems to be more stable. Thanks!<|||||>Hi @cooelf, which parameters did you change to get a better result? Thanks!
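In practice, the "several random restarts" recipe from the paper looks roughly like the sketch below. `train_one_run` and `evaluate_mcc` are hypothetical stand-ins for your own fine-tuning and dev-evaluation routines, not library functions, and the seed list is arbitrary.

```python
import random

import numpy as np
import torch

best_mcc, best_state = float("-inf"), None
for seed in (12, 42, 123, 2018, 31337):           # arbitrary seeds
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    model = train_one_run(lr=2e-5, num_epochs=5)  # hypothetical fine-tuning routine
    mcc = evaluate_mcc(model, dev_examples)       # hypothetical dev-set Matthews corr.
    if mcc > best_mcc:
        best_mcc, best_state = mcc, model.state_dict()
print(f"Best dev MCC over restarts: {best_mcc:.4f}")
```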
transformers
390
closed
'NoneType' object with constructor
I run the code below and often get a 'NoneType' object back. (I usually run it with multiprocessing.) ```python model = BertModel.from_pretrained('bert-base-uncased') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') ```
03-19-2019 13:16:16
03-19-2019 13:16:16
Me too. It is probably a network connection problem.<|||||>The network connection check has been relaxed in the now-merged #500. It will be included in the next PyPI release (probably next week). In the meantime you can install from `master`.<|||||>@thomwolf Thank you.<|||||>The new release is on PyPI!
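Until the relaxed check is released, a defensive pattern is to pin a local cache directory and fail loudly when loading fails. This is a sketch only: the cache path is an assumption, and it relies on the behaviour of older `pytorch_pretrained_bert` releases, where `from_pretrained` returns `None` rather than raising when the archive cannot be fetched.

```python
from pytorch_pretrained_bert import BertModel, BertTokenizer

CACHE_DIR = "/path/to/bert_cache"  # assumption: any writable local directory

model = BertModel.from_pretrained("bert-base-uncased", cache_dir=CACHE_DIR)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", cache_dir=CACHE_DIR)

if model is None or tokenizer is None:
    # Fail here instead of hitting a confusing 'NoneType' error later on.
    raise RuntimeError(
        "Failed to load 'bert-base-uncased' (network or cache problem); "
        "retry or point cache_dir at a pre-populated directory."
    )
```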
transformers
389
closed
Fix cosine schedule
Fixes a problem in the cosine schedule similar to #327 and #324. By the way, do you think it would make sense to have [something like this](https://github.com/lukovnikov/pytorch-pretrained-BERT/blob/optim/pytorch_pretrained_bert/optimization.py) for both of your `optimization.py` files?
03-18-2019 14:18:05
03-18-2019 14:18:05
Hi @lukovnikov, yes I think the various schedules you have added to your fork are very nice! Do you want to add them in this PR as well? Otherwise, I'll merge it.<|||||>Merging it for now. Thanks @lukovnikov <|||||>Hi, sorry, lost track of this, will make a new PR soon.
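For reference, a warmup-then-cosine learning-rate multiplier can be written as a standalone function like the one below. This is a generic sketch, not the library's exact implementation or the fix in this PR.

```python
import math

def warmup_cosine(progress, warmup=0.1):
    """progress: fraction of total training steps completed, in [0, 1].
    Linear warmup to 1.0, then cosine decay back to 0.0."""
    if progress < warmup:
        return progress / warmup
    progress = (progress - warmup) / (1.0 - warmup)  # rescale remaining steps to [0, 1]
    return 0.5 * (1.0 + math.cos(math.pi * progress))
```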
transformers
388
closed
Added remaining GLUE tasks to 'run_classifier.py'
Also added metrics used in the GLUE paper for each task.
03-17-2019 12:38:27
03-17-2019 12:38:27
Hi @ananyahjha93, Thanks for this PR. Do you have some results from fine-tuning BERT on the other tasks? Also, I think we should add some details on the available tasks in the readme as well.<|||||>@thomwolf I have added results on GLUE dev set in the README and details on how to run any GLUE task. But, I have also added a warning stating that all GLUE tasks have not been tested with half-precision training. With the new tasks and metrics being added, there should be one round of tests with half-precision training as well. Unfortunately, I do not have direct access to a V100 or any of the RTX cards in order to run that. Also, for some reason the previous pull request was passing each build_py2 test but now it is failing the 'OpenAIGPTTokenizationTest'. I only added changes to the README in my latest commit.<|||||>This looks great, thanks @ananyahjha93!
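The per-task metrics follow the GLUE paper (Matthews correlation for CoLA, F1 plus accuracy for MRPC/QQP, Pearson/Spearman for STS-B, accuracy elsewhere). A compact sketch of how they can be computed with `scipy`/`scikit-learn` is shown below; it mirrors the general idea rather than the exact helper added in the PR.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import f1_score, matthews_corrcoef

def glue_metrics(task, preds, labels):
    preds, labels = np.asarray(preds), np.asarray(labels)
    if task == "cola":
        return {"mcc": matthews_corrcoef(labels, preds)}
    if task in ("mrpc", "qqp"):
        return {"acc": (preds == labels).mean(),
                "f1": f1_score(y_true=labels, y_pred=preds)}
    if task == "sts-b":
        return {"pearson": pearsonr(preds, labels)[0],
                "spearmanr": spearmanr(preds, labels)[0]}
    # MNLI, QNLI, RTE, SST-2, WNLI: plain accuracy
    return {"acc": (preds == labels).mean()}
```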