Dataset columns (type and value/length range):

| column | dtype | min | max |
|---|---|---|---|
| repo | string (1 class) | | |
| number | int64 | 1 | 25.3k |
| state | string (2 classes) | | |
| title | string (length) | 1 | 487 |
| body | string (length) | 0 | 234k |
| created_at | string (length) | 19 | 19 |
| closed_at | string (length) | 19 | 19 |
| comments | string (length) | 0 | 293k |
transformers
787
closed
How to use the BERT QA model for predictions?
Hi, can you give sample code showing how to use the BERT QA model to predict an answer, given a text corpus and a question?
07-15-2019 00:50:48
07-15-2019 00:50:48
You can write your own code like the prediction phase [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/78462aad6113d50063d8251e27dbaadb7f44fbf0/examples/run_squad.py#L345) <|||||>@Swathygsb have you figured it out? I have the same use case as you and I'm struggling to understand the source code. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
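Expanding on the pointer above, here is a minimal prediction sketch using the pytorch-pretrained-bert API of that era. The checkpoint name, question and passage are placeholders for illustration; any SQuAD-fine-tuned BERT QA checkpoint would do.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForQuestionAnswering

MODEL = 'bert-large-uncased-whole-word-masking-finetuned-squad'  # example checkpoint
tokenizer = BertTokenizer.from_pretrained(MODEL)
model = BertForQuestionAnswering.from_pretrained(MODEL)
model.eval()

question = "Who created BERT?"
text = "BERT was created by researchers at Google AI Language in 2018."

# Build the [CLS] question [SEP] passage [SEP] input with segment ids 0/1.
q_tokens = tokenizer.tokenize(question)
t_tokens = tokenizer.tokenize(text)
tokens = ['[CLS]'] + q_tokens + ['[SEP]'] + t_tokens + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
segment_ids = torch.tensor([[0] * (len(q_tokens) + 2) + [1] * (len(t_tokens) + 1)])

with torch.no_grad():
    start_logits, end_logits = model(input_ids, token_type_ids=segment_ids)

# Take the highest-scoring start/end positions and decode the answer span.
start = start_logits.argmax(dim=1).item()
end = end_logits.argmax(dim=1).item()
print(' '.join(tokens[start:end + 1]))
```

A real implementation should also restrict the span to the passage and handle end < start, as the prediction phase in run_squad.py does.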
transformers
786
closed
New documentation for pytorch-transformers
07-12-2019 09:50:14
07-12-2019 09:50:14
transformers
785
closed
Implementation of 15% word masking causes a performance drop on short text
I found the same problem: the implementation differs from the TensorFlow one. The PyTorch implementation produces two extreme cases, especially for short sentences such as article titles (usually 10-20 characters): case 1, a sentence with too many '[MASK]' tokens; case 2, a sentence with no '[MASK]' at all. Both cases hurt performance: case 1 makes the prediction task too hard, and case 2 produces no loss. Given a corpus with an average sentence length of 10, the TensorFlow implementation generates exactly 1 '[MASK]' per sentence, but the PyTorch implementation gives: 0.85^10 = 0.19 probability of 0 '[MASK]', 0.15 * 0.85^9 * 10 = 0.34 probability of 1 '[MASK]', 0.15^2 * 0.85^8 * 45 = 0.27 probability of 2 '[MASK]', 0.15^3 * 0.85^7 * 120 = 0.13 probability of 3 '[MASK]', ... If we roughly consider sentences with about 15% '[MASK]' (1 or 2 masks here) to be appropriate, only 0.34 + 0.27 = 0.61 of the training cases are useful. We found this to be a very serious problem for short text.
07-12-2019 07:50:50
07-12-2019 07:50:50
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
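The arithmetic in the issue above is just a binomial distribution over the number of masked positions when each token is masked independently with probability 0.15; a quick check in plain Python (math.comb requires Python 3.8+):

```python
from math import comb

p, n = 0.15, 10  # per-token masking probability, sentence length in tokens

# P(k masked tokens) = C(n, k) * p^k * (1 - p)^(n - k)
for k in range(4):
    prob = comb(n, k) * p ** k * (1 - p) ** (n - k)
    print("P({} [MASK]) = {:.2f}".format(k, prob))
# Prints roughly 0.20, 0.35, 0.28, 0.13: only ~0.6 of 10-token sentences get 1-2 masks.
```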
transformers
784
closed
[bug] from_pretrained error with from_tf
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L721 : the weights_path should be archive_file, and letting from_tf accept a string would make it easier to load a fine-tuned model whose checkpoint name is something like model.ckpt-25000.meta.
07-12-2019 05:22:08
07-12-2019 05:22:08
Yeah this is solved in the coming release
transformers
783
closed
How to get the word vectors from the BERT pretrained model?
Could you please help me? I just want to get BERT's word vectors, but I can only get the encoder's output. How can I get the word vectors before the data is fed into the encoder? Thank you!
07-12-2019 01:53:41
07-12-2019 01:53:41
This will be possible in the new release out soon.<|||||>I find a method that can get the words embeddings.Thank you all the same! self.model = BertModel.from_pretrained(config.bert_path) self.word_emb = self.model.embeddings
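A slightly fuller sketch of the workaround quoted in the comment above: look the input ids up in the embedding layer directly, before position/segment embeddings and the encoder are applied. The model name and sentence are illustrative.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

tokens = tokenizer.tokenize("hello world")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Raw word-piece vectors, i.e. the lookup table before position/segment
# embeddings, LayerNorm and the transformer encoder.
word_vectors = model.embeddings.word_embeddings(input_ids)
print(word_vectors.shape)  # (1, num_tokens, hidden_size)
```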
transformers
782
closed
Why is the activation function tanh in BertPooler?
I found that the activation function in the BertPooler layer is tanh, but the BERT paper never mentions tanh; it says the gelu activation is used. So why is there a tanh here? Waiting for some explanation. Thanks. ``` class BertPooler(nn.Module): def __init__(self, config): super(BertPooler, self).__init__() self.dense = nn.Linear(config.hidden_size, config.hidden_size) self.activation = nn.Tanh() def forward(self, hidden_states): # We "pool" the model by simply taking the hidden state corresponding # to the first token. first_token_tensor = hidden_states[:, 0] pooled_output = self.dense(first_token_tensor) pooled_output = self.activation(pooled_output) return pooled_output ```
07-12-2019 01:23:47
07-12-2019 01:23:47
Because that's what Bert's authors do in the official TF code: https://github.com/google-research/bert/blob/bee6030e31e42a9394ac567da170a89a98d2062f/modeling.py#L231<|||||>Just wanted to point out for future reference the motivation has been answered by the original BERT authors in [[this GitHub issue]](https://github.com/google-research/bert/issues/43).
transformers
781
closed
Clean up input embeddings resizing and weights tying
Still need to add tests on these features
07-11-2019 22:05:10
07-11-2019 22:05:10
I have added a test suite that tests both the `tie_weights` function as well as the `resize_token_embeddings`<|||||># [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=h1) Report > Merging [#781](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/50e62a4cb4d503e3559b88838b8cf9f745fef516?src=pr&el=desc) will **decrease** coverage by `0.23%`. > The diff coverage is `93.05%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## xlnet #781 +/- ## ========================================= - Coverage 78.84% 78.6% -0.24% ========================================= Files 35 34 -1 Lines 6092 6122 +30 ========================================= + Hits 4803 4812 +9 - Misses 1289 1310 +21 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `78.79% <100%> (+0.44%)` | :arrow_up: | | [pytorch\_transformers/tests/modeling\_xlnet\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxuZXRfdGVzdC5weQ==) | `95.86% <100%> (+0.08%)` | :arrow_up: | | [...rch\_transformers/tests/modeling\_transfo\_xl\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfdHJhbnNmb194bF90ZXN0LnB5) | `94.33% <100%> (+0.1%)` | :arrow_up: | | [pytorch\_transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.48% <100%> (+0.3%)` | :arrow_up: | | [pytorch\_transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.69% <100%> (+0.17%)` | :arrow_up: | | [pytorch\_transformers/tests/modeling\_gpt2\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfZ3B0Ml90ZXN0LnB5) | `84.21% <66.66%> (-3.29%)` | :arrow_down: | | [pytorch\_transformers/tests/modeling\_openai\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfb3BlbmFpX3Rlc3QucHk=) | `84.21% <66.66%> (-0.79%)` | :arrow_down: | | [pytorch\_transformers/tests/modeling\_xlm\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxtX3Rlc3QucHk=) | `72.13% <75%> (ø)` | :arrow_up: | | [pytorch\_transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `75.62% <75%> (-5.39%)` | :arrow_down: | | 
[pytorch\_transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `57.32% <76.92%> (+0.32%)` | :arrow_up: | | ... and [14 more](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=footer). Last update [50e62a4...2918b7d](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
780
closed
Fail to run finetune_on_pregenerated.py
Hi, I am fine tuning BERT for my own data set. Pregenerate training data was smooth but when I run finetune_on_pregenerated.py I got the following KeyError: 2019-07-11 22:53:04,151: ***** Running training ***** 2019-07-11 22:53:04,151: Num examples = 35832 2019-07-11 22:53:04,151: Batch size = 32 2019-07-11 22:53:04,152: Num steps = 1119 2019-07-11 22:53:04,156: Loading training examples for epoch 0 Training examples: 0%| | 0/12078 [00:00<?, ?it/s] Traceback (most recent call last): File "finetune-hugging.py", line 348, in <module> main() File "finetune-hugging.py", line 297, in main num_data_epochs=num_data_epochs, reduce_memory=args.reduce_memory) File "finetune-hugging.py", line 105, in __init__ features = convert_example_to_features(example, tokenizer, seq_len) File "finetune-hugging.py", line 43, in convert_example_to_features input_ids = tokenizer.convert_tokens_to_ids(tokens) File "/anaconda3/lib/python3.7/site-packages/pytorch_pretrained_bert/tokenization.py", line 121, in convert_tokens_to_ids ids.append(self.vocab[token]) KeyError: 'Ad' Out[21]: 256 I could really use some help from you guys. Many Thanks!
07-11-2019 22:01:23
07-11-2019 22:01:23
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
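For anyone hitting the same KeyError: in this library version `convert_tokens_to_ids` looks each token up in the vocabulary directly (as the traceback shows), so it only accepts tokens that the BERT tokenizer itself produced. A hedged illustration of the mechanism, not a diagnosis of this particular corpus:

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

# A capitalised, un-word-pieced token such as 'Ad' is not in the uncased vocab,
# so a direct lookup raises KeyError in this library version.
try:
    tokenizer.convert_tokens_to_ids(['Ad'])
except KeyError as err:
    print("KeyError:", err)

# Running the text through the tokenizer first yields in-vocabulary pieces.
pieces = tokenizer.tokenize("Ad")
print(pieces, tokenizer.convert_tokens_to_ids(pieces))
```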
transformers
779
closed
Should close the SummaryWriter after using it
Really appreciate the good work on this package! I have tried to run the script [run_glue.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/xlnet/examples/run_glue.py). When testing with it, I found that some of the scalars added to the SummaryWriter did not appear in TensorBoard. I think the cause is that the code leaves the SummaryWriter unclosed.
07-11-2019 20:12:59
07-11-2019 20:12:59
Oh yes you are right, thanks it's fixed in the coming release.
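The fix being discussed is a one-liner: close (or scope) the writer so buffered scalars are flushed. A minimal sketch, assuming the tensorboardX SummaryWriter the example script imports (torch.utils.tensorboard has the same interface); the log directory and metric are placeholders.

```python
from tensorboardX import SummaryWriter

tb_writer = SummaryWriter(log_dir='runs/example')
for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder metric
    tb_writer.add_scalar('train/loss', loss, step)
tb_writer.close()  # flushes pending events; without it some scalars may never appear
```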
transformers
778
closed
Order of tokens in vocabulary of German model
The vocabulary for the German model ('bert-base-german-cased') has the token '[unused3001]' at position 0 (and the '[PAD]' token at position 1). However, the BertEmbedding has padding_idx=0 as usual. Is this behaviour intended and if so would it be possible to get some insight into the rationale behind it?
07-11-2019 14:02:16
07-11-2019 14:02:16
@tholor and @timoeller may have some insights on these<|||||>Hey Sebastian, thanks for using the German Bert and digging into its details. The mysterious [unused3001] token was actually a special comma symbol to get rid of [UNK] tokens in some of our training texts. But we covered it up later on in the process + didn't anticipate it would be coming back to us : ) So agreed, it is unwanted behaviour. Though TL;DR, we don't believe it is impacting either pretraining or downstream task training. Apparently the token at index 0 (= [unused3001]) is used as padding token in TF Bert and pytorch Bert and the implementations do not really care if it is called [unused3001] [PAD] or [something]. To be a bit more intuitive we now swapped [unused3001] and [PAD] in the vocab files (pytorch and TF) only. Might be that future code somehow substitutes "[PAD]" input strings, which could cause problems. The only thing that seems worrisome to us is that the embedding values for this padding token are non-zero (and change over the course of training) for our German Bert but also for Googles open sourced models. I tried to check how the padding embedding is handled in TF but am not familiar with debugging there... Maybe you want to dig more into it and raise an issue in the original TF Bert repro? Maybe this closed issue could be related to a rather unwanted padding embedding handling: https://github.com/google-research/bert/issues/113 Hope that helps, good luck!<|||||>Thanks a lot for the input @Timoeller (not quite sure who Christian is, though ;) ). I also got the feeling that it doesn't really impact downstream applications (NER in this case). At least not heavily. I'll do some more experiments and raise an issue with the original repo if it feel it is warranted. Thanks again and all the best, Sebastian <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
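A small sketch of how to inspect the behaviour discussed above, comparing the vocabulary index of [PAD] with the embedding layer's padding_idx; treat it as a diagnostic aid, not an official check.

```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')
model = BertModel.from_pretrained('bert-base-german-cased')

# Which index the vocab assigns to [PAD] vs. which index the embedding pads on.
print('[PAD] vocab index:', tokenizer.vocab['[PAD]'])
print('embedding padding_idx:', model.embeddings.word_embeddings.padding_idx)
# As noted above, the row at the padding index is generally non-zero in the checkpoint.
print('padding row all zeros:',
      bool((model.embeddings.word_embeddings.weight[0] == 0).all()))
```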
transformers
777
closed
Working GLUE Example for XLNet (STS-B)
Same as #776 but let's merge it on XLNet for the moment. `run_glue.py` is now a single script able to train BERT, XLNet and XLM on all GLUE tasks. Example for XLNet: ```bash CUDA_VISIBLE_DEVICES=0,1,2,3 python ./examples/run_glue.py --do_train --task_name=sts-b --data_dir=${GLUE_DIR}/STS-B --output_dir=./proc_data/sts-b-110 --max_seq_length=128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --max_steps=1200 --model_name=xlnet-large-cased --overwrite_output_dir --overwrite_cache --warmup_steps=120 ``` These hyper-parameters (same as the original one) give a pearsonr > 0.918.
07-11-2019 13:43:37
07-11-2019 13:43:37
transformers
776
closed
Working GLUE Example for XLNet (STS-B)
`run_glue.py` is now a single script able to train BERT, XLNet and XLM on all GLUE tasks. Example for XLNet: ```bash CUDA_VISIBLE_DEVICES=0,1,2,3 python ./examples/run_glue.py --do_train --task_name=sts-b --data_dir=${GLUE_DIR}/STS-B --output_dir=./proc_data/sts-b-110 --max_seq_length=128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --max_steps=1200 --model_name=xlnet-large-cased --overwrite_output_dir --overwrite_cache --warmup_steps=120 ``` These hyper-parameters (same as the original one) give a pearsonr > 0.918.
07-11-2019 13:41:50
07-11-2019 13:41:50
transformers
775
closed
fix typo in readme: extract_classif.py ==> extract_features.py
There seems to be a typo in the `README.md` file in Section `Example` (as shown in the following figure), I guess the script name should be `extract_features.py`. ![123456](https://user-images.githubusercontent.com/2620608/61047755-c865eb80-a412-11e9-9060-3e9f1e423d53.png)
07-11-2019 11:38:03
07-11-2019 11:38:03
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=h1) Report > Merging [#775](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/78462aad6113d50063d8251e27dbaadb7f44fbf0?src=pr&el=desc) will **decrease** coverage by `0.1%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #775 +/- ## ========================================== - Coverage 61.5% 61.39% -0.11% ========================================== Files 19 19 Lines 4026 4025 -1 ========================================== - Hits 2476 2471 -5 - Misses 1550 1554 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/file\_utils.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvZmlsZV91dGlscy5weQ==) | `66.44% <0%> (-1.35%)` | :arrow_down: | | [pytorch\_pretrained\_bert/optimization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvb3B0aW1pemF0aW9uLnB5) | `73.52% <0%> (-0.74%)` | :arrow_down: | | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `81.91% <0%> (-0.54%)` | :arrow_down: | | [pytorch\_pretrained\_bert/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `31.85% <0%> (-0.19%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=footer). Last update [78462aa...b72f755](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
774
closed
XLNet text generation ability
Really appreciate the good work on implementing XLNet! I tried running the [XLNet text generation example](https://github.com/huggingface/pytorch-pretrained-BERT/blob/xlnet/examples/generation_xlnet.py), but the generated text quality is really low. The tricks used by https://github.com/rusiaaman/XLnet-gen need to be added to the example to generate good samples. --- ... Or is it because the PyTorch version of XLNet is not fully working yet?
07-11-2019 02:57:50
07-11-2019 02:57:50
Indeed, I've now added the text padding trick of Aman (add some padding text to have longer inputs) and the quality is really a lot higher. Will merge the xlnet branch in master and release on Monday.
transformers
773
closed
Sphinx doc, XLM Checkpoints
The updated Sphinx documentation with additional pages, fixed links, and a whole new HuggingFace-based theme. Additionally, patched the XLM weights conversion script and added 5 new checkpoints for XLM.
07-10-2019 23:05:25
07-10-2019 23:05:25
transformers
772
closed
Cannot load 'bert-base-german-cased'
`tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')` **Output:** > Model name 'bert-base-german-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'bert-base-german-cased' was a path or url but couldn't find any file associated to this path or url.
07-10-2019 22:48:48
07-10-2019 22:48:48
Hi @laifi, I cannot reproduce this issue. Are you sure that you run with the latest code from master branch? It looks suspicious to me that `tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')` doesn't find the model. Can you please check if you have [the according line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/78462aad6113d50063d8251e27dbaadb7f44fbf0/pytorch_pretrained_bert/tokenization.py#L37) in your PRETRAINED_VOCAB_ARCHIVE_MAP? For your second approach with downloaded files: - be aware that model packaging changed lately from archives to individual files for vocab, model and config (see [here](https://github.com/huggingface/pytorch-pretrained-BERT/pull/688#issuecomment-502991015)). If you really want to download manually you should download the [.bin](https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-pytorch_model.bin), [bert_config.json](https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-config.json) and the [vocab file](https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt) to a folder called "bert-base-german-cased" - `from_pretrained` expects a model name or path not a .bin . You should try: BertTokenizer.from_pretrained('YOUR_PATH_TO/bert-base-german-cased') Hope that helps!<|||||>Thank you @tholor , i installed the package with pip and i cannot find 'bert-german-cased' in PRETRAINED_VOCAB_ARCHIVE_MAP Now , i tried to reinstall the package from source and it's working . <|||||>@laifi I am keep getting the same error as the one that you got: > UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte I also tried to reinstall it, how did you fix it? <|||||>> @laifi I am keep getting the same error as the one that you got: > > > UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte > > I also tried to reinstall it, how did you fix it? @shaked571 , i have just uninstalled the pip package and installed it again from source (try to not keep any cache for the package). **PS: the issue is fixed in the last migration from pytorch-pretrained-bert to pytorch-transformers .**<|||||>Hi, I also run into the same issue when I try this piece of code in google colab. tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')<|||||>Hi, I also have the same issue. Using ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased") ``` solves the problem for me
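A sketch of the manual-download route from the comment above, assuming the three files were saved into one local folder; the exact expected file names (bert_config.json vs config.json) vary between library versions, so check the version you have installed.

```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Assumed layout: pytorch_model.bin, bert_config.json and vocab.txt,
# renamed from the downloaded bert-base-german-cased-* files.
local_dir = './bert-base-german-cased'

tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertModel.from_pretrained(local_dir)
```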
transformers
771
closed
Performance dramatically drops down without training.
I use run_classifier and run_squad as shown in the README. If I remove `--do_train` (I already tuned the model and just want to evaluate one more time, or with a different development set), I expect the result to be the same, but performance drops. For example, SQuAD: with training: `{"exact_match": 81.35288552507096, "f1": 88.49520505241821}` without training: `{"exact_match": 0.21759697256385999, "f1": 7.391520686954715}` I tried my own processor with binary classification (not up-to-date code, though) and without training only the value `1` was predicted. Thank you in advance for any comments.
07-10-2019 19:19:38
07-10-2019 19:19:38
If you want to evaluate only, you have to set `--output_dir` to the path of your previously trained model. Otherwise, the script will use the original model.
transformers
770
closed
How can I load a fine-tuned model?
I fine-tuned a new model by running pregenerate_training_data.py and finetune_on_pregenerated.py, and the output is saved as pytorch_model.bin. How do I load the model to run the regular run_classifier.py predictions? To which files do I have to add code?
07-10-2019 16:03:27
07-10-2019 16:03:27
you can use the path to the folder containing your fine-tuned model as `--bert_model`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
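In code, the same idea looks roughly like this; the directory name follows the fine-tuning script's output, and num_labels is whatever your classification task needs. The folder should contain pytorch_model.bin plus the config (copy vocab.txt alongside if the script did not write one).

```python
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

finetuned_dir = './finetuned_lm/'  # output_dir of finetune_on_pregenerated.py

tokenizer = BertTokenizer.from_pretrained(finetuned_dir)
model = BertForSequenceClassification.from_pretrained(finetuned_dir, num_labels=2)
model.eval()
```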
transformers
769
closed
XLNet tensor on wrong device issue
```bash File "env.xlnet/lib/python3.6/site-packages/pytorch_transformers/modeling_xlnet.py", line 397, in rel_shift x = torch.index_select(x, 1, torch.arange(klen)) RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index' ``` I met this issue when using `pytorch-transformers==0.7.0` with multiple GPUs; it is quick-fixed by `x = torch.index_select(x, 1, torch.arange(klen).to(x.device))`
07-10-2019 04:56:24
07-10-2019 04:56:24
This model was WIP. Fixed now.
transformers
768
closed
GPT-2 language model decoding method
I am wondering what the official decoding method is when evaluating the language model. The doc says `run_gpt2.py` implements beam search, while to me it seems to be greedy search with sampling.
07-09-2019 15:55:04
07-09-2019 15:55:04
`run_gpt2` has top-K which is better than beam-search for high-entropy tasks like open-domain generation. The coming release example (currently on the xlnet branch to be merged with master on Monday) will have top-K and Nucleus sampling (see Holtzman et al. http://arxiv.org/abs/1904.09751)<|||||>Hi, Is it possible to include beam search decoding in ```run_generation.py``` ?<|||||>hope that beam search appears in run_generation.py<|||||>We'll add it. cc @rlouf <|||||>@thomwolf I see that run_generation.py has disappeared and beam_search does not exist anymore, nor in transformers/generate. Where could we find the implementation of batch beam_search in this repo ?<|||||>You can’t... so far. We are reworking the API for greedy decoding and sampling, and will work on beam search afterwards.
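For reference, the two sampling schemes mentioned above amount to a few lines of filtering on the next-token logits before sampling. This is a self-contained sketch of the idea, not the repository's exact implementation:

```python
import torch
import torch.nn.functional as F

def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('inf')):
    """Mask logits outside the top-k set and/or the nucleus (top-p) set. logits: (vocab_size,)."""
    logits = logits.clone()
    if top_k > 0:
        # Remove every logit smaller than the k-th largest one.
        kth_value = torch.topk(logits, top_k)[0][-1]
        logits[logits < kth_value] = filter_value
    if top_p > 0.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        # Keep the smallest prefix whose cumulative probability exceeds top_p.
        remove = cum_probs > top_p
        remove[1:] = remove[:-1].clone()
        remove[0] = False
        logits[sorted_idx[remove]] = filter_value
    return logits

# Usage: filter the logits, then sample the next token id.
logits = torch.randn(50257)  # fake GPT-2-sized next-token logits
filtered = top_k_top_p_filtering(logits, top_k=40, top_p=0.9)
next_token = torch.multinomial(F.softmax(filtered, dim=-1), num_samples=1)
print(next_token.item())
```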
transformers
767
closed
Documentation
Sphinx based documentation with Google style comments.
07-09-2019 14:53:16
07-09-2019 14:53:16
transformers
766
closed
Fine-tune XLNet
Can anybody guide me on how to fine-tune XLNet for a simple text classification task, or point me to any reference code? I am lost.
07-09-2019 13:23:07
07-09-2019 13:23:07
As far as I know, the pytorch code of XLNet is not completely ready now. But you could find it in the branch `xlnet` and the classifier code is nearly ready in the file `example/run_xlnet_classifier.py`. I have successfully fine-tuned it on the SST-2 task (which belongs to GLUE) with following args: ```shell python run_xlnet_classifier.py \ --data_dir ..\glue_data\SST-2 \ --task_name sst-2 \ --output_dir sst_model \ --do_train \ --do_eval \ --max_seq_length 128 \ --train_batch_size 64 \ --learning_rate 5e-6 ```<|||||>@SivilTaram Cant be fine tuned on external data ? is the tensorflow version ready ?<|||||>@AhmedBahaaElDinMohammed Sure you could fine-tune it on external data, which means you should process your data and construct train/validate `examples` as SST-2 does. You could see `example/utlis_glue.py` for more details to handle your external data :) The tensorflow is ready, you could refer to the original repo for help. This repo is only for pytorch version, thanks.<|||||>> I have successfully fine-tuned it on the SST-2 task (which belongs to GLUE) @SivilTaram Would it be possible to fine-tune it on SQuAD 2.0? or alternatively, convert a fine-tuned model from the original repo/tensorflow?<|||||>@edanweis Not ready now. Please wait the author to complete the awesome work :) Or you could watch the updates of PR [here](https://github.com/huggingface/pytorch-pretrained-BERT/pull/711).<|||||>Has anyone tried fp16 for xlnet? I tried it and found that the memory was half, but it was slower than fp32(even when I used the same GPU memory). Environment: v100, cuda 10.0, torch 1.1 The environment is ok, because I tried bert + fp16 and it was much faster than fp32. I thought it is the problem of torch.einsum, but I am not that sure. Guys, do you have the same problem ?<|||||>@SivilTaram Following the latest release 0.6.2, I am trying to convert my tf checkpoints: ``` export TRANSFO_XL_CHECKPOINT_PATH=home/edanweis/xlnet/model/squad export TRANSFO_XL_CONFIG_PATH=home/edanweis/xlnet/model/squad export FINETUNING_TASK=squad pytorch_transformers xlnet \ $TRANSFO_XL_CHECKPOINT_PATH \ $TRANSFO_XL_CONFIG_PATH \ $PYTORCH_DUMP_OUTPUT \ $FINETUNING_TASK \ ``` But getting `pytorch_transformers/__main__.py", line 111, in main FINETUNING_TASK) UnboundLocalError: local variable 'FINETUNING_TASK' referenced before assignment`<|||||>@SivilTaram Did you try to finetune XLNet with the last code (release 1.0) using examples/run_glue.py? Everything works but accuracy didn't change and every time is around 0.50? It looks like it didn't train at all. I used the following script: ``` export GLUE_DIR=/path/to/glue python ./examples/run_glue.py \ --model_type xlnet \ --model_name_or_path xlnet-large-cased \ --do_train \ --do_eval \ --evaluate_during_training \ --logging_steps 500 \ --save_steps 1000 \ --task_name=sst-2 \ --data_dir=${GLUE_DIR}/SST-2 \ --output_dir=./proc_data/sst-2 \ --max_seq_length=128 \ --per_gpu_eval_batch_size=8 \ --per_gpu_train_batch_size=8 \ --gradient_accumulation_steps=1 \ --max_steps=8000 \ --model_name=xlnet-large-cased \ --overwrite_output_dir \ --overwrite_cache \ --warmup_steps=120 ```<|||||>@avostryakov I do not yet. I guess you could explore if the loss decrease as expected? There should be loss logs, along with tensorboard logs.<|||||>@SivilTaram Evaluation loss isn't changed, training loss is increased. It looks like something wrong with the optimization process during training.<|||||>This issue has been automatically marked as stale because it has not had recent activity. 
It will be closed if no further activity occurs. Thank you for your contributions.
transformers
765
closed
Is it possible to fine-tune GPT2 on downstream tasks currently?
Is it possible to fine-tune GPT2 on downstream tasks currently?
07-09-2019 00:16:41
07-09-2019 00:16:41
Yes we could add this. You mean tasks like GLUE or SQuAD?<|||||>> Yes we could add this. You mean tasks like GLUE or SQuAD? Yes! exactly! Please add this, thanks!<|||||>@thomwolf Are you still working on the code to finetune the GPT2 language model (not classification task)? Thanks.<|||||>@experiencor @thomwolf also curious about GPT2 LM finetuning issue, thanks!<|||||>We'll add an example for fine-tuning the models (probably refactor the Bert's one at the same time) this month.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> We'll add an example for fine-tuning the models (probably refactor the Bert's one at the same time) this month. Did you add the example ? looking for an example of fine tuning gpt-2 for downstream tasks.<|||||>+1. Also interested.
transformers
764
closed
Adding extra inputs when fine-tuning BERT
I am trying to fine-tune BERT for a sequence classification task where in addition to the sequences, I have extra features such as the writer age, tags, etc. I want to use those extra features, and I was thinking about concatenating them to the input of the final linear layer. Is there a way of doing such a thing? If not, what is the best way for integrating extra features in the fine-tuning process?
07-08-2019 15:55:58
07-08-2019 15:55:58
You could try stacking a linear layer over-top of BERT that takes as input the BERT sequence representation + your features. You would have to fine-tune through all of BERT.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> I am trying to fine-tune BERT for a sequence classification task where in addition to the sequences, I have extra features such as the writer age, tags, etc. I want to use those extra features, and I was thinking about concatenating them to the input of the final linear layer. > Is there a way of doing such a thing? If not, what is the best way for integrating extra features in the fine-tuning process? Did you find a way to add extra features then fine-tuning BERT?
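A minimal sketch of the suggestion above: concatenate BERT's pooled output with the extra features and put a linear head on top, fine-tuning through all of BERT. The class and argument names are made up for illustration; the BERT calls follow the pytorch-pretrained-bert API.

```python
import torch
import torch.nn as nn
from pytorch_pretrained_bert import BertModel

class BertWithExtraFeatures(nn.Module):
    """BERT pooled output concatenated with hand-crafted features, then a linear classifier."""

    def __init__(self, num_extra_features, num_labels, bert_name='bert-base-uncased'):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden + num_extra_features, num_labels)

    def forward(self, input_ids, extra_features, token_type_ids=None, attention_mask=None):
        # pooled_output is the [CLS] representation after BertPooler.
        _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask,
                                     output_all_encoded_layers=False)
        combined = torch.cat([self.dropout(pooled_output), extra_features], dim=-1)
        return self.classifier(combined)

# Usage with dummy inputs: batch of 2 sequences, 3 extra features each.
model = BertWithExtraFeatures(num_extra_features=3, num_labels=2)
logits = model(torch.randint(0, 30522, (2, 16)), torch.randn(2, 3))
print(logits.shape)  # (2, 2)
```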
transformers
763
closed
'bert-large-uncased-whole-word-masking-finetuned-squad' CAN'T be reached
'bert-large-uncased-whole-word-masking-finetuned-squad' can't be reached from the address in tokenization.py: https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-vocab.txt
07-08-2019 09:24:16
07-08-2019 09:24:16
also ran into this. I think they forgot to upload the file/make it public. You can find the vocab file on the original google repo https://github.com/google-research/bert<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
762
closed
randrange() error when running pregenerate_training_data.py code in lm_finetuning
Hi, I am trying to run pregenerate_training_data.py code in lm_finetuning using a text file which has two documents ( each document has around 200 sentences ) I ran into this error: ``` Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex. Loading Dataset: 399 lines [00:00, 2214.07 lines/s] Epoch: 0%| | 0/3 [00:00<?, ?it/s] Traceback (most recent call last): | 0/1 [00:00<?, ?it/s] File "pregenerate_training_data.py", line 292, in <module> main() File "pregenerate_training_data.py", line 277, in main vocab_list=vocab_list) File "pregenerate_training_data.py", line 187, in create_instances_from_document random_document = doc_database.sample_doc(current_idx=doc_idx, sentence_weighted=True) File "pregenerate_training_data.py", line 52, in sample_doc sentence_index = randint(rand_start, rand_end-1) % self.cumsum_max File "/home/cloud/anaconda3/lib/python3.6/random.py", line 221, in randint return self.randrange(a, b+1) File "/home/cloud/anaconda3/lib/python3.6/random.py", line 199, in randrange raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width)) ValueError: empty range for randrange() (198,198, 0) ``` command line prompt looks like `python pregenerate_training_data.py --train_corpus=./ack_belief_training_testing/ack_belief_all_categories_data2.txt --bert_model=bert-base-uncased --do_lower_case --output_dir=./ack_belief_training_testing/pytorch_gen_data/ack_belief_all_categories_data2_train_data_3epochs/ --epochs_to_generate=3` What can be the issue? Can somebody help me through this?
07-08-2019 04:45:29
07-08-2019 04:45:29
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
761
closed
Help loading BioBERT weights
I have completed the following: **1. Downloaded pretrained BioBERT weights from their current release** **2. Convert TensorFlow checkpoints into Pytorch weights bin file using the following code** import os os.system( ' pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \ "/content/biobert_v1.1_pubmed/model.ckpt.index" \ "/content/biobert_v1.1_pubmed/bert_config.json" \ "/content/biobert_pytorch.bin" ' ) ` **3. I then tried to test whether I can load these weights. In order to do so, I tried the following code** state_dict = torch.load( "/content/biobert_pytorch.bin" ) model.load_state_dict(state_dict) **but I get the error** > IncompatibleKeys(missing_keys=[], unexpected_keys=[]) ***Please guide me***
07-07-2019 23:23:36
07-07-2019 23:23:36
Not really providing a solution here, but have you considered https://github.com/allenai/scibert instead? AllenAI provides PyTorch weights, and through tests they claim their model is superior https://arxiv.org/pdf/1903.10676.pdf on their suite of tasks. For that and for ease of use, it may be a valid alternative.<|||||>Hello, I am trying to figure out how to load the SciBert weights. I see that you can use ``` # Simple serialization for models and tokenizers model.save_pretrained('./directory/to/save/') # save model = model_class.from_pretrained('./directory/to/save/') # re-load tokenizer.save_pretrained('./directory/to/save/') # save ``` So my guess is to download them from here https://github.com/allenai/scibert#pytorch-models Untar them, then point to that directory ``` model = model_class.from_pretrained('DIRECTORY/TO/DOWNLOADED/UNZIPPED/SCIBERT/Pytorch.bin') ``` Unless there is more to that, the part I am confused about is that SciBert also has it's own vocab and 'vocab.txt' file. I am wondering how to point to that file, and not the default one. Edit Found the answer https://github.com/huggingface/pytorch-transformers/issues/69#issuecomment-443215315 you can just do a direct path to it<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
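Two notes for anyone following the same path. `IncompatibleKeys(missing_keys=[], unexpected_keys=[])` is simply the return value of `load_state_dict`; empty lists mean the weights loaded cleanly, so it is not an error. And rather than calling `torch.load` on the raw `.bin`, it is usually easier to put the converted file next to the config and vocab and load through `from_pretrained`. A sketch with an assumed folder layout:

```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Assumed layout after conversion:
#   biobert_v1.1_pubmed/pytorch_model.bin   (converted checkpoint, renamed)
#   biobert_v1.1_pubmed/bert_config.json    (from the BioBERT release)
#   biobert_v1.1_pubmed/vocab.txt           (from the BioBERT release)
biobert_dir = './biobert_v1.1_pubmed'

tokenizer = BertTokenizer.from_pretrained(biobert_dir)
model = BertModel.from_pretrained(biobert_dir)
model.eval()
```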
transformers
760
closed
Simple LM finetuning fails with RuntimeError: CUDA out of memory
I tried to run simple_lm_finetuning.py on my own data with the multilingual uncased model, and the script crashes with 'CUDA out of memory'. Can anyone tell me what I should do in this situation? I've already decreased the batch size from 32 to 2, but I still get this error. ![image](https://user-images.githubusercontent.com/37251686/60744362-3ecf9d00-9f7e-11e9-8c3c-7abacce122e6.png) Off-topic: does anybody know if I can use my own pretrained model in this script instead of one of the listed ones?
07-05-2019 20:41:42
07-05-2019 20:41:42
How much memory does your GPU have. You can check this by running `nvidia-smi`.<|||||>I also has same phenomena. Also, the learning time become slower and much GPU consumption occur, both of which I think is natural, regarding parameters BERT has. The substitutional way is that, no fine-tuning and dump. I mean, feed your sequence to Bert and dump your layers. In the training process of your task, simply load sequence with your dumped result of Bert.<|||||>Try `--reduce_memory `, it worked on mine with base multilingual uncased BERT with batch size= 4 on my single 2080Ti. ``` python3 finetune_on_pregenerated.py --pregenerated_data training/ --bert_model bert-base-multilingual-uncased --do_lower_case --output_dir finetuned_lm/ --epochs 3 --reduce_memory --train_batch_size 4 ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
759
closed
Release 0.7: pytorch-pretrained-bert => pytorch-transformers
Name change: `pytorch-pretrained-bert` => `pytorch-transformers` Standardize tokenization + tests Refactor examples and add tests for examples as well
07-05-2019 10:32:52
07-05-2019 10:32:52
transformers
758
closed
Release 0.7 - Add doc
Like #757 but let's point on the `xlnet` branch for now.
07-04-2019 15:09:27
07-04-2019 15:09:27
transformers
757
closed
Release 0.7 - Add a real doc
07-04-2019 15:06:54
07-04-2019 15:06:54
transformers
756
closed
Invalid Syntax Error trying to run pregenerate_training_data.py
I'm getting the following error and can't understand what's wrong ![image](https://user-images.githubusercontent.com/37251686/60671996-072cfc00-9e7d-11e9-9b73-b8b1f01652c9.png) Also, is it possible to further fine-tune an already fine-tuned model, i.e. the pytorch_model.bin that simple_lm_finetuning.py writes to its output folder?
07-04-2019 14:00:56
07-04-2019 14:00:56
Your snippet is too short to see what type of error there is, can you extract a larger one?<|||||>Hi yes ![image](https://user-images.githubusercontent.com/37251686/60735756-30bd5480-9f5d-11e9-940c-d39c73cb4084.png) <|||||>Pull the latest changes from master and report if that helps.<|||||>it has been disappeared in pregenerate script, and appeared in finetuning on pregenerated data script ![image](https://user-images.githubusercontent.com/37251686/60804690-b708af00-a186-11e9-9e21-024e745c8bdf.png) <|||||>You are getting these errors because you are using Python 3.5 and the code is making use of f-strings which are introduced in Python 3.6. You could try using Python 3.6 or change the source code replacing f-string with str.format syntax. @thomwolf The readme says repository supports Python 3.5+. Does that mean Python 3.5 is supported as well? If yes, I think we should change f-strings for the format syntax. <|||||>Well, only the library code supports Python 3.5+, I don't check the examples which are mostly contributed by the community. If you want to fix Python 3.5 support for the examples I'm happy to welcome a PR. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
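For anyone stuck on Python 3.5, the change suggested above is mechanical, for example:

```python
epoch, loss = 3, 0.127

# Python 3.6+ only (what the example scripts currently use):
print(f"Epoch {epoch}: loss = {loss:.3f}")

# Python 3.5-compatible equivalent:
print("Epoch {}: loss = {:.3f}".format(epoch, loss))
```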
transformers
755
closed
TorchScript trace comparison with different sizes
Adds a test to compare TorchScript traces with different batch sizes and sequence lengths.
07-03-2019 22:05:09
07-03-2019 22:05:09
transformers
754
closed
Get Attention Values for Pretrained Model
When using BertModel.from_pretrained, I am not able to have it also return the attention layers. Why does that not work? Am I doing something wrong?
07-03-2019 21:45:56
07-03-2019 21:45:56
You need to install the master version (not with pip or conda) : ``` git clone https://github.com/huggingface/pytorch-pretrained-BERT.git cd pytorch-pretrained-BERT python setup.py install ``` Then you can use it like this : ``` model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True, keep_multihead_output=True) model.eval() # turn off dropout layers attn = model(tokens)[0] ``` Tell me if I'm misinterpreting your problem<|||||>Thank you a lot for the help, I didn't expect this to only work on the current release. However, I think with this I found a problem in the BERT encoder module: ``` def forward(self, hidden_states, attention_mask, output_all_encoded_layers=True, head_mask=None): all_encoder_layers = [] all_attentions = [] for i, layer_module in enumerate(self.layer): hidden_states = layer_module(hidden_states, attention_mask, head_mask[i]) ``` The forward function by default gets `None` for the `head_mask` parameter. Then, however, it indexes it, which causes an error. I think it would be nice to handle this case.<|||||>Hi. I want to do something similar but with the **BertForQuestionAnswering** model. The BertModel is the general BERT model that is used to classify whether a sentence is the next sentence or not. I want to get the attention values for QuestionAnswering while I pass a new paragraph and a question as inputs. I want to use the **BertForQuestionAnswering** model (which is pretrained on SQuAD if I am not wrong) and get the self-attention values on the question words. Is it possible to achieve this in a similar way as mentioned above? **NOTE:** I know the above method gives attention values of the pre-trained model. I want to get attention values of the model when I feed a new input question to the model. Something similar to what can be done using [BertViz](https://github.com/jessevig/bertviz) (although I do not want to visualize attention, just want to get the values). Thanks.<|||||>Hi, this will be in the next release (release date sometime next week). There will be attention/hidden-state output options for all the models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
753
closed
`bert-base-uncased` works for CoLA, `bert-large-uncased` always predicts one class
I'm having an issue with CoLA where finetuning off of bert-large results in a model that only predicts one class. I make one change in configs to train the large model - I set `train_batch_size` to `16` for `bert-large-uncased`. These are the two training commands I use (missing do_lowercase, I know, but it's forced anyways): ``` python ./run_classifier.py \ --task_name CoLA \ --do_train \ --data_dir ./data/cola/ \ --bert_model bert-large-uncased \ --output_dir ./out/cola-finetune-large-uncased/ ``` ``` python ./run_classifier.py \ --task_name CoLA \ --do_train \ --data_dir ./data/cola/ \ --bert_model bert-base-uncased \ --output_dir ./out/cola-finetune-base-uncased/ ``` --- Now here are the results I get with base vs large models: Here are my results evaluating with plain bert-large-uncased ``` eval_loss = 0.6977422047745098 mcc = -0.05490997894843018 ``` Here are the results with bert-base-uncased ``` eval_loss = 1.013142795273752 mcc = 0.02904813156816523 ``` Now here are the results with fine-tuning on bert-base-uncased: ``` eval_loss = 0.590644522203189 mcc = 0.5313406823271718 ``` Pretty much reflects what the paper says, nice. But when I do the exact same process, training on bert-large-uncased and with a slightly smaller batch size (16 instead of 32) b/c of GPU memory limitations, I get these results: ``` eval_loss = 0.61876411058686 mcc = 0.0 ``` Just to be clear, in these examples I only change the `bert_model` flag from `bert-large-uncased` to `bert-base-uncased` and change the batch size when training, no other changes at all. I feel I must be doing something wrong. I'm using `run_classifier.py` from this repo. Any ideas what could be the problem? I've read there's some instability with BERT-Large for small datasets such as CoLA, but surely it doesn't degenerate this much? If I understand correctly mcc = 0 indicates random guessing... Or is that actually the case, and I just need to run it more times and cross my fingers for a non-degenerate run?
07-03-2019 21:23:12
07-03-2019 21:23:12
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
752
closed
How to set the initial learning rate when using BertAdam?
I set the BertAdam learning rate to the default value from the args (3e-5). Stepping through BertAdam and printing lr_scheduled, I can see that the actual lr is very small over the whole training process (between 0 and 1 times 3e-5), which makes the loss decrease very slowly. When I set the initial learning rate to 0.1, the loss decreases much faster. So what is the right way to set the learning rate parameter for BertAdam?
07-03-2019 06:57:04
07-03-2019 06:57:04
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
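For context: BertAdam applies a warmup-then-linear-decay schedule on top of the learning rate you pass in, which is why the effective lr stays between 0 and 1 times 3e-5; raising the base lr to 0.1 will make the loss move faster but is far too high for fine-tuning BERT. A typical setup in the spirit of the example scripts; the step count is a placeholder you compute from your data:

```python
from pytorch_pretrained_bert import BertForSequenceClassification
from pytorch_pretrained_bert.optimization import BertAdam

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

num_train_optimization_steps = 1000  # placeholder: batches per epoch * epochs
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
grouped_parameters = [
    {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
     'weight_decay': 0.01},
    {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0},
]

optimizer = BertAdam(grouped_parameters,
                     lr=3e-5,      # peak learning rate reached after warmup
                     warmup=0.1,   # fraction of total steps spent warming up
                     t_total=num_train_optimization_steps)
```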
transformers
751
closed
Slower and more memory hungry than the TensorFlow BERT?
Hi pytorch-pretrained-BERT developers, I have been using TensorFlow BERT since it came out, recently I wanted to switch to PyTorch because it is a great library. For this, I did a bunch of tests to compare training specs between Google's TF BERT and your implementation. To my surprise, this is a lot slower and can only afford small batch size before OOM error. I really want to know if this is a correct observation because I was really hoping to transition to PyTorch. Here is my setup: 1. Custom size of 3 layer by 320 hidden dimension. 2. English uncased vocab. 3. Sequence length is set to be constant 125. 4. Running on Tesla P40. 5. Running finetune_on_pregenerated.py 6. I changed finetune_on_pregenerated.py a little to just initialize a blank model of my size. Speed difference: * TensorFlow: 809 sentences/s on 1 GPU. * TensorFlow: 2350 sentences/s on 4 GPUs. * PyTorch: 275 sentences/s on 1 GPU. * PyTorch: 991 sentences/s on 4 GPUs. Memory: * My P40 has 22GB memory. * TensorFlow can run batch size of 1000 or more (didn't probe upper limit). * PyTorch is OOM for batch size 250 or above. OK with 125. * I ran 30 epochs on a test data set of only 17MB. It shouldn't be a data loading problem. I want to know if there is anything that I could have done wrong? Thank you very much! n
07-03-2019 00:30:47
07-03-2019 00:30:47
Yes, this library is not made for training a model from scratch. You should use one of the libraries I referred to here: https://github.com/huggingface/pytorch-pretrained-BERT/issues/543#issuecomment-491207121 I might give it a look one day but not in the short-term.<|||||>@thomwolf Thank you so much for the info! :) Just to share, I quickly did a benchmark of XLM (this one fits my needs the most out of your three recommendations). **Sentences/s (for the specs I mentioned above):** Batch size | Official TF BERT | HuggingFace PyTorch BERT | XLM PyTorch BERT -- | -- | -- | -- 128 over 1 GPU | 610 | 288 | 575 250 over 1 GPU | 647 | OOM | 625 500 over 1 GPU | 665 | OOM | 650 700 over 1 GPU | N/A | OOM | OOM 900 over 1 GPU | 667 | OOM | OOM 1000 over 1 GPU | OOM | OOM | OOM 128 over 4 GPUs | 889 (1.5x) | 779 (2.7x) | N/A 512 over 4 GPUs | 1522 (2.3x) | 1018 (3.?x) | N/A 1000 over 4 GPUs | 1798 (2.?x) | OOM | N/A 2000 over 4 GPUs | 1946 (2.?x) | OOM | N/A 3600 over 4 GPUs | 1991 (3.0x) | OOM | N/A 4000 over 4 GPUs | OOM | OOM | N/A Note: Only spent 2 hours on XLM, not sure if I set the vocab to be exactly the same size as the others, but they should be in the same ballpark. I haven't got a chance to benchmark the multi-GPU XLM. But in general, it looks like: 1. The TensorFlow implementation uses memory more efficiently. 2. PyTorch's multi-GPU scaling seems better. 3. PyTorch itself is not slower than TF. n<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi @thomwolf , I was trying to fine-tune pytorch-transformers's gpt2 (124M) on a V100 16GB GPU. But I am not able to accommodate more than the batch_size of 2. I am using seq-length of 1024 tokens. This might be evident from above comments but I am new to training NNs so wanted to confirm if fine tuning would also cause OOM as in training from scratch? If so, then is only option available to finetune gpt2 is to use original tensorfolow implementation? Thanks<|||||>Hi @SKRohit, with the GPT-2 model you can either fine-tune it with a batch size of 4 and a sequence of 512 tokens, or a batch size of 2 and a sequence of 1024 tokens, like what you've tried. We have had good results with a batch size of 4 and a sequence of 512 in our experiments. If you want a bigger batch size, you can set up gradient accumulation, which would allow you to put larger to much larger batch sizes. You can find an example of gradient accumulation applied to fine-tuning in our [language model fine-tuning example](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_lm_finetuning.py).<|||||>Yes, @LysandreJik I am using gradient accumulation. I found max possible batch_size = 2 to be too small given this [comment](https://github.com/openai/gpt-2/issues/150#issuecomment-529153176) so asked to make sure there is no error in my code or any issue with my gcloud gpu. Also, have you finetuned gpt2 architectures using mixed_precision (mp) training? Did you find any difference in performance of mp trained gpt2 in comparison to without mp? And I am referring to fine-tuning script provided in `pytorch_transformers` repo 👍 . Thanks. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. 
<|||||>mark<|||||>@thomwolf What is the bottleneck in HuggingFace transformers pretraining comparing to Tensorflow and other PyTorch implementations?<|||||>I also find the Transformers library to be more memory hungry. It seems to be even slower with Pytorch than TF, too. On the flip side, it is really easy to use. I guess if you have big datasets and 2x slower is critically insufficient, it's not a good option. But if the difference is just half a day or less, it may not be that bad.
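Since gradient accumulation is recommended above, here is the pattern in isolation; a toy model and random data stand in for a transformer and a real DataLoader. The effective batch size is the per-step batch size times accumulation_steps.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                              # stand-in for a transformer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
accumulation_steps = 4                                # effective batch = 4 * 8 here

optimizer.zero_grad()
for step in range(100):
    x = torch.randn(8, 10)                            # micro-batch
    y = torch.randint(0, 2, (8,))
    loss = loss_fn(model(x), y) / accumulation_steps  # scale so gradients average correctly
    loss.backward()                                   # gradients accumulate across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```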
transformers
750
closed
Incorrect training loss scaling factor in examples/run_classifier.py?
In [examples/run_classifier.py](https://github.com/huggingface/pytorch-pretrained-BERT/commit/87b9ec3843f7f9a81253075f92c9e6537ecefe1c), the overall 'loss' is produced as 'tr_loss/global_step' (instead of 'tr_loss/nb_tr_steps'). Is this behavior correct? @mprouveur made the change in this [commit](https://github.com/huggingface/pytorch-pretrained-BERT/commit/87b9ec3843f7f9a81253075f92c9e6537ecefe1c). I'm wondering because 'global_step' is never reset after a training epoch, while 'tr_loss' is reset every training epoch. So even if 'tr_loss' stays constant, 'loss' will decrease over more training iterations, given the increasingly large denominator ('global_step'). If this is the intended variable, maybe 'nb_tr_steps' should be removed? It looks unused throughout the code currently. At any rate, it's only a 2-line fix, and I believe it only affects the logging behavior.
07-02-2019 20:39:55
07-02-2019 20:39:55
You are right Ethan. I'm refactoring the examples which were a bit rotten, let's include this fix as well.<|||||>Great! Either way the examples are a great starting point :) I'm also wondering if tensorboard is [only logging](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L345) the training loss for the last forward pass for a batch (if several are required / when using gradient accumulation)? A fix would be to maintain a variable ```tr_batch_loss``` (similar to ```tr_loss```) for each full training batch (reset after each parameter update) and log that instead.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
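The fix under discussion, shown in isolation with placeholder data so the bookkeeping is explicit: average over the steps taken in the current epoch, not over the ever-growing global_step.

```python
import torch

global_step = 0
train_batches = [torch.randn(4, 3) for _ in range(10)]  # placeholder data

for epoch in range(2):
    tr_loss, nb_tr_steps = 0.0, 0                       # both reset every epoch
    for batch in train_batches:
        loss = batch.pow(2).mean()                      # placeholder for the model's loss
        tr_loss += loss.item()
        nb_tr_steps += 1
        global_step += 1                                # never reset across epochs
    # Dividing by global_step would shrink the reported loss over time even if
    # tr_loss stayed constant; nb_tr_steps gives the true per-epoch average.
    print('epoch', epoch, 'avg loss', tr_loss / nb_tr_steps)
```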
transformers
749
closed
AttributeError: 'BertModel' object has no attribute 'bert'
I am using Google's Bert tensorflow checkpoints to create a model from .from_pretrained as shown below- ` model = BertModel.from_pretrained('/content/uncased_L-12_H-768_A-12',from_tf=True) ` But I am getting the following error- ` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-9-63e759d1aab4> in <module>() 1 bert_version = 'bert-base-uncased' ----> 2 model = BertModel.from_pretrained('/content/uncased_L-12_H-768_A-12',from_tf=True) 3 tokenizer = BertTokenizer.from_pretrained(bert_version) 4 sentence_a = "I went to the store." 5 sentence_b = "At the store, I bought fresh strawberries." 2 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 537 return modules[name] 538 raise AttributeError("'{}' object has no attribute '{}'".format( --> 539 type(self).__name__, name)) 540 541 def __setattr__(self, name, value): AttributeError: 'BertModel' object has no attribute 'bert' ` The upper Attribute error code is follows after loading the bert layers like below - ` Converting TensorFlow checkpoint from /content/uncased_L-12_H-768_A-12/model.ckpt Loading TF weight bert/embeddings/LayerNorm/beta with shape [768] Loading TF weight bert/embeddings/LayerNorm/gamma with shape [768] Loading TF weight bert/embeddings/position_embeddings with shape [512, 768] Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 768] Loading TF weight bert/embeddings/word_embeddings with shape [30522, 768] Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_0/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_0/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_0/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_0/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_0/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_0/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/query/bias with shape [768] Loading TF weight 
bert/encoder/layer_1/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_1/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_1/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_1/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_1/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_1/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_1/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_10/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_10/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_10/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_10/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_10/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_10/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/value/kernel with shape [768, 768] ` Can somebody help find the problem ?
07-02-2019 17:45:04
07-02-2019 17:45:04
You can only load a tensorflow checkpoint into a `BertForPreTraining` model. I will add a check. Alternatively, you should use the conversion script to make a PyTorch model, and then you can import the resulting PyTorch model into any type of Bert model.<|||||>I use `BertForPreTraining.from_pretrained().bert` to get the `BertModel` from the tensorflow checkpoint; I found this useful.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>You are probably using the "wrong bert". Install bert-tensorflow; there are two packages with the same name.<|||||>Same for DistilBERT: using `base_model.distilbert` will solve the problem.
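For reference, a minimal hedged sketch of the workaround mentioned above. It assumes the checkpoint directory contains `bert_config.json` and TensorFlow checkpoint files named `model.ckpt.*`, that TensorFlow is installed (it is needed to read the checkpoint), and that `from_tf` loading works in the installed version:

```python
from pytorch_pretrained_bert import BertForPreTraining

tf_dir = '/content/uncased_L-12_H-768_A-12'   # path from the issue, used here as a placeholder

# Load the TensorFlow checkpoint into the pre-training model, whose variable
# names cover the full TF checkpoint (including the cls/* heads)...
pretraining_model = BertForPreTraining.from_pretrained(tf_dir, from_tf=True)

# ...then take the bare encoder if a plain BertModel is what you need.
bert = pretraining_model.bert
```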
transformers
748
closed
Release 0.7 - Add Torchscript capabilities
Add Torchscript capabilities to all models.
07-02-2019 14:42:54
07-02-2019 14:42:54
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=h1) Report > Merging [#748](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/708877958a308a0f0e8fd199f8f327e4797f1583?src=pr&el=desc) will **increase** coverage by `0.21%`. > The diff coverage is `96.06%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## xlnet #748 +/- ## ========================================= + Coverage 71.5% 71.72% +0.21% ========================================= Files 35 35 Lines 5587 5633 +46 ========================================= + Hits 3995 4040 +45 - Misses 1592 1593 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...\_pretrained\_bert/tests/modeling\_transfo\_xl\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdGVzdHMvbW9kZWxpbmdfdHJhbnNmb194bF90ZXN0LnB5) | `94.23% <100%> (ø)` | :arrow_up: | | [pytorch\_pretrained\_bert/model\_utils.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxfdXRpbHMucHk=) | `92.61% <100%> (+0.04%)` | :arrow_up: | | [pytorch\_pretrained\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `87.55% <100%> (+0.47%)` | :arrow_up: | | [pytorch\_pretrained\_bert/modeling\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `79.5% <100%> (+0.11%)` | :arrow_up: | | [pytorch\_pretrained\_bert/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `81.57% <100%> (+0.15%)` | :arrow_up: | | [pytorch\_pretrained\_bert/modeling\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfeGxuZXQucHk=) | `74.17% <83.33%> (+0.14%)` | :arrow_up: | | [...torch\_pretrained\_bert/tests/model\_tests\_commons.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdGVzdHMvbW9kZWxfdGVzdHNfY29tbW9ucy5weQ==) | `97.08% <97.22%> (-0.01%)` | :arrow_down: | | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <0%> (-1.07%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=footer). 
Last update [7088779...b43b130](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
747
closed
BERT pretraining routine
Hi, I was wondering whether the scripts for finetuning can be used to pretrain BERT from scratch on a small dataset that does not require TPUs - is there any difference with the TF pretrain code (different batch sampling or train loss evaluation) other than the TPU support? Thank you very much in advance!
07-02-2019 10:14:06
07-02-2019 10:14:06
I would not advise to use them for training from scratch. See #751 for discussion and links.
transformers
746
closed
GPT2Tokenizer for Hindi Data
I was trying to fine-tune GPT2LMHeadModel on a Hindi data corpus, and it is performing well. But when I looked at the tokens generated by the GPT2Tokenizer, I saw that they are almost character-level. I do not understand how this kind of encoding handles Hindi data, or any other non-Roman script, correctly. Can anyone explain how the GPT2Tokenizer works in this respect?
07-02-2019 09:22:14
07-02-2019 09:22:14
I might be wrong, but I think GPT2Tokenizer uses byte pair encoding, a form of subword-level encoding. On an intuitive level, this is in between character level and word level, akin to breaking a word apart by syllable (in reality it breaks the word apart by the highest-frequency patterns). I know some people use SentencePiece tokenization for working with Chinese in BERT, so it might be worthwhile to see if there's a similar effort for a SentencePiece GPT-2.<|||||>@DEBADRIBASAK Can you share the steps for fine-tuning on the Hindi dataset? Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
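To see what the original question describes, here is a small hedged sketch; the exact pieces depend on the downloaded merges file:

```python
from pytorch_pretrained_bert import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

text = "नमस्ते दुनिया"            # "hello world" in Hindi
ids = tokenizer.encode(text)

# Devanagari sequences are rare in the learned merges, so the byte-level BPE
# falls back to very short (often single-byte) symbols, which is why the
# tokens look almost character-level.
print(len(text), "characters ->", len(ids), "BPE ids")
print(tokenizer.decode(ids))     # still round-trips to the original string
```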
transformers
745
closed
fix evaluation bug
The original `run_squad.py` has a potential bug. If we only want to run the script to do evaluation, the model will not be properly loaded. The simple fix is provided.
07-01-2019 21:58:39
07-01-2019 21:58:39
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=h1) Report > Merging [#745](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/dad3c7a485b7ffc6fd2766f349e6ee845ecc2eee?src=pr&el=desc) will **decrease** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #745 +/- ## ========================================== - Coverage 62.27% 62.22% -0.06% ========================================== Files 18 18 Lines 3979 3979 ========================================== - Hits 2478 2476 -2 - Misses 1501 1503 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <0%> (-1.07%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=footer). Last update [dad3c7a...64b2a82](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
744
closed
Recommended multilingual bert cased model returns similar embeddings
I'm trying to get embeddings for multilingual input: ``` tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased", do_lower_case=False) class NeuralNet(BertPreTrainedModel): def __init__(self, config): super(NeuralNet, self).__init__(config) self.bert = BertModel(config) self.apply(self.init_bert_weights) def forward(self, input_ids, token_type_ids=None, attention_mask=None): _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) return pooled_output model = NeuralNet.from_pretrained("bert-base-multilingual-cased") ``` and for some reason all `pooled_output` vectors are very similar with 1e-3 cosine distance for semantically different inputs. Changing model to `bert-base-multilingual-UNcased` works just okay. Any ideas?
07-01-2019 13:24:04
07-01-2019 13:24:04
I second this issue #735 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
743
closed
Cannot reproduce results from version 0.4.0
Hi, I have a research project that I did a few months ago. Now I have a problem reproducing the results of version 0.4.0, and unfortunately, I lost version 0.4.0. Can you please send me the code of this version to [email protected]? In fact, I am not quite sure it's 0.4.0, but I remember I did it in March 2019.
06-30-2019 14:22:07
06-30-2019 14:22:07
`pip install pytorch-pretrained-bert==0.4.0` should work normally<|||||>Though if you did it with the latest release in March 2019 it was probably more 0.6.1 (see the list and dates here: https://github.com/huggingface/pytorch-pretrained-BERT/releases) so `pip install pytorch-pretrained-bert==0.6.1`<|||||>Thank you! On Sun, Jun 30, 2019, 08:40 Thomas Wolf <[email protected]> wrote: > Though if you did it with the latest release in March 2019 it was probably > more 0.6.1 (see the list and dates here: > https://github.com/huggingface/pytorch-pretrained-BERT/releases) so pip > install pytorch-pretrained-bert==0.6.1 > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/pytorch-pretrained-BERT/issues/743?email_source=notifications&email_token=AEX53CZ4RLMCBSQFO2445QDP5DAWZA5CNFSM4H4MVPT2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODY4NP4Y#issuecomment-507041779>, > or mute the thread > <https://github.com/notifications/unsubscribe-auth/AEX53C7IV2A7DRL7723V573P5DAWZANCNFSM4H4MVPTQ> > . >
transformers
742
closed
When not loading a pretrained model, all layers are initialized with copies of the same weights
Although this repo is mostly used for loading and training pre-trained BERT models, the code does support model initialization too! However, I found an issue with the initialization code - because it just makes one layer and copies it, the weights will be identical across all layers at initialization. This probably isn't fatal, since they'll hopefully diverge over time, but it seems a bit odd and it isn't how the [Google BERT repo does it](https://github.com/google-research/bert/blob/master/modeling.py#L827-L882). I replaced the copies with separate layer initializations instead, which should fix this problem.
06-29-2019 14:12:38
06-29-2019 14:12:38
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=h1) Report > Merging [#742](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/dad3c7a485b7ffc6fd2766f349e6ee845ecc2eee?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #742 +/- ## ========================================== - Coverage 62.27% 62.26% -0.01% ========================================== Files 18 18 Lines 3979 3978 -1 ========================================== - Hits 2478 2477 -1 Misses 1501 1501 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `79.46% <100%> (-0.04%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=footer). Last update [dad3c7a...2c03c10](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Oops, my bad - I just realized you initialize weights in `BertModel` after creating them. Never mind!
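For illustration, a small hedged sketch of the two construction patterns discussed in this PR. Note, as the follow-up comment says, that `BertModel` re-initializes all weights via `init_bert_weights` after construction, so in practice the layers end up independent anyway; the sketch only shows the difference between the two patterns in isolation:

```python
import copy
import torch
import torch.nn as nn

n_layers, hidden = 4, 8
layer = nn.Linear(hidden, hidden)          # stand-in for a BertLayer

# One layer deep-copied n times: every copy starts from identical weights.
copied = nn.ModuleList([copy.deepcopy(layer) for _ in range(n_layers)])
print(torch.equal(copied[0].weight, copied[1].weight))    # True

# Each layer constructed separately: independent random initializations.
fresh = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_layers)])
print(torch.equal(fresh[0].weight, fresh[1].weight))      # False (almost surely)
```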
transformers
741
closed
Using BertForNextSentencePrediction and GPT2LMHeadModel in a GAN setup.
I am using the following code (**Training Loop**) as the meat of the training loop whereby the discriminator is BertForNextSentencePrediction and the generator is GPT2LMHeadModel. I have also included the structure of the training data (**Input data:**). The loss in the generator and discriminator appear to be falling correctly, but I have been unable to test successfully whether the model weights are being updated each epoch. This is the section that I am concerned about correctly updating the weights of the generator: ``` #g_loss is discriminator loss of real_sentence and generated next sentence # Set generator to train mode self.generator.train() # Backward propagation g_loss.backward() if (step + 1) % self.accumulation_steps == 0: self.gpt2_optimizer.step() ``` Would also love to know the most accurate way to test that the model weights (specifically, GPT2LMHeadModel) are being updated with each epoch. **Input data:** ``` # Discriminator Train #(pri_sent + nxt_sent), label=[0] #(pri_sent + rdm_sent), label=[1] # Generator input #pri_sent --> gen_sent # Generator Train (via Discriminator Loss) #(pri_sent + gen_sent), label = [0] #(pri_sent + rdm_sent), label = [1] ``` **Training Loop** ``` # Each iteration has a train_discriminator and train_generator phase for phase in ['train_discriminator', 'train_generator']: if phase == 'train_discriminator': # Set discriminator to training mode self.discriminator.train() # Forward propagation d_loss = self.discriminator(tdata['discriminator']['tokens_tensors'], tdata['discriminator']['segments_tensors'], tdata['discriminator']['masked_tensors'], next_sentence_label=tdata['discriminator']['labels']).mean() if self.accumulation_steps > 1: d_loss = d_loss / self.accumulation_steps # Backward propagation d_loss.backward() if (step + 1) % self.accumulation_steps == 0: self.bert_optimizer.step() # Zero the discriminator parameter gradients self.bert_optimizer.zero_grad() else: # Set discriminator to evaluate mode self.discriminator.eval() # Forward propagation g_loss = self.discriminator(tdata['generator']['tokens_tensors'], tdata['generator']['segments_tensors'], tdata['generator']['masked_tensors'], next_sentence_label=tdata['generator']['labels']).mean() if self.accumulation_steps > 1: g_loss = g_loss / self.accumulation_steps # Set generator to train mode self.generator.train() # Backward propagation g_loss.backward() if (step + 1) % self.accumulation_steps == 0: self.gpt2_optimizer.step() # Zero the generator parameter gradients self.gpt2_optimizer.zero_grad() d_epoch_loss += d_loss g_epoch_loss += g_loss # Flush cuda after epoch torch.cuda.empty_cache() d_epoch_loss = float(d_epoch_loss/epoch_batches) g_epoch_loss = float(g_epoch_loss/epoch_batches) g_epoch_loss_list.append(g_epoch_loss) d_epoch_loss_list.append(d_epoch_loss) ```
06-29-2019 13:55:58
06-29-2019 13:55:58
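On the question of verifying that the generator's weights actually change, a minimal hedged sketch is to snapshot the parameters before the optimizer step and compare afterwards (`self.generator` and `self.gpt2_optimizer` are the names used in the issue):

```python
import torch

def changed_parameters(model, before):
    """Return the names of parameters that differ from an earlier snapshot."""
    return [name for name, p in model.named_parameters()
            if not torch.equal(before[name], p.detach())]

# Usage sketch inside the training loop:
# before = {n: p.detach().clone() for n, p in self.generator.named_parameters()}
# g_loss.backward()
# self.gpt2_optimizer.step()
# print(len(changed_parameters(self.generator, before)), "generator tensors changed")
```

Note that if the generated continuation is fed to the discriminator as sampled (discrete) token ids, no gradient can flow from `g_loss` back into GPT-2, so the check above would report no changes regardless of the optimizer settings.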
transformers
740
closed
How to get perplexity score of a sentence using anyone of the given Language Models?
I want to find the perplexity score of a sentence. I know that we can compute the perplexity from the loss, as perplexity = 2^(cross-entropy loss in bits), or equivalently exp(loss) when the loss is measured in nats. Can you tell me how to do it with the models you have listed? It would be of great help.
06-29-2019 11:30:25
06-29-2019 11:30:25
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
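A minimal hedged sketch with GPT-2, one of the listed models: average the per-token cross-entropy (in nats) and exponentiate it; 2^loss applies only if the loss is measured in bits.

```python
import torch
import torch.nn.functional as F
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

sentence = "The quick brown fox jumps over the lazy dog."
input_ids = torch.tensor([tokenizer.encode(sentence)])

with torch.no_grad():
    logits, _ = model(input_ids)          # (1, seq_len, vocab_size), plus `past`

# Each position predicts the *next* token, so shift logits and labels by one.
shift_logits = logits[:, :-1, :].contiguous()
shift_labels = input_ids[:, 1:].contiguous()
loss = F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)),
                       shift_labels.view(-1))     # average nats per token

perplexity = torch.exp(loss).item()               # use 2 ** loss only for a loss in bits
print(perplexity)
```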
transformers
739
closed
where is "pytorch_model.bin"?
06-28-2019 15:09:50
06-28-2019 15:09:50
@jufengada Assuming that you've installed the pytorch_pretrained_bert package properly: if you load any of the `BERT` models, e.g. `BertForSequenceClassification`, with the `.from_pretrained` method and the name of a Bert architecture, say `bert-base-uncased`, then pytorch_model.bin will be downloaded from an S3 bucket into a cache folder in your environment. Another way is to download the entire set of pre-trained weights into a folder and pass the path to that folder in the `.from_pretrained` call; that would also do the trick. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
738
closed
BertTokenizer never_split issue
Hi, I'm using the BertTokenizer to tokenize a piece of text where I use some entity markers to mark the beginning and end of entities, e.g.: > This was among a batch of paperback [E1] Oxford World [/E1] ' s Classics I've manually added such entity markers to _vocab file_ and the _never_split_ tuple in _BertTokenizer_. My purpose is to retain such markers as a token, not to be split into wordpieces. However, when I test the code from command line in linux terminal, the _never_split_ does not work. Entity markers are split into wordpieces. Here is the printout: ``` 06/28/2019 11:41:20 - INFO - __main__ - Writing example 0 of 1000 ['In', '1983', ',', 'a', 'year', 'after', 'the', 'rally', ',', '[', 'E', '##1', ']', 'For', '##sberg', '[', '/', 'E', '##1', ']', 'received', 'the', 'so', '-', 'called', '`', '`', 'genius', 'award', "'", "'", 'from', 'the', '[', 'E', '##2', ']', 'John', 'D', '.', '[', '/', 'E', '##2', ']', 'and', 'Catherine', 'T', '.', 'MacArthur', 'Foundation', '.'] ``` The strange thing is that, the _never_split_ works pretty fine when I test it in PyCharm, and I get my desired output: ``` 06/28/2019 11:53:01 - INFO - __main__ - Writing example 0 of 1000 06/28/2019 11:53:01 - INFO - __main__ - *** Example *** 06/28/2019 11:53:01 - INFO - __main__ - guid: train-61b3a65fb9b7111c4ca4 06/28/2019 11:53:01 - INFO - __main__ - tokens: [CLS] In 1983 , a year after the rally , [E1] For ##sberg [/E1] received the so - called ` ` genius award ' ' from the [E2] John D . [/E2] and Catherine T . MacArthur Foundation . [SEP] 06/28/2019 11:53:01 - INFO - __main__ - input_ids: 101 1130 2278 117 170 1214 1170 1103 11158 117 20 1370 19945 21 1460 1103 1177 118 1270 169 169 13533 2574 112 112 1121 1103 22 1287 141 119 23 1105 6017 157 119 21045 2974 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` The tests are done on the same machine (my local PC) under same virtual environment. The code, pre-trained bert model and parameters used in Pycharm and terminal are the **same**. Since I need to migrate the code to a server for model training, I really need to resolve this issue. I spent some time debugging but have no idea what could be the cause. Could anyone please provide some hints? Thanks in advance.
06-28-2019 04:05:43
06-28-2019 04:05:43
which version of python do you use in these environments?<|||||>> which version of python do you use in these environments? Hi Thomwolf, I'm using Python 3.6.8 for all these environments. <|||||>In case it's helpful, I create a gist to include some details of this issue: https://gist.github.com/ardellelee/4d80ee7a07166bb6d1a203fdd4d7cc07<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
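A hedged sketch of the setup that the working PyCharm run corresponds to. One common cause of the terminal discrepancy is that calling `from_pretrained` with a model name (rather than a local path) silently loads the stock `vocab.txt` from the S3 download cache, so markers added to a local vocab file are never seen; the local path below is hypothetical and must contain the edited `vocab.txt`:

```python
from pytorch_pretrained_bert import BertTokenizer

markers = ("[E1]", "[/E1]", "[E2]", "[/E2]")

tokenizer = BertTokenizer.from_pretrained(
    "/path/to/local/bert-base-cased",        # hypothetical local directory with the edited vocab.txt
    do_lower_case=False,
    never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]") + markers,
)

# With the markers both in never_split and in vocab.txt, they survive as whole tokens.
print(tokenizer.tokenize("a batch of paperback [E1] Oxford World [/E1] 's Classics"))
```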
transformers
737
closed
gpt-2 model doesn't output hidden states of all layers.
Using GPT2Model, it seems to output the hidden states of only the last layer. However, according to the code and documentation, it is expected to output hidden-state features for each layer. Am I making a mistake? Thanks for the advice.
06-28-2019 01:30:28
06-28-2019 01:30:28
Currently not indeed. This option will be in the coming release.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
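Until that option lands, a hedged workaround sketch is to register forward hooks on each transformer block; this assumes the blocks are exposed as `model.h` and that each block returns a `(hidden_state, present)` tuple:

```python
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
model.eval()

hidden_states = []

def save_hidden(module, inputs, output):
    # Each GPT-2 block returns (hidden_state, present); keep the hidden state.
    hidden_states.append(output[0].detach())

hooks = [block.register_forward_hook(save_hidden) for block in model.h]

input_ids = torch.tensor([tokenizer.encode("Hello world")])
with torch.no_grad():
    last_hidden, past = model(input_ids)

print(len(hidden_states))          # one tensor per layer (12 for the small model)
for h in hooks:
    h.remove()
```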
transformers
736
closed
Question regarding crossentropy loss function for BERTMaskedLM
How does BERT handle a large number of classes to predict? The number of classes is essentially the vocabulary size, which is 30522 for the BERT-base model. When BERT tries to predict a word using the CrossEntropy loss, it needs to compute the softmax over a large number of classes. In shallow approaches such as word2vec, negative sampling or hierarchical softmax is used. I wonder why that is not the case for BERT. Thanks in advance.
06-27-2019 23:21:13
06-27-2019 23:21:13
30k is OK for a softmax; it's not that much, and that's because BERT uses a sub-word (open) vocabulary. Full-word (and closed-vocabulary) models like word2vec have to handle several hundred thousand words, hence the specific speed-ups. They are also older, and the computation power available at the time was more constrained.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
735
closed
BERT encoding layer produces same output for all inputs during evaluation
I am having issues with differences between the output of the BERT layer during training and evaluation time. I am fine-tuning BertForSequenceClassification, but have traced the problem to the pretrained BertModel. During training, the sequence_output within BertModel.forward() produces sensible output, for example : [tensor([[[-0.0474, -0.3332, -0.2803, ..., -0.2278, 0.3694, 0.0433], [ 0.1383, -0.2213, 0.1137, ..., 0.0103, 0.6756, 0.0800], [ 0.0701, -0.4075, -0.4439, ..., 0.1196, 0.5344, 0.1538], ..., [ 0.1345, -0.3650, -0.1050, ..., 0.0817, 0.3069, 0.2953], [ 0.1033, -0.2574, -0.0028, ..., -0.1782, 0.4725, 0.0200], [ 0.3067, -0.3785, -0.0043, ..., -0.1458, 0.6485, -0.0157]], During evaluation time, however, it produces the same output for every input within a batch: tensor([[[-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388], [-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388], [-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388], ..., [-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388], [-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388], [-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388]], My evaluation code is below. `def evaluate(self,dataset): self.state.model.eval() # turn on evaluation mode with torch.no_grad(): for x, y in dataset: # Shape of x is (Batch_size, sequence_length) preds = torch.sigmoid(self.state.model(x, token_type_ids = torch.zeros_like(x),labels = None)).numpy()` # I have tried this line with and without .detach() and it makes no difference Because of the uniform output from BertLayer, I also get identical output within preds.
06-27-2019 17:12:22
06-27-2019 17:12:22
Unlike #695 and others regarding non-determinism, I am calling model.eval() <|||||>Can you share your model initialization code as well?<|||||>My model is just a slight modification of BertForSequenceClassification for multilabel. class BertForMultiLabelSequenceClassification(BertForSequenceClassification): """BERT model for classification. This module is composed of the BERT model with a linear layer on top of the pooled output. """ def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None): _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) # In evaluation mode, _ and pooled_output are already wrong by this point. pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) if labels is not None: # Supervised training mode loss_fct = BCEWithLogitsLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1, self.num_labels)) return loss else: return logits #Evaluation mode` This is the training loop where I am able to successfully get BERT embeddings. the call to self.evaluate() is the function in my original comment. def train(self): opt = optim.Adam(self.state.model.parameters(), lr=1e-3) max_epochs = 10 nn.init.xavier_normal_(self.state.model.classifier.weight) print("Initial weights",self.state.model.classifier.weight) for epoch in range(1, max_epochs + 1): # main training loop running_loss = 0.0 self.state.model.train() # turn on training mode progress = tqdm.tqdm(total = len(self.state.train)) for x, y in self.state.train: # thanks to our wrapper, we can intuitively iterate over our data segments = torch.zeros_like(x) #print("X: ",x.shape) #print("Y: ",y.shape) opt.zero_grad() loss = self.state.model(x,segments,labels = y) loss.backward() #compute gradients and backpropagate running_loss += loss.item() opt.step() epoch_loss = running_loss / len(self.state.train) batch_size = x.shape[0] progress.update(batch_size) #increment progress bar # calculate the validation loss for this epoch val_loss = 0.0 roc_auc,f1 = self.evaluate(self.state.dev) score = print('Epoch: {}, ROC-AUC Score: {:.4f}, F1 Score: {:.4f}'.format(epoch, roc_auc,f1)) #print(roc_auc) progress.close() <|||||>@josephvalencia I'm nowhere near a pro, but I've been playing with `BERTForSequenceClassification` since a week or so; I want to share my experience. a) I feel that you're not applying the pre-trained weights to your BERT model, I've seen quite a few adaptations of `BertForSequenceClassification` actually implement `BertPreTrainedModel` and apply `pre-trained` weights released from google. I don't quite see that happening in your code, may be you haven't posted it here yet. I also had a similar issue ( `poor accuracy and heavy loss `) because I failed to use pre-trained weights properly. b) You're better of using `BertAdam` as an optimizer along any decent `learning rate scheduler` rather than AdamOptimizer. And you're running this for `10 epochs` !!? Running for 3 epochs on a 64 GB RAM with multi cores itself is taking me about 3 hours to train :D, may be you're using some sort of magic parallelization technique to speed up your training, I could use that info if that's the case. <|||||>@amit8121 Thanks for the tips. I am calling BertForMultilabelSequenceClassification.from_pretrained() elsewhere. I don't actually plan to train for 10 epochs, I will probably implement early stopping once I have the semantics correct.<|||||>Can I see your from pretrained call? 
The "embeddings" that you're seeing in training stage might just be due to dropout.<|||||>` model = BertForMultiLabelSequenceClassification.from_pretrained('bert-base-uncased', num_labels=num_classes) state = TrainingState(model,test_dataset,dev_dataset,train_dataset) trainer = Trainer(state) trainer.train() `<|||||>Anyone have any ideas? I'm about to give up on this use case<|||||>Hi @josephvalencia, I don't have any hint, unfortunately. If the model works well during training, I can't really understand why it would produce always the same output during evaluation. Do you think you can post a full and self-contained example which exhibit the behavior?<|||||>I have determined that it was an error in my token indexing that happened earlier in my data pipeline / improper use of attention masking<|||||>@josephvalencia What was the solution here? I'm facing the same problem...<|||||>Hi, please open a new issue with a sample from your code and a detailed error log.<|||||>@thomwolf New issue has been opened here - https://github.com/huggingface/transformers/issues/1465<|||||>@thomwolf Is this problem solved<|||||>@thomwolf Is this problem solved<|||||>How the problem is solved? <|||||>i encountered this issue, when i change learning rate value 3e-5 to 5e-5. it worked. when i use learning rate 3e-5, I think it gone local minima. <|||||>Observing same behaviour<|||||>Try experimenting with learning rate and optimizer. Adam with lr=5e-5 worked for me (batch size 64).<|||||>Have the same problem. How is it solved? <|||||>Try with pytorch optimizer not AdamW in transformer.<|||||>Changing my learning rate from 0.01 to 5e-5 worked for me.<|||||>@abdulsalam-bande I had the same output logits for a fine-tuned model, and your solution worked for me (even a slight decrease in the learning rate). Any idea why having a too large learning rate gives this result? @Jayaos did not work (adamw_torch vs adamw_hf). Were you suggesting to use an other optimizer? Specifically, had the same logits for ``` python examples/pytorch/text-classification/run_glue.py --model_name_or_path roberta-large \ --task_name cola --do_train --do_eval \ --max_seq_length 128 --per_device_train_batch_size 8 --learning_rate 5e-5 \ --num_train_epochs 1 --output_dir run_glue_res_lr_5e5 \ ``` but not for ``` python examples/pytorch/text-classification/run_glue.py --model_name_or_path roberta-large \ --task_name cola --do_train --do_eval \ --max_seq_length 128 --per_device_train_batch_size 8 --learning_rate 2e-5 \ --num_train_epochs 1 --output_dir run_glue_res_lr_2e5 ``` Think it may be related to learning rate / batch size ratio.
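Since the original poster traced the problem to token indexing and attention masking, here is a hedged sketch of building the padding mask explicitly and passing it to the model (pad id 0 matches `[PAD]` for `bert-base-uncased`):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

texts = ["a short sentence", "a noticeably longer sentence that needs no padding at all"]
batch = [tokenizer.convert_tokens_to_ids(["[CLS]"] + tokenizer.tokenize(t) + ["[SEP]"])
         for t in texts]

max_len = max(len(ids) for ids in batch)
input_ids = torch.tensor([ids + [0] * (max_len - len(ids)) for ids in batch])
attention_mask = (input_ids != 0).long()      # 1 for real tokens, 0 for [PAD]
token_type_ids = torch.zeros_like(input_ids)

with torch.no_grad():
    _, pooled = model(input_ids, token_type_ids, attention_mask,
                      output_all_encoded_layers=False)

print(pooled[0, :5])
print(pooled[1, :5])    # should clearly differ for different inputs
```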
transformers
734
closed
Erroneous Code
I guess there is a minor mistake in this line. https://github.com/huggingface/pytorch-pretrained-BERT/blob/80684f6f86c13a89fc1e4feac248ef96b013765c/pytorch_pretrained_bert/modeling_transfo_xl.py#L1385 In `TransfoXLLMHeadModel`, the forward computation requires the target (if provided) to have the shape [batch_size, sequence_length], i.e. it is rank-2 there. However, in `ProjectedAdaptiveLogSoftmax` (the case when `config.sample_softmax < 0`; the default value of `config.sample_softmax` is -1), the forward computation (the line indicated above) requires the target (if provided) to have the shape [batch_size * sequence_length], which means it is rank-1 there. This cannot pass the assertion check in the forward computation and raises an error. Fortunately, I have checked the computation logic for the Transformer-XL part, and most of it is correct. Therefore I suggest just a minor change from `softmax_output = self.crit(pred_hid.view(-1, pred_hid.size(-1)), target)` to `softmax_output = self.crit(pred_hid.view(-1, pred_hid.size(-1)), target.reshape(-1))`. If you prefer a PR, I could do this little favor ;)
06-27-2019 06:34:24
06-27-2019 06:34:24
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
733
closed
Added option to use multiple workers to create training data
Added a command line argument to allow using a multiprocessing pool to generate training data for all the epochs at once. The shelve object isn't pickleable, so it can't be used with the Pool
06-26-2019 23:19:00
06-26-2019 23:19:00
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=h1) Report > Merging [#733](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #733 +/- ## ========================================== + Coverage 62.22% 62.27% +0.05% ========================================== Files 18 18 Lines 3979 3979 ========================================== + Hits 2476 2478 +2 + Misses 1503 1501 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=footer). Last update [98dc30b...08ff056](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Nice, thanks!
transformers
732
closed
GPT & GPT2: binary classification fails
Using `OpenAIGPTDoubleHeadsModel` for binary classification fails. `CrossEntropyLoss`, requires that the logits dim matches num_classes. If `input_ids.size()` is (batch x 1 x seq_len) (only one copy of the input sequence) but mc_labels are {0, 1} (two classes), the loss fn returns a shape mismatch. The only way it seems to work is using two copies of the input sequence, one for class 0 and a second for class 1. I tweaked the double heads model to use `BCEWithLogitsLoss` for binary: https://github.com/epsdg/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_openai.py#L955-L960 https://github.com/epsdg/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_gpt2.py#L943-L948 ...and it works fine, but did I miss an intended use pattern? Thanks for this fantastic port of these models - much more user-friendly than the original code.
06-26-2019 19:00:32
06-26-2019 19:00:32
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
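A small hedged sketch of the two loss setups described above, in plain PyTorch rather than the stock `OpenAIGPTDoubleHeadsModel` code: `CrossEntropyLoss` needs one logit per candidate continuation, while `BCEWithLogitsLoss` handles a single-logit yes/no head.

```python
import torch
import torch.nn as nn

batch = 3

# Standard multiple-choice setup: one logit per candidate continuation;
# CrossEntropyLoss picks which candidate is correct (needs >= 2 candidates).
mc_logits = torch.randn(batch, 2)            # (batch, num_choices)
mc_labels = torch.tensor([0, 1, 0])          # index of the correct choice
ce = nn.CrossEntropyLoss()(mc_logits, mc_labels)

# Binary setup with a single copy of the sequence: one logit per example;
# BCEWithLogitsLoss treats it as a yes/no score.
single_logits = torch.randn(batch, 1)        # (batch, 1): one candidate only
binary_labels = torch.tensor([[0.], [1.], [0.]])
bce = nn.BCEWithLogitsLoss()(single_logits, binary_labels)

print(ce.item(), bce.item())
```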
transformers
731
closed
merge
06-26-2019 18:38:58
06-26-2019 18:38:58
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=h1) Report > Merging [#731](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #731 +/- ## ========================================== + Coverage 62.22% 62.27% +0.05% ========================================== Files 18 18 Lines 3979 3979 ========================================== + Hits 2476 2478 +2 + Misses 1503 1501 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=footer). Last update [98dc30b...4633033](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
730
closed
bertForNextSentencePrediction
I copied the code from [PyTorch's official site](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) for `bertForNextSentencePrediction`. I get the next_sent_classif_logits as `tensor([[ 5.2880, -6.0952]])`. How do I get the next sentence from these values?
06-26-2019 12:43:42
06-26-2019 12:43:42
It's a classification task - is the given sentence the next sentence? It's not going to generate the next sentence for you, as BERT is not a classical language model<|||||>So, how do I know from these values whether the next sentence should be classified as the next sentence or not?<|||||>Softmax over it will give you the probabilities - i'm guessing the first is yes next sentence, but you can probably play around with some toy examples to know which dimension is which.<|||||>Thank you for your insight.<|||||>Here is a toy example using BertForNextSentencePrediction ``` import torch import pytorch_pretrained_bert from pytorch_pretrained_bert import BertTokenizer, BertAdam, BertForNextSentencePrediction tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased') # Prepare tokenized input text1 = "what does a technical SEO do?" text2 = "A technical seo optimizes websites blah." # 0=Good / 1 = Bad label = 0 text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"] text2_toks = tokenizer.tokenize(text2) + ["[SEP]"] indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks) segments_ids = [0]*len(text1_toks) + [1]*len(text2_toks) tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) # Load bertForNextSentencePrediction bert_optimizer = BertAdam(model.parameters(), lr = 0.002, warmup = 0.1, max_grad_norm=-1, weight_decay=-0.0001, t_total = 1 ) print(text1_toks + text2_toks) print(segments_ids) print() # Example Evaluate model.eval() # Predict the next sentence classification logits with torch.no_grad(): prediction = model(tokens_tensor, segments_tensors) softmax = torch.nn.Softmax(dim=1) prediction_sm = softmax(prediction) print ("Good/Bad:", prediction_sm[0].tolist()) # Example Train model.train() loss = model(tokens_tensor, segments_tensors, next_sentence_label=torch.tensor([label])) print("Loss with label {}:".format(label),loss.item()) loss.backward() bert_optimizer.step() ```<|||||>> Here is a toy example using BertForNextSentencePrediction > > ``` > import torch > import pytorch_pretrained_bert > from pytorch_pretrained_bert import BertTokenizer, BertAdam, BertForNextSentencePrediction > > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') > model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased') > > # Prepare tokenized input > text1 = "what does a technical SEO do?" > text2 = "A technical seo optimizes websites blah." 
> # 0=Good / 1 = Bad > label = 0 > > text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"] > text2_toks = tokenizer.tokenize(text2) + ["[SEP]"] > > indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks) > segments_ids = [0]*len(text1_toks) + [1]*len(text2_toks) > > tokens_tensor = torch.tensor([indexed_tokens]) > segments_tensors = torch.tensor([segments_ids]) > > # Load bertForNextSentencePrediction > bert_optimizer = BertAdam(model.parameters(), > lr = 0.002, > warmup = 0.1, > max_grad_norm=-1, > weight_decay=-0.0001, > t_total = 1 > ) > > print(text1_toks + text2_toks) > print(segments_ids) > print() > > > # Example Evaluate > model.eval() > # Predict the next sentence classification logits > with torch.no_grad(): > prediction = model(tokens_tensor, segments_tensors) > > softmax = torch.nn.Softmax(dim=1) > prediction_sm = softmax(prediction) > print ("Good/Bad:", prediction_sm[0].tolist()) > > # Example Train > model.train() > loss = model(tokens_tensor, segments_tensors, next_sentence_label=torch.tensor([label])) > print("Loss with label {}:".format(label),loss.item()) > loss.backward() > bert_optimizer.step() > ``` Thanks for this, any idea how do this in batches? How we are supposed to pad different input lenghts?
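On the batching question above, a hedged sketch: pad every pair to the longest sequence in the batch, pad the segment ids the same way, and pass an explicit attention mask so the `[PAD]` positions are ignored.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
model.eval()

pairs = [("what does a technical SEO do?", "A technical seo optimizes websites."),
         ("what does a technical SEO do?", "Bananas are rich in potassium.")]

encoded = []
for first, second in pairs:
    a = ["[CLS]"] + tokenizer.tokenize(first) + ["[SEP]"]
    b = tokenizer.tokenize(second) + ["[SEP]"]
    encoded.append((tokenizer.convert_tokens_to_ids(a + b),
                    [0] * len(a) + [1] * len(b)))

max_len = max(len(ids) for ids, _ in encoded)
input_ids = torch.tensor([ids + [0] * (max_len - len(ids)) for ids, _ in encoded])
token_type_ids = torch.tensor([segs + [0] * (max_len - len(segs)) for _, segs in encoded])
attention_mask = (input_ids != 0).long()     # 0 is [PAD] for bert-base-uncased

with torch.no_grad():
    logits = model(input_ids, token_type_ids, attention_mask)
print(torch.softmax(logits, dim=1))          # column 0 ~ "is the next sentence"
```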
transformers
729
closed
Grover generator support
Grover released their trained model: https://github.com/rowanz/grover I think it should be similar to GPT-2 large. Any plans to support it?
06-26-2019 08:49:43
06-26-2019 08:49:43
Maybe if they open source a model larger than the current GPT-2 large. I'm also happy to welcome PRs to port additional models (as long as they are provided with test/doc/example)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
728
closed
UnicodeDecodeError:
Traceback (most recent call last): File "run_classifier_br.py", line 1061, in <module> main() File "run_classifier_br.py", line 772, in main tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case) File "/home/luwei/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 197, in from_pretrained tokenizer = cls(resolved_vocab_file, *inputs, **kwargs) File "/home/luwei/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 97, in __init__ self.vocab = load_vocab(vocab_file) File "/home/luwei/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 56, in load_vocab token = reader.readline() File "/home/luwei/anaconda3/lib/python3.6/codecs.py", line 321, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
06-26-2019 08:39:21
06-26-2019 08:39:21
@ZhaoxinRuc I'm assuming that some data in your `vocab.txt` file contains bad characters which the UTF-8 codec can't decode properly. When I downloaded the pre-trained weights folder, it came with a `vocab.txt` which didn't have this issue. Check whether you've downloaded a wrong version or somehow changed the contents of `vocab.txt`. It's also possible that your operating system messed with the encoding of certain characters in your `vocab.txt` file. For that, try saving the text file in UTF-8 format, just to make sure. Hope it helps.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
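A small hedged sketch for diagnosing this: a leading `0x80` byte is typical of a torch/pickle binary, which often means the path handed to the tokenizer points at a weights file rather than at `vocab.txt`. If the file really is text in another encoding, it can be re-saved as UTF-8; the path and the assumed source encoding are placeholders.

```python
# Locate the offending byte(s) in vocab.txt and, if the file is simply in a
# different encoding (e.g. latin-1), rewrite it as UTF-8.
path = "vocab.txt"   # hypothetical path to the vocab file being loaded

raw = open(path, "rb").read()
try:
    raw.decode("utf-8")
    print("vocab.txt decodes as UTF-8 just fine")
except UnicodeDecodeError as e:
    print("bad byte", hex(raw[e.start]), "at offset", e.start)
    # Re-save as UTF-8 under an assumed source encoding; adjust as needed.
    text = raw.decode("latin-1")
    open(path, "w", encoding="utf-8").write(text)
```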
transformers
727
closed
Poor Training and evaluation accuracy even with low loss
@spolu @cynthia @thomwolf @davidefiocco I initially used command line arguments to run the `run_classifier.py`, using `cola` as a task, for `Sequence Classification` on a custom data set, I was able to execute and get the results , but they were very poor: an evaluation accuracy of 0.0 and loss of close to .9. Then I decided to write a small wrapper class similar to `BertForSequenceClassificaiton` in `modeling.py`, to invoke a basic `BERT Model` from set of pre-trained classes using `BERT weights`: ex: `model = BERT_MULTILABEL_SEQ_Classify.from_pretrained('bert-base-uncased', num_labels = 8)` - > BERT_MULTILABEL_SEQ_Classify is my wrapper class up on executing this line of code I see the following log information: ``` 06/25/2019 20:40:12 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/dbi/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 06/25/2019 20:40:12 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/dbi/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmp8hjgcb4r 06/25/2019 20:40:17 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } 06/25/2019 20:40:34 - INFO - pytorch_pretrained_bert.modeling - Weights of BERT_MULTILABEL_SEQ_Clasfify not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] 06/25/2019 20:40:34 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BERT_MULTILABEL_SEQ_Clasfify: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] ``` I ignored this log information and proceeded with training my data, the results were similar to my initial attempts ( which were really poor ). Then I realized that, because this model is not using any of the pre-trained weights; it's performing poorly on my classification task. I do have the `uncased_L-12_H-768_A-12` file in the same path as this program is running. It's contents look something like this ![bertpls](https://user-images.githubusercontent.com/23751321/60150916-804a9600-978f-11e9-804f-a131dad0a4d3.png) So, my question's how do I properly invoke the `BERTModel ` so that it's pre-trained weights are also loaded along with it ? I understand that I might have either messed up with signature of the BERT Model or may need to point the bert model to look for weights, but not sure how. Every time I use the `from_pretrained` method to load pre-trained weights, I can see some files being downloaded from s3 buckets but looks like those files do not contain pre-trained weights. It might also be the case that the conversion from `pytorch.bin` (downloaded from s3 buckets) to checkpoint files is not working as expected in `ubuntu 18.04`. Any help is much appreciated. 
TIA. Note: I thought including the code for my entire wrapper class would be irrelevant to this question, so I didn't do so. I have: `Python 3.6.8, PyTorch 1.1.0`.
06-26-2019 03:57:51
06-26-2019 03:57:51
@amit8121 can you just sent the run_classifier.py python command that you are using in terminal to run this model<|||||>@himanshututeja1998 Thanks for the response. This is a similar command to what I used for `run_classifier.py` ``` export BERT_BASE_DIR=./path/to/uncasedweightsfolder/ python bert/run_classifier.py \ --task_name=cola \ --do_train=true \ --do_eval=true \ --data_dir=./data \ --vocab_file=$BERT_BASE_DIR/vocab.txt \ --bert_config_file=$BERT_BASE_DIR/bert_config.json \ --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \ --max_seq_length=128 \ --train_batch_size=32 \ --learning_rate=2e-5 \ --num_train_epochs=3.0 \ --output_dir=./bert_output/ ``` BTW, I also had to change the `get_labels` function of this code to support multi label classification ( 8 in my case). I believe the issue here has more to do with the kind of weights file I downloaded as a part of my uncased folder. From my knowledge, I believe task of `cola` might not be too suitable for multi label classification. And our team felt this command line methodology is not ideal for production grade scenario, so we decided to develop a `BERT wrapper` and train it.<|||||>@amit8121 Use this :::::::: python/python3 run_classifier.py --task_name cola --do_eval --do_lower_case --data_dir DATA_DIR PATH/ --bert_model PATH TO PRETRAINNED WEIGHT (uncased_L-12_H-768_A-12)/ --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir MODEL_OUTPUT<|||||>@himanshututeja1998 I appreciate your willingness to help, but even if the above command runs and trains successfully, of which I have some reservations on, it still doesn't entirely solve our problem i.e., we ultimately want to make production grade code using the pre-trained weights rather than using command line tools. Thanks for the response.<|||||>> @amit8121 Use this :::::::: > python/python3 run_classifier.py --task_name cola --do_eval --do_lower_case --data_dir DATA_DIR PATH/ --bert_model PATH TO PRETRAINNED WEIGHT (uncased_L-12_H-768_A-12)/ --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir MODEL_OUTPUT Still getting the same results: ``` INFO:tensorflow:***** Eval results ***** INFO:tensorflow: eval_accuracy = 0.0 INFO:tensorflow: eval_loss = 3.9577928 INFO:tensorflow: global_step = 25 INFO:tensorflow: loss = 3.9577928 ``` for the command: ` python3 bert/run_classifier.py --task_name=cola --do_train=true --do_eval=true --data_dir=./data --vocab_file=$BERT_BASE_DIR/vocab.txt --bert_model=$BERT_BASE_DIR/pytorch_model.bin --bert_config_file=$BERT_BASE_DIR/bert_config.json --max_seq_length=128 --train_batch_size=32 --learning_rate=2e-5 --num_train_epochs=3.0 --output_dir=./bert_output/` just an FYI.<|||||>@spolu @cynthia @thomwolf I think I might have figured some parts of the issue out. When I load bert model using `BERT_CLASS.from_pretrained('bert-base-uncased')` only the `pytorch.bin` file gets downloaded form S3 bucket in to a temp folder under system's /tmp folder. Interestingly, `BertForSequenceClassification` class couldn't load pre-trained weights form this file. But, when I copied the `pytorch.bin` file to folder `uncased_L-12_H-768_A-12`, with all the above mentioned files in tact and changed the path to `from_pretrained` file to point to uncased folder; I was able to load the model with pre-trained weights. 
My results are much better than previous rounds (I only ran it for 4 epochs though): ``` For each batch in an epoch: Training - ~ 70 % accuracy , with loss ~0.10; Evaluation - About ~ 10% accuracy, with loss ~ 0.35 ``` I did use `BertAdam` along with decent learning rate scheduling. But the results are a bit under whelming and there's a huge difference in training and evaluation accuracies, does it have anything to do with the size of the data ? I only have ~ 350 rows of training data and ~ 100 rows of testing data.<|||||>@amit8121 on which dataset you are training this because this can also due to mismatched task name "cola / mnli " etc. <|||||>@himanshututeja1998 Underlying data set shouldn't matter as long as you've transformed it in a format required for `Sequence Classification`, I made sure that I `pre-processed` my data as per `BertForSeqeunceClassification` requirements ; my initial issue was not being able to use pre-trained weights which I figured out and results were much better, yet not close to SOTA. My training and evaluation losses go as low as 0.06 on average for each epoch, yet accuracy hovers around 65 - 70 % which is strange. This issue seems to be quite common for both GPT and BERT models, there are multiple `issues` on these topics .I believe the issue is with the way we're trying to transform our target variables. Hope the authors would find time to respond.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
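A hedged sketch consolidating what the thread converged on: keep `pytorch_model.bin`, `bert_config.json` and `vocab.txt` together in one folder and pass that folder to `from_pretrained`. It is shown with the stock `BertForSequenceClassification`; the same call applies to the custom multi-label wrapper from the issue, and `t_total` below is a placeholder for batches-per-epoch times epochs.

```python
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification, BertAdam

weights_dir = "./uncased_L-12_H-768_A-12"   # assumed to contain pytorch_model.bin,
                                            # bert_config.json and vocab.txt

tokenizer = BertTokenizer.from_pretrained(weights_dir, do_lower_case=True)
model = BertForSequenceClassification.from_pretrained(weights_dir, num_labels=8)

# BertAdam with warmup; a small learning rate is usually what makes fine-tuning work.
optimizer = BertAdam(model.parameters(), lr=2e-5, warmup=0.1, t_total=1000)
```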
transformers
726
closed
Examples does not work with apex optimizers
Under the fp16 option, the optimizer is replaced by one from apex, which does not have a `get_lr()` method. https://github.com/huggingface/pytorch-pretrained-BERT/blob/98dc30b21e3df6528d0dd17f0910ffea12bc0f33/examples/run_squad.py#L315-L317 You should be able to reproduce the error by running the example [here](https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-bert-large-on-gpus) in the README; the error message should be something like `AttributeError, FP16_Optimizer does not have attribute get_lr()`
06-26-2019 02:12:21
06-26-2019 02:12:21
You can comment out line 316 with # and that solves this problem.
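A hedged sketch of the usual workaround: compute the learning rate from the schedule yourself instead of calling `get_lr()` on apex's `FP16_Optimizer`. The variable names mirror the example script and are stand-ins, and the schedule here is a plain linear warmup followed by linear decay.

```python
# Hypothetical stand-ins for values the example script already defines.
base_lr, global_step, num_train_optimization_steps = 3e-5, 100, 1000

def linear_warmup(step, total_steps, warmup=0.1):
    """Linear warmup followed by linear decay."""
    x = step / total_steps
    return x / warmup if x < warmup else max((1.0 - x) / (1.0 - warmup), 0.0)

lr_this_step = base_lr * linear_warmup(global_step, num_train_optimization_steps)

# FP16_Optimizer exposes param_groups, so set/log the rate through them
# instead of calling get_lr():
# for param_group in optimizer.param_groups:
#     param_group['lr'] = lr_this_step
print(lr_this_step)
```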
transformers
725
closed
BERT Input size reduced to half in forward function
I was trying to modify your BertForSequenceClassification class for long sequence classification. Like below: ``` class MyBertForSequenceClassification(BertPreTrainedModel): def __init__(self, config, num_labels=2, output_attentions=False): super(MyBertForSequenceClassification, self).__init__(config) self.output_attentions = output_attentions self.num_labels = num_labels self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size*20, num_labels) self.softmax = nn.Softmax() self.apply(self.init_bert_weights) def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None, head_mask=None): print(input_ids.shape) # half of actual passed size outputs = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) _, pooled_output = outputs pooled_output = self.dropout(pooled_output) flat_pooled_output = pooled_output.view(-1) print(flat_pooled_output.shape) logits = self.classifier(flat_pooled_output) logits = self.softmax(logits) return logits ``` I found that when I passed the input_ids tensor with dimensions (40, 128) into the model, the actual input_ids I got in the forward function was (20, 128). It always reduce my input to half of original size.
06-25-2019 16:52:21
06-25-2019 16:52:21
Maybe you have 2 GPUs?<|||||>@thomwolf Thanks a lot. I forgot I was running on two gpus.
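For reference, a tiny hedged sketch of why this happens: `nn.DataParallel` splits the batch dimension across the visible GPUs, so each replica's `forward` sees `batch_size / num_gpus` rows.

```python
import torch
import torch.nn as nn

class EchoShape(nn.Module):
    def forward(self, x):
        print("forward sees", tuple(x.shape))   # each GPU replica prints its own shard
        return x.sum(dim=1)

model = EchoShape()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()       # splits dim 0 across the visible GPUs
    x = torch.randn(40, 128).cuda()
else:
    x = torch.randn(40, 128)

out = model(x)   # with 2 GPUs, each replica's forward receives (20, 128)
```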
transformers
724
closed
fixing bugs in load_rocstories_dataset in run_openai_gpt.py
The csv reader requires a delimiter argument to read the .tsv file of the given example dataset. I've also added a link to the dataset and provided sample eval results in comments. Also, the eval dataset needs to be different from the training dataset, which I've also fixed in the command given for running this script.
06-25-2019 14:31:38
06-25-2019 14:31:38
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=h1) Report > Merging [#724](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #724 +/- ## ========================================== + Coverage 62.22% 62.27% +0.05% ========================================== Files 18 18 Lines 3979 3979 ========================================== + Hits 2476 2478 +2 + Misses 1503 1501 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=footer). Last update [98dc30b...63da86c](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi, what is the `run_openai_gpt2_custom.py` file for?<|||||>Hi Thomas, I customized the run_openai_gpt.py file to add support for gpt-2. That's why the name might be a bit confusing (run_openai_gpt2_custom.py). I'll better create a separate PR for this file, with a better name.Any suggestion?
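For reference, a minimal sketch of the delimiter fix the PR describes; the filename is a placeholder for the tab-separated evaluation file.

```python
import csv

def load_rocstories_tsv(path):
    """Read a tab-separated ROCStories-style file, as described in the PR."""
    with open(path, encoding="utf_8") as f:
        return [row for row in csv.reader(f, delimiter="\t")]

# rows = load_rocstories_tsv("cloze_test_val__spring2016.tsv")  # placeholder path
```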
transformers
723
closed
Update Adam optimizer to follow pytorch convention for betas parameter (#510)
see #510 Update optimiser to follow pytorch convention ([Adam Optimiser](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam)) instead of tensorflow, to allow for better integration with other pytorch libraries and frameworks.
06-25-2019 08:29:30
06-25-2019 08:29:30
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=h1) Report > Merging [#723](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **not change** coverage. > The diff coverage is `83.33%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #723 +/- ## ======================================= Coverage 62.22% 62.22% ======================================= Files 18 18 Lines 3979 3979 ======================================= Hits 2476 2476 Misses 1503 1503 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/optimization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvb3B0aW1pemF0aW9uLnB5) | `74.26% <100%> (ø)` | :arrow_up: | | [pytorch\_pretrained\_bert/optimization\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvb3B0aW1pemF0aW9uX29wZW5haS5weQ==) | `34.84% <66.66%> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=footer). Last update [98dc30b...c988590](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok, can you update the examples that use these optimizers as well?<|||||>> Ok, can you update the examples that use these optimizers as well? I had a look at the examples, all seem to use the default values for b1/b2 so there shouldn't be any change required.<|||||>Perfect!
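The change in this PR is purely about the constructor signature: `torch.optim.Adam` takes a single `betas=(b1, b2)` tuple, while, as I understand it, the optimizers here previously took separate `b1`/`b2` keyword arguments in the TensorFlow style. A small sketch of the two conventions (the pre-PR `BertAdam` call is shown only as a comment and should be treated as an assumption):
```python
import torch

params = [torch.nn.Parameter(torch.zeros(10))]

# PyTorch convention: one betas tuple
adam = torch.optim.Adam(params, lr=3e-5, betas=(0.9, 0.999))

# Old TensorFlow-style convention used before this PR (assumed):
# BertAdam(params, lr=3e-5, b1=0.9, b2=0.999)
# After the PR, BertAdam follows the same betas=(0.9, 0.999) form as torch.optim.Adam.
```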
transformers
722
closed
low accuracy when fine tuning for the MRPC task with large model
I noticed that on the website you said: "Here is an example using distributed training on 8 V100 GPUs and Bert Whole Word Masking model to reach a F1 > 92 on MRPC." However, when I fine-tuned the model with max_sequence_length=128 and batch_size=12 on a single 11G GPU, it gives an accuracy of 0.68.
acc = 0.6838235294117647
acc_and_f1 = 0.7480253018237863
eval_loss = 0.6240295206799227
f1 = 0.8122270742358079
global_step = 918
loss = None
I wonder what has led to that. The command I used:
"python run_classifier.py --bert_model bert-large-uncased-whole-word-masking --task_name MRPC --do_train --do_eval --do_lower_case --data_dir E:\Users\..... --max_seq_length 64 --train_batch_size 12 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir E:\Users\...."
06-24-2019 23:48:54
06-24-2019 23:48:54
The batch size will be 8 times smaller with only one GPU; increase it by a factor of 8 using gradient accumulation, e.g. `--train_batch_size 96 --gradient_accumulation_steps 8`<|||||>Thank you for your help. However, when I use this command: python run_classifier.py --bert_model bert-large-uncased-whole-word-masking --task_name MRPC –-do_train --do_eval --do_lower_case --data_dir E:\Users\...\MRPC --max_seq_length 128 --train_batch_size 48 --gradient_accumulation_steps 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir E:\Users\...\MRPC_result it still gives me an accuracy that is too low. By the way, I am not using the latest version of the package, could that be the cause? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
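The suggestion above works because gradient accumulation simulates a large batch by summing gradients over several small forward/backward passes before each optimizer step. A generic, self-contained sketch of the pattern (independent of `run_classifier.py`, which wires this up through the `--gradient_accumulation_steps` flag):
```python
import torch
import torch.nn as nn

model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
accumulation_steps = 8  # 8 micro-batches of 12 behave roughly like one batch of 96

model.zero_grad()
for step in range(32):
    inputs = torch.randn(12, 16)               # micro-batch of 12 examples
    labels = torch.randint(0, 2, (12,))
    loss = loss_fn(model(inputs), labels)
    (loss / accumulation_steps).backward()     # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                       # one parameter update per 8 micro-batches
        model.zero_grad()
```
In the example script the requested batch size is divided into micro-batches in the same way, so the effective batch seen by the optimizer matches the multi-GPU setting.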
transformers
721
closed
Usual loss when pretraining?
We are pretraining on our own corpus using the `pregenerate_training_data.py` and `finetune_on_pregenerated.py` scripts. The input text to the first script follows the same format as `sample_text.txt` in the samples folder, and contains about 515000 lines of text. We run `finetune_on_pregenerated.py` for 60 epochs with the default learning rate (3e-5), a sequence length of 128, and a batch size of 300 on 8 GPUs. The loss got to 1.3. We used this model for a classification task and we didn't see any difference from using the original pretrained BERT. We also compared the weights of some attention layers and they are very similar. Do you have an estimate as to what the loss should be in order to see improvements? We are also aware we should use a sequence length of 512 for part of the pretraining process because most of our input sequences are 512 long, but we were still expecting some kind of change. Also, do you think it might be a problem with the small size of our corpus?
06-24-2019 15:13:35
06-24-2019 15:13:35
@PedroUria We're in a similar boat as you are. In our case, the problem was with accuracy though. We used `BertForSequenceClassification` on a multi-label classification task. We've actually written a variant of the `BertForSequenceClassification` class from `modeling.py` of this repository, replacing `CrossEntropyLoss` with `BCEWithLogitsLoss`. Initially we faced some issues with loading the pre-trained weights, but once we did so successfully and started training (8-label target), even though our loss started off very small (`0.24` in the first step of an epoch) and went down to `0.023`, we never saw accuracy on the training set above `0.71`, or above `.81` on validation, in a specific batch. In comparison we have a much smaller dataset than yours, not even close: `350 rows` for training and `120 rows` for evaluation. So it's likely that the poor results we're experiencing are because of the minimal data size. We ran for only 4 epochs though; we also used `Cyclic_LR` for scheduling with the `BertAdam` optimizer and the same learning rate as yours. I did create an issue looking for answers. Hope they'll respond. <|||||>Are you finetuning on the data afterwards? If yes, then I believe it is due to the very small dataset. Try to finetune on a larger dataset for a similar task first and then finetune on your small dataset.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
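For reference, the multi-label variant described in the comment above usually amounts to swapping the loss in the classification head. A hedged sketch of such a head on top of `BertModel` (not the exact code the commenter used; the pooled-output wiring follows the `BertForSequenceClassification` pattern in pytorch-pretrained-bert, so check it against your installed version):
```python
import torch
import torch.nn as nn
from pytorch_pretrained_bert.modeling import BertModel, BertPreTrainedModel

class BertForMultiLabelClassification(BertPreTrainedModel):
    def __init__(self, config, num_labels=8):
        super(BertForMultiLabelClassification, self).__init__(config)
        self.num_labels = num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, num_labels)
        self.apply(self.init_bert_weights)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
        _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask,
                                     output_all_encoded_layers=False)
        logits = self.classifier(self.dropout(pooled_output))
        if labels is not None:
            # BCEWithLogitsLoss treats each label as an independent sigmoid
            loss_fct = nn.BCEWithLogitsLoss()
            return loss_fct(logits, labels.float())
        return logits

# model = BertForMultiLabelClassification.from_pretrained('bert-base-uncased', num_labels=8)
```
Predictions then come from `torch.sigmoid(logits) > 0.5` rather than an argmax, which is also why accuracy numbers are not directly comparable with the single-label case.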
transformers
720
closed
Import Error: cannot import name 'warmup_linear'
I get the following error:
```
File "/Users/gregory/PROJECTS/MyML/MLClassification/TrainAndTest/Models/controller.py", line 11, in <module>
    from Models.bert import BertModel
File "/Users/gregory/PROJECTS/MyML/MLClassification/TrainAndTest/Models/bert.py", line 9, in <module>
    from pytorch_pretrained_bert.optimization import BertAdam, warmup_linear
ImportError: cannot import name 'warmup_linear'
```
This problem was already noted in the comments on the following issue: [https://github.com/huggingface/pytorch-pretrained-BERT/issues/499](https://github.com/huggingface/pytorch-pretrained-BERT/issues/499)
The response said that it was fixed with PR #506 and suggested cloning this Git repository and installing the package from it. However, @goyalsaransh97 already mentioned that the problem persists for them, and so it does for me too.
BTW, please note that the code works fine on a machine where all packages were installed about a year ago.
06-24-2019 14:00:40
06-24-2019 14:00:40
I probably was wrong and the issue was supposed to be fixed with PR #518... Anyway, the latest was merged a while ago and did not help. <|||||>You have to wait for the next release or use the master branch<|||||>@thomwolf , > You have to wait for the next release or use the master branch I cloned the git repo directly from here: `git clone https://github.com/huggingface/pytorch-pretrained-BERT.git` I think it takes master branch by default, isn't it? <|||||>Explicit "git checkout master" changes nothing.<|||||>The latest master branch gives the same issue. Version 0.5.1 works as of now without this issue<|||||>The version 0.4.0 doesn't give this issue. pip install pytorch_pretrained_bert==0.4.0<|||||>> The version 0.4.0 doesn't give this issue. > pip install pytorch_pretrained_bert==0.4.0 Downgrading to 0.4.0 solved my problem.<|||||>This issue has popped up again in 0.6.2 for me. Downgrading to 0.6.1 solved it ```bash pip install pytorch-pretrained-bert==0.6.1 ```
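If pinning a version isn't an option, one workaround is to define the schedule function locally instead of importing it: it is just a linear warm-up followed by a linear decay of the learning-rate multiplier. The exact formula below reflects my reading of older releases of the library, so treat it as an assumption rather than the canonical implementation:
```python
def warmup_linear(x, warmup=0.002):
    """x is the fraction of training completed (global_step / total_steps)."""
    if x < warmup:
        return x / warmup                           # linear warm-up from 0 to 1
    return max((x - 1.0) / (warmup - 1.0), 0.0)     # linear decay back to 0

# Used the same way as in the example scripts (assumed):
# lr_this_step = args.learning_rate * warmup_linear(global_step / num_train_steps,
#                                                   args.warmup_proportion)
```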
transformers
719
closed
Embedding and predictions in one forward pass
Is it possible to mix `BertModel` and `BertForMaskedLM`? i.e. is it possible to get the embedding and the predictions in one forward pass?
06-24-2019 07:31:48
06-24-2019 07:31:48
Yes, just make your own PyTorch model taking inspiration from BertModel and BertForMaskedLM. If you sub-class `BertPreTrainedModel`, you'll be able to load the pretrained weights using the `from_pretrained()` method<|||||>Okay, thank you :)
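A sketch of what that combined module could look like with the pytorch-pretrained-bert classes; it mirrors `BertForMaskedLM` but also returns the encoder outputs. The `BertOnlyMLMHead` wiring below is taken from my reading of `modeling.py`, so double-check the constructor arguments against your installed version:
```python
from pytorch_pretrained_bert.modeling import (BertModel, BertPreTrainedModel,
                                              BertOnlyMLMHead)

class BertEmbeddingsAndMLM(BertPreTrainedModel):
    def __init__(self, config):
        super(BertEmbeddingsAndMLM, self).__init__(config)
        self.bert = BertModel(config)
        # ties the decoder to the input embedding matrix, as BertForMaskedLM does
        self.cls = BertOnlyMLMHead(config, self.bert.embeddings.word_embeddings.weight)
        self.apply(self.init_bert_weights)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None):
        sequence_output, pooled_output = self.bert(
            input_ids, token_type_ids, attention_mask,
            output_all_encoded_layers=False)
        prediction_scores = self.cls(sequence_output)
        # per-token hidden states, pooled sentence vector, and MLM logits in one pass
        return sequence_output, pooled_output, prediction_scores

# model = BertEmbeddingsAndMLM.from_pretrained('bert-base-uncased')
```
Because it sub-classes `BertPreTrainedModel`, `from_pretrained()` loads both the encoder and the MLM head weights from the published checkpoint.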
transformers
718
closed
Incorrect docstring for BertForMaskedLM
The docstring for the head_mask argument to the BertForMaskedLM class is repeated and one is incorrect - I presume it's just a copy-paste mistake.
06-23-2019 17:49:03
06-23-2019 17:49:03
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=h1) Report > Merging [#718](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #718 +/- ## ======================================= Coverage 62.22% 62.22% ======================================= Files 18 18 Lines 3979 3979 ======================================= Hits 2476 2476 Misses 1503 1503 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `79.49% <ø> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=footer). Last update [98dc30b...8d6a118](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks, we'll make a big clean up of the docstrings for the coming release.
transformers
717
closed
BPE vocab
Do you guys have the functionality to support BPE with the models?
06-22-2019 23:38:38
06-22-2019 23:38:38
Not speaking for the core developers, but `pytorch-pretrained-BERT` supports it, because: * GPT-1 uses BPE, see the code [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_openai.py#L73) * GPT-2 uses BPE at the byte level, see the code [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_gpt2.py#L88) BERT uses a variant of BPE (WordPiece), and the pretrained language model for Transformer-XL was trained on WikiText-103 (so it is a word-based model). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
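Concretely, the BPE handling is exposed through the tokenizer classes, so you normally never touch the merge tables yourself. A short sketch (the exact sub-tokens produced depend on the downloaded vocab and merges files, and the GPT-1 tokenizer falls back to a simpler normalizer if ftfy/spacy aren't installed):
```python
from pytorch_pretrained_bert import OpenAIGPTTokenizer, GPT2Tokenizer

# GPT-1: classic BPE over normalised, lower-cased text
gpt_tok = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
print(gpt_tok.tokenize("Byte pair encoding is handled for you"))

# GPT-2: byte-level BPE, no unknown tokens by construction
gpt2_tok = GPT2Tokenizer.from_pretrained('gpt2')
ids = gpt2_tok.encode("Byte pair encoding is handled for you")
print(ids)
print(gpt2_tok.decode(ids))
```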
transformers
716
closed
Add tie_weights to XLNetForSequenceClassification
XLNetForSequenceClassification doesn't have tie_weights(), but initialization will call it. Or should we add such a function in XLNetPretrainedModel?
06-22-2019 22:15:23
06-22-2019 22:15:23
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=h1) Report > Merging [#716](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c946bb51a61f67b0c9eaae1c9cf6f164a7748e37?src=pr&el=desc) will **increase** coverage by `0.03%`. > The diff coverage is `50%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## xlnet #716 +/- ## ========================================== + Coverage 62.18% 62.22% +0.03% ========================================== Files 22 22 Lines 4742 4744 +2 ========================================== + Hits 2949 2952 +3 + Misses 1793 1792 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/modeling\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfeGxuZXQucHk=) | `65.16% <50%> (-0.06%)` | :arrow_down: | | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=footer). Last update [c946bb5...00547bd](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks but we don't need that, the model was work in progress.
transformers
715
closed
Include a reference for LM finetuning
@lopuhin recently made me aware of a published paper covering domain fine-tuning of BERT models, so I added a reference to the LM finetuning README.
06-22-2019 14:08:02
06-22-2019 14:08:02
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=h1) Report > Merging [#715](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #715 +/- ## ======================================= Coverage 62.27% 62.27% ======================================= Files 18 18 Lines 3979 3979 ======================================= Hits 2478 2478 Misses 1501 1501 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=footer). Last update [c304593...c7b2808](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Nice!
transformers
714
closed
Correct a broken link on README
I've corrected a broken link and its context in the README.
06-22-2019 11:36:35
06-22-2019 11:36:35
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=h1) Report > Merging [#714](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #714 +/- ## ======================================= Coverage 62.27% 62.27% ======================================= Files 18 18 Lines 3979 3979 ======================================= Hits 2478 2478 Misses 1501 1501 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=footer). Last update [c304593...ada0d8f](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>👍
transformers
713
closed
TypeError: expand_as() takes 1 positional argument but 5 were given
[modeling.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py) line [870](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L870): `head_mask = head_mask.expand_as(self.config.num_hidden_layers, -1, -1, -1, -1)` raises the error above; I tried `head_mask=torch.tensor([1, 2, 3])` and similar inputs.
06-22-2019 08:41:11
06-22-2019 08:41:11
Oh yes, this will be fixed in the coming PR #711. Head mask is an option to explore the model internals, it's not for production. See the `bertology.py` example script.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
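The error in this issue comes from the difference between `Tensor.expand`, which takes target sizes, and `Tensor.expand_as`, which takes another tensor whose shape is copied. A small standalone illustration of the kind of fix involved (the 5-D shape just mimics the head-mask broadcast, it isn't the exact model code):
```python
import torch

num_hidden_layers, num_heads = 12, 12
head_mask = torch.ones(num_heads)  # e.g. one value per attention head

# Broken: expand_as expects a tensor argument, not a list of sizes
# head_mask.expand_as(num_hidden_layers, -1, -1, -1, -1)  -> TypeError

# Working: expand takes the target sizes directly
expanded = head_mask.view(1, 1, -1, 1, 1).expand(num_hidden_layers, -1, -1, -1, -1)
print(expanded.shape)  # torch.Size([12, 1, 12, 1, 1])
```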
transformers
712
closed
BERT Tokenizer not working! Failed to load the bert-base-uncased model.
The sentence that is being tokenized is: "Weather: Summer’s Finally Here. So Where Is It?"
But it gives the following error. Error message:
```
AttributeError                            Traceback (most recent call last)
<ipython-input-78-c51eef61e2b9> in <module>
----> 1 correct_pairs = convert_sentence_pair(df_full.title.tolist(), df_full.desc.tolist(), max_seq_length=200, tokenizer=tokenizer)
      2
      3

<ipython-input-76-da322eec2f23> in convert_sentence_pair(titles, descs, max_seq_length, tokenizer)
      3 for (ex_index, (title, desc)) in enumerate(zip(titles, descs)):
      4     print(title)
----> 5     tokens_a = tokenizer.tokenize(title)
      6
      7     tokens_b = None

AttributeError: 'NoneType' object has no attribute 'tokenize'
```
When I tried to load the module manually I got the following issue:
```
tokenizer = BertTokenizer.from_pretrained(
...     "bert-base-uncased", do_lower_case=True,
...     cache_dir=PYTORCH_PRETRAINED_BERT_CACHE)
Model name 'bert-base-uncased' was not found in model name list (bert-base-cased, bert-large-uncased, bert-large-cased, bert-base-multilingual-cased, bert-base-chinese, bert-base-uncased, bert-base-multilingual-uncased). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt' was a path or url but couldn't find any file associated to this path or url.
```
Can anyone please help?
06-21-2019 19:41:48
06-21-2019 19:41:48
Do you have a good internet connection? The error messages will be improved in the coming release but usually, this comes from the library not being able to reach AWS S3 servers to download the pretrained weights.<|||||>@thomwolf Thank you so much for your quick response! I followed your advice to people on other posts where they can't load the model. What I did then is to try to download and test the model in the command line. So I tried the following and it worked. What I couldn't understand is the fact that why I have to manually import BERT packages in a python shell when I already installed it using pip3? Below is what I tried and it worked. >>> from pytorch_pretrained_bert.modeling import BertForNextSentencePrediction KeyboardInterrupt >>> model = BertForNextSentencePrediction.from_pretrained( ... "bert-base-uncased" ... ).to(device) 100%|████████████████████████████████████████████| 407873900/407873900 [00:08<00:00, 48525133.57B/s] Traceback (most recent call last): File "<stdin>", line 3, in <module> NameError: name 'device' is not defined ############################################################ I fixed the device thing and below is the proper output. >>> from pytorch_pretrained_bert.modeling import BertForNextSentencePrediction >>> model = BertForNextSentencePrediction.from_pretrained( ... "bert-base-uncased" ... ).to(device) <|||||>I solved the problem by removing 'cache_dir=PYTORCH_PRETRAINED_BERT_CACHE'. The function is trying to find the downloaded model in your cache_dir, but if you haven't downloaded anything. then you should remove this argument.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>If you are using Kaggle then make sure that the internet toggle button is switched on the right-hand side.
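When the S3 download fails (or the cache directory points somewhere empty), one fallback is to download the vocab file once and point `from_pretrained` at the local copy; as far as I know the method accepts a local path as well as a model shortcut name. A sketch, with the local path being a placeholder:
```python
from pytorch_pretrained_bert import BertTokenizer

# Normal path: resolves the shortcut name and downloads the vocab from S3 into the cache
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

# Offline fallback (assumed usage): point at a vocab file you downloaded yourself, e.g. from
# https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt
# tokenizer = BertTokenizer.from_pretrained('/path/to/bert-base-uncased-vocab.txt',
#                                           do_lower_case=True)

print(tokenizer.tokenize("Weather: Summer's Finally Here. So Where Is It?"))
```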
transformers
711
closed
PyTorch-Transformers 1.0 - w. XLNet and XLM model - Standard API - Torchscript compatibility
Current status: - [x] model with commented code and pretrained loading logic - [x] tokenizer - [x] tests for model and tokenizer - [x] checking standard deviation of hidden states with TF model is ok (max dev btw 1e-4 & 1e-5 until last layer, last layer 1e-3, higher than bert but should be ok, investigated this in details, comes from the conjunction of layer_norm and slight differences in internal PT vs. TF ops. Add some graphs to readme) - [x] converting and uploading model to S3 Model/tokenizer are usable, now just need to - [x] check the model behave well under various conditions and in a few corner cases - [ ] add `XLNetForQuestionAnswering` classes variants - [x] add `XLNetForSequenceClassification` classes variants - [ ] add a finetuning example with results close to TF - [ ] add models in README - [ ] add models on torch.hub
06-21-2019 16:40:18
06-21-2019 16:40:18
@thomwolf I could get the `XLNetLMHeadModel` running, but I have some issues with the "normal" `XLNetModel` implementation: ```python import torch from pytorch_pretrained_bert import XLNetTokenizer, XLNetModel import logging logging.basicConfig(level=logging.INFO) tokenizer = XLNetTokenizer.from_pretrained("xlnet-large-cased") text = "Who was Jim Henson ? Jim Henson was a puppeteer" tokenized_text = tokenizer.encode(text) indexed_tokens = torch.tensor([tokenized_text]) model = XLNetModel.from_pretrained("xlnet-large-cased") model.eval() with torch.no_grad(): hidden_states, mems = model(indexed_tokens) print(hidden_states) ``` -> is currently not working, `hidden_states = model(indexed_tokens)` always returns `nan`s for the embeddings. Thanks :heart: <|||||>Indeed `from_pretrained()` was not loading weights in `XLNetModel` (did you see that in the logs?). Should be fixed now. This is all very WIP so beware @stefan-it!<|||||># [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=h1) Report > Merging [#711](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **increase** coverage by `16.62%`. > The diff coverage is `81.32%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #711 +/- ## =========================================== + Coverage 62.27% 78.89% +16.62% =========================================== Files 18 34 +16 Lines 3979 6180 +2201 =========================================== + Hits 2478 4876 +2398 + Misses 1501 1304 -197 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/tests/modeling\_openai\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfb3BlbmFpX3Rlc3QucHk=) | `84.21% <ø> (ø)` | | | [pytorch\_transformers/tests/conftest.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvY29uZnRlc3QucHk=) | `90% <ø> (ø)` | | | [pytorch\_transformers/tests/modeling\_xlnet\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxuZXRfdGVzdC5weQ==) | `95.89% <ø> (ø)` | | | [pytorch\_transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX29wZW5haS5weQ==) | `82.2% <ø> (ø)` | | | [pytorch\_transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `96.66% <ø> (ø)` | | | [pytorch\_transformers/tests/modeling\_gpt2\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfZ3B0Ml90ZXN0LnB5) | `84.21% <ø> (ø)` | | | 
[pytorch\_transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbS5weQ==) | `82.05% <ø> (ø)` | | | [...torch\_transformers/tests/tokenization\_gpt2\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2dwdDJfdGVzdC5weQ==) | `96.87% <ø> (ø)` | | | [...h\_transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `100% <ø> (ø)` | | | [...ytorch\_transformers/tests/tokenization\_xlm\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3hsbV90ZXN0LnB5) | `96.77% <ø> (ø)` | | | ... and [55 more](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=footer). Last update [c304593...8ad7e5b](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for the fix - it's working now :) Sorry, if that's not the right place here: I've implemented a new embedding class in `flair`. But when the complete model is going to be saved (via `torch.save`), the following error message is shown: ```bash Traceback (most recent call last): File "train_xlnet.py", line 36, in <module> max_epochs=500) File "/mnt/flair/flair/trainers/trainer.py", line 341, in train self.model.save(base_path / "best-model.pt", pickle_module=self.pickle_module) File "/mnt/flair/flair/nn.py", line 86, in save self.save_torch_model(model_state, str(model_file), pickle_module) File "/mnt/flair/flair/nn.py", line 76, in save_torch_model torch.save(model_state, str(model_file), pickle_protocol=pickle_protocol) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 224, in save return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol)) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 149, in _with_file_like return body(f) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 224, in <lambda> return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol)) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 297, in _save pickler.dump(obj) TypeError: can't pickle SwigPyObject objects ``` This error message does not appear, when e.g. using the GPT1 embeddings. Do you have any hint, what's going wrong here 🤔 <|||||>Am I correct that model.eval() does not work with this yet? It seems all predictions generated are the same. 
This is how im doing it: ``` config = XLNetConfig('config.json') model = XLNetForSequenceClassification(config, num_labels=3) model.load_state_dict(torch.load("xlnet_pytorch.bin")) model.to(device) for param in model.parameters(): param.requires_grad = False model.eval() ``` Or am i doing something wrong?<|||||>@thomwolf I think this is branch still WIP yes? Because the `run_xlnet_squad.py` still refers to BertForQuestionAnswering. SHould I just change that to XLnetForQuestionAnswering and it should work? or maybe I should wait more?<|||||>Yeah, I'm still on it, finishing the tests of XLNetForSequenceClassification so you should wait more (or help me code the XLNetForQuestionAnswering, haha). I'll make the description of the PR more explicit that only the base model is up now and `XLNetForSequenceClassification` and `XLNetForQuestionAnswering` are not ready yet.<|||||>I think I found the root cause of the serialization problem (described in one of the previous comments here): The sentencepiece processor object cannot be correctly serialized. I found a similar issue in the xnmt library: https://github.com/neulab/xnmt/pull/351<|||||>> Yeah, I'm still on it, finishing the tests of XLNetForSequenceClassification so you should wait more (or help me code the XLNetForQuestionAnswering, haha). I've started to work on it with whatever time I've :( So far I'm getting an error in the `modeling_xlnet.py` in line 752: ` attention_mask = attention_mask.transpose(0, 1).contiguous() if attention_mask is not None else None` Whoever finishes first should let here know..But appreciate your work here @thomwolf.<|||||>@stefan-it do you know if there is a workaround to the sentencepiece serialization issue?<|||||>@thomwolf I think this can be fixed with: ```python def __getstate__(self): state = self.__dict__.copy() state["sp_model"] = None return state def __setstate__(self, d): self.__dict__ = d try: import sentencepiece as spm except ImportError: logger.warning("You need to install SentencePiece to use XLNetTokenizer: https://github.com/google/sentencepiece" "pip install sentencepiece") self.sp_model = spm.SentencePieceProcessor() self.sp_model.Load(self.vocab_file) ``` In the `XLNetTokenizer` class. A nice test case would be: ```python import pickle from pytorch_pretrained_bert import XLNetTokenizer tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased') text = "Munich and Berlin are nice cities" filename = "tokenizer.bin" subwords = tokenizer.tokenize(text) pickle.dump(tokenizer, open(filename, "wb")) tokenizer_new = pickle.load(open(filename, "rb")) subwords_loaded = tokenizer_new.tokenize(text) assert subwords == subwords_loaded ```<|||||>I'm hitting a bug where the head mask created here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/xlnet/pytorch_pretrained_bert/modeling_xlnet.py#L854 is a list of None values instead of just None, which eventually results in an error: ``` File "../pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling_xlnet.py", line 397, in rel_attn_core attn_prob = attn_prob * head_mask TypeError: mul(): argument 'other' (position 1) must be Tensor, not list ``` I think the fix is to either make the mask a single None value, or to index it with the layer number when it's used. 
---- I also ran into one more error: ``` File "../pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling_xlnet.py", line 555, in forward outputs = [output_h, output_g] + outputs[2:] # Add again attentions if there are there TypeError: can only concatenate list (not "tuple") to list ``` Not sure what's causing this, but casting `outputs[2:]` to a list seems to fix things.<|||||>@stefan-it Great, thanks!<|||||>@nikitakit Yeah I'm refactoring the API among models to make it more consistent/simpler to switch among models. Will be finished soon.<|||||>While modifying `run_xlnet_squad.py` I looked through the `run_squad.py` of the xlnet repo. It seems he made a bunch of changes to `convert_examples_to_features` (adding lcs etc.) Also, I didn't see that he put the cls at the end; rather p before than q (I think; hope I'm not wrong). Have you started to work on it @thomwolf ? It's interesting :) <|||||>One quick update in case someone else is also working on the Squad XLnet fine tuning. I got this error and working on it seems that the end logits in the QA layer is buggy. `Traceback (most recent call last): File "/dccstor/avistor4/squad/expts/xlnet_branch_hf/pytorch-pretrained-BERT/examples/run_squad.py", line 478, in <module> main() File "/dccstor/avistor4/squad/expts/xlnet_branch_hf/pytorch-pretrained-BERT/examples/run_squad.py", line 470, in main result = evaluate(args, model, tokenizer, prefix=global_step) File "/dccstor/avistor4/squad/expts/xlnet_branch_hf/pytorch-pretrained-BERT/examples/run_squad.py", line 219, in evaluate args.null_score_diff_threshold) File "/dccstor/avistor4/squad/expts/xlnet_branch_hf/pytorch-pretrained-BERT/examples/utils_squad.py", line 463, in write_predictions feature_null_score = result.start_logits[0] + result.end_logits[0] TypeError: unsupported operand type(s) for +: 'float' and `'list'` <|||||>Is there any rough estimation as to when "a finetuning example with results close to TF" will be available?<|||||>> > > Thanks for the fix - it's working now :) > > Sorry, if that's not the right place here: > I've implemented a new embedding class in `flair`. But when the complete model is going to be saved (via `torch.save`), the following error message is shown: > > ```shell > Traceback (most recent call last): > File "train_xlnet.py", line 36, in <module> > max_epochs=500) > File "/mnt/flair/flair/trainers/trainer.py", line 341, in train > self.model.save(base_path / "best-model.pt", pickle_module=self.pickle_module) > File "/mnt/flair/flair/nn.py", line 86, in save > self.save_torch_model(model_state, str(model_file), pickle_module) > File "/mnt/flair/flair/nn.py", line 76, in save_torch_model > torch.save(model_state, str(model_file), pickle_protocol=pickle_protocol) > File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 224, in save > return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol)) > File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 149, in _with_file_like > return body(f) > File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 224, in <lambda> > return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol)) > File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 297, in _save > pickler.dump(obj) > TypeError: can't pickle SwigPyObject objects > ``` > > This error message does not appear, when e.g. using the GPT1 embeddings. Do you have any hint, what's going wrong here 🤔 Hi, have you solved this issue? 
I'm trying to implement XLMRoberta in my model based on flair and meet the same issue when saving the model.
transformers
710
closed
A way to increase input length limitation?
Hi, is there a way to increase the input length limitation of 512 tokens? Maybe there is something to change in the code?
06-21-2019 12:33:32
06-21-2019 12:33:32
No way as far as I can tell, this is a fundamental limitation for absolute position pre-trained models (i.e. BERT, GPT, GPT-2)<|||||>Okay thank you!
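Since the position embeddings are fixed at 512, the usual workaround is to slide a window over the long input and run the model on overlapping chunks (this is essentially what the SQuAD example does with `doc_stride`). A generic chunking sketch, independent of any particular downstream task:
```python
def split_into_windows(token_ids, max_len=512, stride=384):
    """Yield overlapping windows of at most max_len tokens.

    Consecutive windows share (max_len - stride) tokens, so a span that is
    cut at one window boundary also appears intact in a neighbouring window.
    """
    if len(token_ids) <= max_len:
        yield token_ids
        return
    start = 0
    while start < len(token_ids):
        yield token_ids[start:start + max_len]
        if start + max_len >= len(token_ids):
            break
        start += stride

windows = list(split_into_windows(list(range(1300)), max_len=512, stride=384))
print([len(w) for w in windows])  # [512, 512, 512, 148]
```
Each window still needs its own special tokens ([CLS]/[SEP]), so the effective budget per window is slightly below 512.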
transformers
709
closed
layer_norm_eps
In [modeling.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/c304593d8fa93f25febe1458c63497a846749c89/pytorch_pretrained_bert/modeling.py#L303), why is `self.layer_norm_eps` used even though the config doesn't have this parameter? Check [here](https://github.com/google-research/bert#pre-trained-models). Or am I missing something?
06-21-2019 11:37:50
06-21-2019 11:37:50
Because some people wanted to configure this: https://github.com/huggingface/pytorch-pretrained-BERT/pull/585<|||||>So I have to add `config.layer_norm_eps = 1e-12` if I am taking the config from the link above ?<|||||>You don't need to, it's the default value when instantiating a `BertConfig` class.<|||||>When I printed `dir(bert_config)` I am not able to see layer_norm_eps. <|||||>I have figured out the problem. I have to initialize like this `BertConfig.from_json_file(config_file_path)` but rather I have done like this `BertConfig(config_file_path)`. My bad :disappointed: Thanks for the reply. Closing the issue.
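For anyone hitting the same confusion as in the last comment: as far as I can tell from `modeling.py`, passing a JSON path to the `BertConfig(...)` constructor only copies the keys present in that file onto the config, whereas `BertConfig.from_json_file` starts from a fully defaulted config and then overlays the file, so defaults like `layer_norm_eps` are preserved. A short sketch (the path is a placeholder):
```python
from pytorch_pretrained_bert.modeling import BertConfig

config_path = "uncased_L-12_H-768_A-12/bert_config.json"  # placeholder path

# Recommended: start from defaults, then overlay the JSON fields
config = BertConfig.from_json_file(config_path)
print(config.layer_norm_eps)  # 1e-12 by default, even though the Google JSON has no such key
                              # (assuming a version that includes the layer_norm_eps default)

# Constructor form: only the keys present in the JSON end up as attributes,
# which is why layer_norm_eps appeared to be "missing" in the discussion above.
# config = BertConfig(config_path)
```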
transformers
708
closed
Future attention masking in GPT/GPT-2?
Based on my understanding, [this](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_gpt2.py#L288) is the place where future attention masking for the causal model happens. If this is the case - Why is it called `bias`? - Why is this in the core of `Attention` module and not passed as a `attention_mask` parameter similar to BERT?
06-20-2019 22:00:07
06-20-2019 22:00:07
Hi Shubham, This is a legacy from the original Tensorflow code (https://github.com/openai/finetune-transformer-lm/blob/master/train.py#L64-L69).<|||||>Thanks for the link to the original reference.
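For context, the "bias" buffer referred to above is a pre-computed lower-triangular matrix that is multiplied into the attention weights so each position can only attend to earlier positions. A standalone sketch of that masking step (shapes simplified, not the exact GPT-2 code):
```python
import torch

n_ctx = 8  # context window for this toy example
# Registered once as a buffer inside the Attention module ("bias" in the GPT code)
causal_mask = torch.tril(torch.ones(n_ctx, n_ctx)).view(1, 1, n_ctx, n_ctx)

scores = torch.randn(1, 1, n_ctx, n_ctx)   # raw attention scores q @ k^T / sqrt(d)
nd, ns = scores.size(-2), scores.size(-1)
b = causal_mask[:, :, ns - nd:ns, :ns]
masked = scores * b - 1e10 * (1 - b)       # future positions pushed to a huge negative value
probs = torch.softmax(masked, dim=-1)
print(probs[0, 0, 0])                      # the first position attends only to itself
```
It lives inside the Attention module rather than being passed in because it depends only on the fixed context size, unlike BERT's per-example padding mask, which varies with the input.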
transformers
707
closed
Update run_squad.py
model = BertForQuestionAnswering.from_pretrained(args.bert_model) is written twice. I think the else part is redundant there
06-20-2019 19:15:44
06-20-2019 19:15:44
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=h1) Report > Merging [#707](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #707 +/- ## ======================================= Coverage 62.27% 62.27% ======================================= Files 18 18 Lines 3979 3979 ======================================= Hits 2478 2478 Misses 1501 1501 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=footer). Last update [c304593...620f2c1](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Yes, that's for clarity
transformers
706
closed
Update run_squad.py
redundant else part, model = BertForQuestionAnswering.from_pretrained(args.bert_model) is already written in a different line
06-20-2019 18:59:30
06-20-2019 18:59:30
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=h1) Report > Merging [#706](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **decrease** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #706 +/- ## ========================================== - Coverage 62.27% 62.22% -0.06% ========================================== Files 18 18 Lines 3979 3979 ========================================== - Hits 2478 2476 -2 - Misses 1501 1503 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <0%> (-1.07%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=footer). Last update [c304593...8910034](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||># [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=h1) Report > Merging [#706](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **decrease** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #706 +/- ## ========================================== - Coverage 62.27% 62.22% -0.06% ========================================== Files 18 18 Lines 3979 3979 ========================================== - Hits 2478 2476 -2 - Misses 1501 1503 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <0%> (-1.07%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=footer). Last update [c304593...8910034](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
705
closed
Implementing XLNet in pytorch
Due to the new work on [XLNet](arxiv.org/abs/1906.08237) and its TensorFlow implementation, maybe we should add it to the current repository. https://github.com/zihangdai/xlnet
06-20-2019 04:26:24
06-20-2019 04:26:24
FYI @roholazandie we are currently working on XLNet with pytorch over here https://github.com/pingpong-ai/XLNet-pytorch/tree/dev/poc<|||||>I'll add it here also. I was working on a coming release this week anyway. It's a mix of BERT/Transformer-XL and something I was also playing with (Two-Stream Self-Attention) so hopefully, it won't delay too much the release. <|||||>Awesome, downstream libraries like [flair](https://github.com/zalandoresearch/flair) are really looking forward to use XLNet 🤗 (So expect results on CoNLL and PoS tagging whenever XLNet is implemented here)<|||||>and [jiant](https://jiant.info)!<|||||>I had finished Simple XLNet implementation with Pytorch Wrapper here : https://github.com/graykode/xlnet-Pytorch<|||||>How is this project going? I'm debating whether to try to build XLNet into jiant directly, but I'm not eager to replicate your hard work. Tx!<|||||>Yes we're on track to finish the release this week I think (or next Monday in the worse case). We reproduced the results of XLNet on STS-B (Pearson R > 0.918), the GLUE task showcased on the TF repo, with the same hyper-parameters (didn't try the others tasks but the model is the same for all). It's taking a bit more time than planned because we took the opportunity to refactor the library's back-end. I'm really excited about the new release. We will now have 6 different architectures (BERT, GPT, GPT-2, Transformer-XL, XLNet and XLM) and over 25 associated pretrained weights, all with the same API (for instance the GLUE training script is now the same for all the models!) and direct access to all the models' internals. And a lot of other things (a lot of tests to avoid future regressions, compatibility with TorchScript, easy serialization and loading of fine-tuned models...)<|||||>> I'll add it here also. I was working on a coming release this week anyway. > It's a mix of BERT/Transformer-XL and something I was also playing with (Two-Stream Self-Attention) so hopefully, it won't delay too much the release. @thomwolf , very much looking forward to your implementation of BERT/Transformer-XL. Wondering if you were planning to release that too, and if so where you are planning to make it available. Thanks so much <|||||>Is there any direct wrapper for QA task using XLNet?<|||||>I appriciate your hard work. I just saw it today, and you almost implemented. Is it really better than Bert, which i find to be amazing so far?<|||||>Under the conditions in their paper, yes. Question is whether it always is: in what tasks, using the same data set; with same compute time (or energy consumption) for training (+on what hardware as same speed on hardware X doesn't imply same speed on hardware Y); with same number of parameters (model size); with architecture size for same prediction speed; consistently / on average (+confidence interval) for repeated runs with random initialisation...
transformers
704
closed
Adjust s3 german Bert file storage
As suggested, keeping model and config files on our s3. Thanks
06-19-2019 16:43:09
06-19-2019 16:43:09
transformers
703
closed
"Received 'killed' signal" during the circleci python3 build after submitting PR
I submitted a PR after modifying convert_gpt2_checkpoint_to_pytorch.py, and my build_py2 test passed, but I received a very vague error from build_py3 (as written in the title of this issue) that caused my build to fail. Does anyone have any ideas as to where the issue could be? Edit: Attached image below <img width="625" alt="Screen Shot 2019-06-19 at 3 48 25 PM" src="https://user-images.githubusercontent.com/39981081/59795692-c8f2e280-92a9-11e9-8aa1-77cb96c2db33.png">
06-19-2019 15:18:33
06-19-2019 15:18:33
Yeah I've removed the memory-heavy tests<|||||>Thanks!
transformers
702
closed
Add an argument --model_size to convert_gpt2_checkpoint_to_pytorch.py
Add an argument --model_size to convert_gpt2_checkpoint_to_pytorch.py that lets the user specify whether they want to convert a checkpoint from the 117M model or from the 345M model.
06-19-2019 15:04:07
06-19-2019 15:04:07
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Thanks I don't think we'll add this since converted models are already provided.
transformers
701
closed
Low SQuADv2 F1 & EM Score
Hi, I used the default settings and run_squad.py script (with the exception of a batch size of 4 since my GPU has low memory) to train for 3 epochs. Turns out I got an EM & F1 score of 43% and 48% respectively. AvNA looks decent at 65%. Is this due to a small number of epochs or the small batch size? Note that training for 3 epochs on my single 1070-TI took 12 hours. Thanks.
06-19-2019 14:59:55
06-19-2019 14:59:55
@thomwolf Hi, it seems there is something wrong with the training code in this repo. I used Google's official BERT training code and I could get decent results even with a small batch size: `{'EM': 73.80717341230668, 'F1': 77.11048422305339, 'AvNA': 80.78315235274762}` Here's the settings I used for Google's and this repo's: ``` --vocab_file=uncased_L-12_H-768_A-12/vocab.txt --bert_config_file=uncased_L-12_H-768_A-12/bert_config.json --init_checkpoint=uncased_L-12_H-768_A-12/bert_model.ckpt --do_train=True --train_file=../squad/data/train-v2.0.json --do_predict=True --predict_file=../squad/data/dev-v2.0.json --train_batch_size=5 --learning_rate=3e-5 --num_train_epochs=2.0 --max_seq_length=384 --doc_stride=128 --output_dir=gugel_bert --version_2_with_negative=True --do_lower_case=True ``` ``` --bert_model=bert-base-uncased --output_dir=try_bert_2 --train_file=data/train-v2.0.json --predict_file=data/dev-v2.0.json --do_train --do_predict --do_lower_case --train_batch_size=5 --predict_batch_size=5 --num_train_epochs=2.0 --learning_rate=3e-5 --version_2_with_negative ``` EDIT: Forgot to say that I also can get the expected results using SQuAD v1.1 with this repo's code.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
700
closed
Add an argument --model_size to convert_gpt2_checkpoint_to_pytorch.py
Add an argument --model_size to convert_gpt2_checkpoint_to_pytorch.py that lets the user specify whether they want to convert a checkpoint from the 117M model or from the 345M model so that they don't have to create their own 345M json config file.
06-19-2019 14:49:43
06-19-2019 14:49:43
transformers
699
closed
Fine tuning GPT-2 for LM objective function
I'm looking for finetuning GPT-2 parameters for a custom piece of text, so that the weights are tuned for this piece of text, building from the initial model. The script here does it for the original tensorflow implementation: https://github.com/nshepperd/gpt-2 , could you please give me suggestions on how to do this finetuning from the Pytorch version and subsequently use it for text generation? It'd be of great help, thanks!
06-19-2019 07:21:08
06-19-2019 07:21:08
Seems like this is now possible with last week's [merged PR](https://github.com/huggingface/pytorch-pretrained-BERT/pull/597), but I'm curious to see what the core devs say about this as well (btw, keep up the great work!)<|||||>I have the same question :) I have tried the codes for BERT finetuning which is in lm-finetuning folder but looking for the same script for gpt-2. Thanks<|||||>Yes fine-tuning GPT-2 is fixed with #597 indeed. I'll see if I can add an example but basically changing `gpt` to `gpt-2` in the gpt example should be pretty much fine.<|||||>@thomwolf Thanks for the great work. just wondering in order to do unsupervised LM fine-tuning (not classification) on a new dataset, should we just modify run_openai_gpt.py or is there an existing script for that?<|||||>No existing script for that but you can start from run_openai indeed and use just the `OpenAIGPTLMHeadModel`. If you want to supply another example, happy to welcome a PR<|||||>I am still confused as to how to use the run_openai_gpt.py to finetune gpt2 model. A short example would be helpful<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
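Until a dedicated example lands, the core of an LM fine-tuning loop with the GPT-2 head is short, because `GPT2LMHeadModel` returns the cross-entropy loss itself when `lm_labels` is supplied (that keyword reflects my reading of `modeling_gpt2.py`, so verify it against your installed version). A minimal sketch on a toy batch:
```python
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.train()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

text = "My custom corpus would be streamed in chunks here."
input_ids = torch.tensor([tokenizer.encode(text)])

for step in range(3):  # a real run would iterate over a dataset of text chunks
    # If your version does not shift the labels internally for next-token
    # prediction, shift them by one position yourself before computing the loss.
    loss = model(input_ids, lm_labels=input_ids)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(step, loss.item())
```
A real fine-tuning script would add batching, gradient accumulation, a warm-up schedule, and periodic checkpointing, but the loss computation itself stays this simple.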
transformers
698
closed
convert_gpt2_checkpoint_to_pytorch dimensions assertion error
I finetuned a GPT-2 model using TensorFlow (using https://github.com/nshepperd/gpt-2), and I tried to run the TF to PyTorch conversion script, but I got this error:
```
Traceback (most recent call last):
  File "convert_gpt2_checkpoint_to_pytorch.py", line 72, in <module>
    args.pytorch_dump_folder_path)
  File "convert_gpt2_checkpoint_to_pytorch.py", line 39, in convert_gpt2_checkpoint_to_pytorch
    load_tf_weights_in_gpt2(model, gpt2_checkpoint_path)
  File "/Users/UAC897/Documents/kepler/venv/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 90, in load_tf_weights_in_gpt2
    assert pointer.shape == array.shape
AssertionError: (torch.Size([2304]), (3072,))
```
Does anyone have any ideas?
06-18-2019 17:27:40
06-18-2019 17:27:40
Hey, I tried doing the same and was successful(in running the script, at least). What do you specify as your --gpt2_checkpoint_path as? (hope you have copied your finetuned checkpoints to preloaded 117M)? Update: Currently the output just stores a `config.json` and `pytorch_model.bin`. Suprisingly I don't see the vocab.txt and the merges.txt that they specify in the README. My tensorflow model checkpoint has a vocab.bpe but not merges.txt. So I'm stuck on how to proceed with the following error when running `run_gpt2.py`: ` We assumed '/home/code-base/pytorch-pretrained-BERT/pytorch_pretrained_bert/temp-model/' was a path or url but couldn't find files /home/code-base/pytorch-pretrained-BERT/pytorch_pretrained_bert/temp-model/vocab.json and /home/code-base/pytorch-pretrained-BERT/pytorch_pretrained_bert/temp-model/merges.txt at this path or url.` Since I'm unable to generate samples using these weights now, any ideas would be great!<|||||>I was able to fix my issue. I realized that the GPTConfig constructor used in convert_gpt2_checkpoint_to_pytorch.py is only for the 117M model, while I was trying to convert a 345M model. I ended up just making a new json file for the larger model. I never ran into your issues, however, as I didn't use run_gpt2.py.<|||||>> I was able to fix my issue. I realized that the GPTConfig constructor used in convert_gpt2_checkpoint_to_pytorch.py is only for the 117M model, while I was trying to convert a 345M model. I ended up just making a new json file for the larger model. I never ran into your issues, however, as I didn't use run_gpt2.py. So the new checkpoint files for you contain only a `config.json` and `pytorch_model.bin`?<|||||>> > I was able to fix my issue. I realized that the GPTConfig constructor used in convert_gpt2_checkpoint_to_pytorch.py is only for the 117M model, while I was trying to convert a 345M model. I ended up just making a new json file for the larger model. I never ran into your issues, however, as I didn't use run_gpt2.py. > > So the new checkpoint files for you contain only a `config.json` and `pytorch_model.bin`? Correct<|||||>How did you end up fixing this problem? What json file did you have to create? I'd also like to convert a 345M model and am running into the problem described in this issue.<|||||>@dsonbill Sorry, I honestly don't remember. My work was on the laptop I used at my old job, so I don't have access to the files anymore. 
You could check my really old fork though and maybe find something useful there.<|||||>Hacky solution by modifying this script: https://raw.githubusercontent.com/huggingface/transformers/master/src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py Before: ```py # Construct model if gpt2_config_file == "": config = GPT2Config() else: config = GPT2Config.from_json_file(gpt2_config_file) model = GPT2Model(config) ``` After: ```py # Construct model config = GPT2Config.from_pretrained('gpt2-medium') # Replace 'gpt2-medium' with whichever model spec you're converting model = GPT2Model(config) ```<|||||>> Hacky solution by modifying this script: https://raw.githubusercontent.com/huggingface/transformers/master/src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py > > Before: > > ```python > # Construct model > if gpt2_config_file == "": > config = GPT2Config() > else: > config = GPT2Config.from_json_file(gpt2_config_file) > model = GPT2Model(config) > ``` > > After: > > ```python > # Construct model > config = GPT2Config.from_pretrained('gpt2-medium') # Replace 'gpt2-medium' with whichever model spec you're converting > model = GPT2Model(config) > ``` Hi, this doesn't work on me. I'm facing the same problem. Does anyone have any ideas?<|||||>> > Hacky solution by modifying this script: https://raw.githubusercontent.com/huggingface/transformers/master/src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py > > Before: > > ```python > > # Construct model > > if gpt2_config_file == "": > > config = GPT2Config() > > else: > > config = GPT2Config.from_json_file(gpt2_config_file) > > model = GPT2Model(config) > > ``` > > > > > > After: > > ```python > > # Construct model > > config = GPT2Config.from_pretrained('gpt2-medium') # Replace 'gpt2-medium' with whichever model spec you're converting > > model = GPT2Model(config) > > ``` > > Hi, this doesn't work on me. I'm facing the same problem. Does anyone have any ideas? I could get past the initial error by setting the config to be hparams.json ```export OPENAI_GPT2_CHECKPOINT_PATH=gpt2/355M transformers-cli convert --model_type gpt2 \ --tf_checkpoint $OPENAI_GPT2_CHECKPOINT_PATH/model.ckpt \ --pytorch_dump_output $OPENAI_GPT2_CHECKPOINT_PATH \ --config $OPENAI_GPT2_CHECKPOINT_PATH/hparams.json ``` which seems to create the model checkpoint. I haven't check to see if the converted model works yet though.
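To make the fix described above a bit more concrete, here is a hedged sketch of building a config that matches the 345M model before loading the TF weights. The checkpoint path is a placeholder, and the hyperparameters follow OpenAI's published 345M settings; this is an illustration, not the official conversion script.

```python
# Sketch: convert a finetuned 345M TF checkpoint by constructing a matching
# GPT2Config instead of the default (117M) one. The checkpoint path is a placeholder.
from pytorch_pretrained_bert import GPT2Config, GPT2Model
from pytorch_pretrained_bert.modeling_gpt2 import load_tf_weights_in_gpt2

config = GPT2Config(
    vocab_size_or_config_json_file=50257,
    n_positions=1024,
    n_ctx=1024,
    n_embd=1024,   # 768 in the 117M model
    n_layer=24,    # 12 in the 117M model
    n_head=16,     # 12 in the 117M model
)
model = GPT2Model(config)
load_tf_weights_in_gpt2(model, "path/to/345M/model.ckpt")  # requires tensorflow installed
```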
transformers
697
closed
Updating examples
This PR checks that the examples are working well (and fixes a learning rate bug in distributed settings)
Also:
- prepare 2 fine-tuned models on SQuAD (BERT Whole Word Masking) so people can also use fine-tuned models (nice performance: "exact_match": 86.9, "f1": 93.2, better than the original Google AI values)
- add a bertology script which showcases:
   * computing head attention entropy
   * computing head importance scores according to http://arxiv.org/abs/1905.10650
   * performing head masking and head pruning (like masking, but you actually remove the weights) according to http://arxiv.org/abs/1905.10650
06-18-2019 14:30:41
06-18-2019 14:30:41
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=h1) Report > Merging [#697](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/3763f8944dc3fef8afb0c525a2ced8a04889c14f?src=pr&el=desc) will **decrease** coverage by `6%`. > The diff coverage is `62.5%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #697 +/- ## ========================================== - Coverage 68.23% 62.22% -6.01% ========================================== Files 18 18 Lines 3976 3979 +3 ========================================== - Hits 2713 2476 -237 - Misses 1263 1503 +240 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/tokenization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uLnB5) | `89.04% <ø> (-3.66%)` | :arrow_down: | | [pytorch\_pretrained\_bert/modeling\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `68.11% <50%> (-12.11%)` | :arrow_down: | | [pytorch\_pretrained\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `79.49% <66.66%> (-9.06%)` | :arrow_down: | | [pytorch\_pretrained\_bert/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `69.79% <66.66%> (-12.05%)` | :arrow_down: | | [pytorch\_pretrained\_bert/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `52.93% <0%> (-6.29%)` | :arrow_down: | | [pytorch\_pretrained\_bert/tokenization\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX29wZW5haS5weQ==) | `75.64% <0%> (-5.7%)` | :arrow_down: | | ... and [2 more](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=footer). Last update [3763f89...411981a](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
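To make the head-entropy part of the bertology script described above a little more concrete, here is a rough sketch of the computation. It assumes you already have per-layer attention probabilities available (e.g. from a model variant that returns them), which is not the default output of the models in older releases.

```python
import torch

def attention_entropy(attn_probs):
    """attn_probs: (batch, num_heads, seq_len, seq_len) attention probabilities of one layer.
    Returns the average attention entropy per head, shape (num_heads,)."""
    eps = 1e-9
    ent = -(attn_probs * torch.log(attn_probs + eps)).sum(dim=-1)  # (batch, heads, seq_len)
    return ent.mean(dim=2).mean(dim=0)

# Random "attention" weights just to illustrate the expected shapes.
probs = torch.softmax(torch.randn(2, 12, 16, 16), dim=-1)
print(attention_entropy(probs))  # one entropy value per head
```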
transformers
696
closed
Split config weights
Split config and weights files for Bert as well (this was previously only done for GPT/GPT-2/Transformer-XL). This will:
- make the Bert model instantiation faster (no need to untar an archive)
- simplify distributed training (no need to have one archive for each process).
06-18-2019 09:21:40
06-18-2019 09:21:40
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=h1) Report > Merging [#696](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/a6f2511811f08c24184f8162f226f252cb6ceaa4?src=pr&el=desc) will **decrease** coverage by `0.13%`. > The diff coverage is `68.18%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #696 +/- ## ========================================== - Coverage 68.37% 68.23% -0.14% ========================================== Files 18 18 Lines 3990 3976 -14 ========================================== - Hits 2728 2713 -15 - Misses 1262 1263 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/modeling\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `80.21% <100%> (-0.09%)` | :arrow_down: | | [pytorch\_pretrained\_bert/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `81.83% <100%> (-0.08%)` | :arrow_down: | | [pytorch\_pretrained\_bert/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `59.21% <100%> (-0.12%)` | :arrow_down: | | [pytorch\_pretrained\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `88.55% <56.25%> (-0.61%)` | :arrow_down: | | [pytorch\_pretrained\_bert/tokenization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uLnB5) | `92.69% <0%> (+0.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=footer). Last update [a6f2511...f964753](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
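As a usage note on the split format, saving and reloading a model from two plain files could look roughly like the sketch below. The file names follow the ones used in the examples (`pytorch_model.bin`, `bert_config.json`) and may differ in later releases, and the output directory is a placeholder.

```python
# Sketch: save a model as split config/weights files and rebuild it without any archive.
import os
import torch
from pytorch_pretrained_bert import BertConfig, BertModel

model = BertModel.from_pretrained("bert-base-uncased")

out_dir = "my_model_dir"  # placeholder
os.makedirs(out_dir, exist_ok=True)
torch.save(model.state_dict(), os.path.join(out_dir, "pytorch_model.bin"))
with open(os.path.join(out_dir, "bert_config.json"), "w") as f:
    f.write(model.config.to_json_string())

# Later: rebuild the model from the two files.
config = BertConfig.from_json_file(os.path.join(out_dir, "bert_config.json"))
reloaded = BertModel(config)
reloaded.load_state_dict(torch.load(os.path.join(out_dir, "pytorch_model.bin"), map_location="cpu"))
reloaded.eval()
```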
transformers
695
closed
BERT output not deterministic
BERT's output is not deterministic. I expect the output values to be deterministic when I feed in the same input, but with my BERT model the values keep changing. Strangely, the outputs alternate between two values: one value comes out, then a different one, then the first one again, and this pattern repeats. How can I make the output deterministic? Let me show snippets of my code. I use the model as below.
```
tokenizer = BertTokenizer.from_pretrained(self.bert_type, do_lower_case=self.do_lower_case, cache_dir=self.bert_cache_path)
pretrain_bert = BertModel.from_pretrained(self.bert_type, cache_dir=self.bert_cache_path)
bert_config = pretrain_bert.config
```
I get the output like this
```
all_encoder_layer, pooled_output = self.model_bert(all_input_ids, all_segment_ids, all_input_mask)

# all_encoder_layer: BERT outputs from all layers.
# pooled_output: output of [CLS] vec.
```

pooled_output

```
tensor([[-3.3997e-01, 2.6870e-01, -2.8109e-01, -2.0018e-01, -8.6849e-02,

tensor([[ 7.4340e-02, -3.4894e-03, -4.9583e-03, 6.0806e-02, 8.5685e-02,

tensor([[-3.3997e-01, 2.6870e-01, -2.8109e-01, -2.0018e-01, -8.6849e-02,

tensor([[ 7.4340e-02, -3.4894e-03, -4.9583e-03, 6.0806e-02, 8.5685e-02,
```

For all_encoder_layer the situation is the same: the values alternate between the same two results.

I also extract word-embedding features from BERT, and the situation is the same.
```
wemb_n
tensor([[[ 0.1623, 0.4293, 0.1031, ..., -0.0434, -0.5156, -1.0220],

tensor([[[ 0.0389, 0.5050, 0.1327, ..., 0.3232, 0.2232, -0.5383],

tensor([[[ 0.1623, 0.4293, 0.1031, ..., -0.0434, -0.5156, -1.0220],

tensor([[[ 0.0389, 0.5050, 0.1327, ..., 0.3232, 0.2232, -0.5383],
```
06-17-2019 23:07:59
06-17-2019 23:07:59
As with all the other issues about Bert being not deterministic (#403, #679, #432, #475, #265, #278), it's likely because you didn't set the model in eval mode to deactivate the DropOut modules: `model.eval()`. I will try to emphasize this more in the README examples because this issue keeps being raised.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> Epoch 1/6
> loss: 2.0674 - bert_loss: 1.0283 - bert_1_loss: 1.0390 - bert_accuracy: 0.6604 - bert_1_accuracy: 0.6650
> 
> Epoch 2/6
> loss: 1.7190 - bert_loss: 0.8604 - bert_1_loss: 0.8586 - bert_accuracy: 0.7000 - bert_1_accuracy: 0.7081
> 
> Epoch 3/6
> loss: 1.5244 - bert_loss: 0.7715 - bert_1_loss: 0.7528 - bert_accuracy: 0.7250 - bert_1_accuracy: 0.7424
> 
> Epoch 4/6
> loss: 1.3203 - bert_loss: 0.6765 - bert_1_loss: 0.6438 - bert_accuracy: 0.7585 - bert_1_accuracy: 0.7741
> 
> Epoch 5/6
> loss: 1.1102 - bert_loss: 0.5698 - bert_1_loss: 0.5404 - bert_accuracy: 0.7936 - bert_1_accuracy: 
> 0.8082 - val_loss: 0.7052 - val_bert_loss: 0.3709 - val_bert_1_loss: 0.3343 - val_bert_accuracy: 0.8687 - val_bert_1_accuracy: 0.8803 
> Epoch 6/6
> ETA: 0s - loss: 0.9269 - bert_loss: 0.4823 - bert_1_loss: 0.4446 - bert_accuracy: 0.8287 - bert_1_accuracy: 0.8452 
> bert_loss: 0.4823 - bert_1_loss: 0.4446 - bert_accuracy: 0.8287 - bert_1_accuracy: 0.8452`

I have the same problem in TensorFlow, and I configured the model to apply Dropout only during the training phase (training=True). But I still get random outputs after each prediction. As you can see, performance improves during the training phase, so I guess the problem is in the prediction step.
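A minimal sketch of the suggested fix, showing that two forward passes agree once dropout is disabled (the model name and input text are just examples):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()  # disables DropOut, which is what makes repeated forward passes differ

tokens = tokenizer.tokenize("[CLS] a deterministic example [SEP]")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    _, pooled_1 = model(input_ids)
    _, pooled_2 = model(input_ids)

print(torch.allclose(pooled_1, pooled_2))  # True once the model is in eval mode
```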
transformers
694
closed
Release 0.6.3
Preparing release 0.6.3 - adding Bert whole word masking models - BERTology: - add head masking, head pruning and optional output of multi-head attention output gradients - output all layers hidden states in GPT/GPT-2 - PyTorch Hub: adding and checking all the models - various clean-ups and doc/test improvements
06-17-2019 10:18:31
06-17-2019 10:18:31
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=h1) Report > Merging [#694](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/80684f6f86c13a89fc1e4feac248ef96b013765c?src=pr&el=desc) will **increase** coverage by `1.17%`. > The diff coverage is `95.76%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #694 +/- ## ========================================== + Coverage 67.19% 68.37% +1.17% ========================================== Files 18 18 Lines 3847 3990 +143 ========================================== + Hits 2585 2728 +143 Misses 1262 1262 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `59.33% <ø> (ø)` | :arrow_up: | | [pytorch\_pretrained\_bert/tokenization\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX29wZW5haS5weQ==) | `81.34% <ø> (ø)` | :arrow_up: | | [pytorch\_pretrained\_bert/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `32.59% <ø> (ø)` | :arrow_up: | | [pytorch\_pretrained\_bert/tokenization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uLnB5) | `91.78% <ø> (-0.92%)` | :arrow_down: | | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <ø> (+1.06%)` | :arrow_up: | | [pytorch\_pretrained\_bert/modeling\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `80.3% <93.65%> (+2%)` | :arrow_up: | | [pytorch\_pretrained\_bert/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `81.91% <94.93%> (+2.52%)` | :arrow_up: | | [pytorch\_pretrained\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `89.16% <97.87%> (+0.59%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=footer). 
Last update [80684f6...4447f27](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
693
closed
Have no GPU to train language modelling
Sorry for opening this issue; it is not really an issue with this repository itself. I very much appreciate that the authors created this repository, which helps us better understand how BERT works and how to apply it to several tasks. My problem is that I have no GPU to train a language model. I have an Indonesian dataset (about 2GB) that is suitable for language-model training with this repo. Could anyone help me train on this dataset? If you can help, you have permission to open-source or use the trained model. I hope this will lead to more models being provided and make the NLP community more interested in the latest NLP models, especially for Indonesian. You can email me directly at [email protected] or comment below. Thank you very much
06-17-2019 03:28:51
06-17-2019 03:28:51
[You can train a tensorflow model using google colab for free](https://github.com/google-research/bert#using-bert-in-colab). After training it, you can [convert your tf model to pytorch](https://github.com/huggingface/pytorch-pretrained-BERT#command-line-interface). <|||||>Or use the 300 USD credit for Google Cloud that you get when you sign up, I believe.<|||||>Thank you @oliverguhr and @Oxi84 for the suggestions. I have tried both methods. Using Google Colab with a GPU runtime, it took about 240 hours per epoch (maybe it would be faster with apex, but I think still hundreds of hours), and I think it's impossible to run Colab for dozens of days. I got a free trial for GCP, but unfortunately Google does not provide GPUs in the free-trial version. I tried training on GCP with 2 CPUs and 13 GB RAM, and it would take about 200,000 hours of training, which is far too long. Maybe I should reduce the corpus size? Thanks<|||||>Or a smaller vocabulary. I am pretty sure you can even use a TPU on Google Cloud; let someone else confirm that.<|||||>@Oxi84 For my classification task, I noticed that training the model with just 40 MB of data already gives me pretty good results. Training with the full 1.5 GB of my dataset improves the results by just 2-3% accuracy. So you might start with a (random) subset of your data, increase the size step by step, and see if your scores get better. <|||||>Oh, nice insight @oliverguhr, thank you. I will try reducing the training data and train.
transformers
692
closed
Include a reference on in-domain LM pre-training for BERT
From https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning#introduction > As such, it's hard to predict what effect this step will have on final model performance, but it's reasonable to conjecture that this approach can improve the final classification performance, especially when a large unlabelled corpus from the target domain is available, labelled data is limited, or the target domain is very unusual and different from 'normal' English text. > If you are aware of any literature on this subject, please feel free to add it in here, or open an issue and tag me (@Rocketknight1) and I'll include it. Hi @Rocketknight1 this paper https://arxiv.org/pdf/1905.05583.pdf studies within-task and within-domain pre-training for BERT in section 5.4 and they achieve a good boost from it.
06-16-2019 07:24:09
06-16-2019 07:24:09
Ah, thank you very much for this! I'll read over the paper and include it as a reference soon.<|||||>Finally read it once I had some free time at the weekend and added PR #715. Thank you!<|||||>Thank you @Rocketknight1 !<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
691
closed
import class "GPT2MultipleChoiceHead"
06-15-2019 13:19:56
06-15-2019 13:19:56
# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=h1) Report > Merging [#691](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/b3f9e9451b3f999118f2299229bb13f2f691c48f?src=pr&el=desc) will **increase** coverage by `0.16%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #691 +/- ## ========================================== + Coverage 67.08% 67.24% +0.16% ========================================== Files 18 18 Lines 3846 3847 +1 ========================================== + Hits 2580 2587 +7 + Misses 1266 1260 -6 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_pretrained\_bert/\_\_init\_\_.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvX19pbml0X18ucHk=) | `100% <ø> (ø)` | :arrow_up: | | [pytorch\_pretrained\_bert/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `32.59% <0%> (+0.18%)` | :arrow_up: | | [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+0.53%)` | :arrow_up: | | [pytorch\_pretrained\_bert/optimization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvb3B0aW1pemF0aW9uLnB5) | `74.26% <0%> (+0.73%)` | :arrow_up: | | [pytorch\_pretrained\_bert/tokenization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uLnB5) | `92.69% <0%> (+0.91%)` | :arrow_up: | | [pytorch\_pretrained\_bert/file\_utils.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvZmlsZV91dGlscy5weQ==) | `67.78% <0%> (+1.34%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=footer). Last update [b3f9e94...8289646](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks @vanche!
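A quick illustration of what this one-line PR enables, namely importing the class from the package root instead of reaching into the submodule (the class name is taken from the PR title; treat the exact import paths as an assumption for your installed version):

```python
# After this change, both imports should resolve to the same class.
from pytorch_pretrained_bert import GPT2MultipleChoiceHead
from pytorch_pretrained_bert.modeling_gpt2 import GPT2MultipleChoiceHead as FromSubmodule

assert GPT2MultipleChoiceHead is FromSubmodule
```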
transformers
690
closed
Transformer XL ProjectedAdaptiveLogSoftmax output fix
Fixes the return value of the `ProjectedAdaptiveLogSoftmax` layer for Transformer-XL when it is a standard softmax without cutoffs (n_clusters=0).
06-15-2019 02:13:41
06-15-2019 02:13:41
Perfect, thanks @shashwath94!
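A hedged sketch of the behaviour this fix targets, assuming the layer's usual constructor and forward signatures in `modeling_transfo_xl_utilities`; the dimensions and shapes below are illustrative only, not a specification.

```python
# Sketch: ProjectedAdaptiveLogSoftmax used as a plain softmax (no cutoffs, n_clusters == 0).
import torch
from pytorch_pretrained_bert.modeling_transfo_xl_utilities import ProjectedAdaptiveLogSoftmax

n_token, d_embed, d_proj = 1000, 32, 32
softmax = ProjectedAdaptiveLogSoftmax(n_token, d_embed, d_proj, cutoffs=[], div_val=1)

hidden = torch.randn(8, d_proj)           # (num_positions, d_proj)
target = torch.randint(0, n_token, (8,))  # (num_positions,)

nll = softmax(hidden, target)   # with a target: per-position negative log-likelihood
log_probs = softmax(hidden)     # without a target: full (num_positions, n_token) log-probs
print(nll.shape, log_probs.shape)
```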
transformers
689
closed
Failing to run pregenerate_training_data.py & finetune_on_pregenerated.py
Hi, I would like to fine-tune BERT using my own data.
```
readonly model=bert-base-multilingual-cased
export PYTORCH_PRETRAINED_BERT_CACHE=.
#[https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning](BERT Model Finetuning using Masked Language Modeling objective)
# pytorch-pretrained-bert convert_tf_checkpoint_to_pytorch $bert_model_home/multi_cased_L-12_H-768_A-12/{bert_model.ckpt,bert_config.json} bert-base-multilingual-cased
#zcat --force corpora/*.{en,fr} > my_corpus.txt
#zcat --force corpora/unannotated_seq.{en,fr} > my_corpus.txt
mkdir -p training
#Pregenerating training data
python3 pytorch-pretrained-BERT/examples/lm_finetuning/pregenerate_training_data.py \
--train_corpus my_corpus.txt \
--bert_model $model \
--output_dir training/ \
--epochs_to_generate 3 \
--max_seq_len 256
mkdir -p finetuned_lm
#Training on pregenerated data
python3 pytorch-pretrained-BERT/examples/lm_finetuning/finetune_on_pregenerated.py \
--pregenerated_data training/ \
--bert_model $model \
--output_dir finetuned_lm/ \
--epochs 3
```
Then I get
```
Model name 'bert-base-multilingual-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt' was a path or url but couldn't find any file associated to this path or url.
Traceback (most recent call last):
File "pytorch-pretrained-BERT/examples/lm_finetuning/pregenerate_training_data.py", line 338, in <module>
main()
File "pytorch-pretrained-BERT/examples/lm_finetuning/pregenerate_training_data.py", line 293, in main
vocab_list = list(tokenizer.vocab.keys())
AttributeError: 'NoneType' object has no attribute 'vocab'
No training data was found!
```
```
wc my_corpus.txt
390400 my_corpus.txt
```
Why is `bert-base-multilingual-cased` not found in `(bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese)` when it is clearly there? Why am I getting a NoneType for the vocab? I thought it was supposed to be downloaded automatically if missing. What does `bert-base-multilingual-cased` represent? Should I have converted a TF model to a PyTorch model beforehand and named it `bert-base-multilingual-cased`?
06-14-2019 21:09:57
06-14-2019 21:09:57
Your server probably can't reach AWS to download the models. I need to make these error messages clearer; they currently conflate several failure cases. Will do that in the coming release next week.<|||||>What should I do if I have downloaded the package manually?<|||||>If you download them manually, you will have to figure out what their file names in the cache are. My problem was that on the cluster I'm using, worker nodes don't have access to the internet but the interactive nodes do. It's when I was tracing the code on an interactive node that I downloaded the models. Then, to run on a worker node, I define `export PYTORCH_PRETRAINED_BERT_CACHE=$BERT_MODEL_HOME/pytorch_pretrained_bert`, which obviously points to my cache. Back to downloading manually: you will have to "properly" name your model. Looking at my cache, I see ` a803ce83ca27fecf74c355673c434e51c265fb8a3e0e57ac62a80e38ba98d384.681017f415dfb33ec8d0e04fe51a619f3f01532ecea04edbfd48c5d160550d9c ` which is actually `bert-base-cased.tar.gz`. The name of the model's file is based on some web ID (sorry, not familiar with this).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
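For anyone hitting the same connectivity issue, a common workaround is to copy the files over by hand and point the scripts at a local directory instead of a model name. A rough sketch, where the directory path is a placeholder and the expected file names are `vocab.txt`, `bert_config.json` and `pytorch_model.bin` (newer releases may use `config.json`):

```python
# Sketch: load tokenizer and model from a local directory instead of downloading from AWS.
# The directory should contain vocab.txt, bert_config.json and pytorch_model.bin,
# copied over from a machine that does have internet access.
from pytorch_pretrained_bert import BertTokenizer, BertModel

local_dir = "/path/to/bert-base-multilingual-cased"  # placeholder
tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertModel.from_pretrained(local_dir)
```

The same directory can then be passed to the fine-tuning scripts via `--bert_model`.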
transformers
688
closed
Add German Bert model to code, update readme
We have been training a German BERT model from scratch on some 12 GB of clean text. It outperforms the multilingual BERT (cased + uncased) in 4 out of 5 German NLP tasks.
![deepset_performance](https://user-images.githubusercontent.com/3264870/59521557-d9cbde80-8ecc-11e9-87cf-8af9f141201c.png)
Furthermore, we evaluated the pre-training not just by observing the training loss, but also by continuous downstream-task checks.
For more details on our experiments you can check our official post here: https://deepset.ai/german-bert
Code-wise we only had to make a few adaptations to the dictionaries for model, vocab and sequence length. We also added our evaluation post to the README, because we believe it might be interesting for the community.
06-14-2019 15:59:19
06-14-2019 15:59:19
This looks great @Timoeller – do you have an estimate for the compute power you used to train your model? UPDATE. Ok the answer is in the blogpost: https://deepset.ai/german-bert > We trained using Google's Tensorflow code on a single cloud TPU v2 with standard settings. > We trained 840k steps with a batch size of 1024 for sequence length 128. Training took about 9 days. <|||||>Sorry, I just realized we made an oversimplification there. We of course trained the last 30k steps with a longer sequence length. I updated the article accordingly: We trained 810k steps with a batch size of 1024 for sequence length 128 and 30k steps with sequence length 512. Training took about 9 days. <|||||>Looks great, thanks a lot @Timoeller!<|||||>@Timoeller (and @tholor also I guess): in the coming release 0.6.3, I'm switching to a split file format for Bert (like already done in GPT/GPT-2/Transformer-XL) in which we separately store config and weights files on the S3 to avoid having to untar an archive at each instantiation of the model. In the short term I'll be storing your model's files on our s3 but you can also split the archive yourself and I can switch back to your s3 if you would like to.<|||||>Correctly guessed. @tholor and I are working together. I also created another PR for the updated file locations.<|||||>Hello, could you please share how you generated the vocab list? <|||||>This is the code we used. There was a special "," symbol at index 0, which unintentionally got used as the padding token. So we swapped the first two strings in the vocab.txt and created an "[unused3001]" token. See also the discussion in: https://github.com/huggingface/pytorch-transformers/issues/778 Hope that helps.
```
import sentencepiece as spm
import numpy as np
import pandas as pd

# INPUT_FILE, TEMP_FILE, VOCAB_SIZE and UNUSED_TOKENS are defined elsewhere in our script.
spm.SentencePieceTrainer.Train(
    f'--input={INPUT_FILE} --model_prefix={TEMP_FILE} --vocab_size={VOCAB_SIZE} --character_coverage=1.0 --model_type=bpe')

df = pd.read_csv(TEMP_FILE + ".vocab",
                 sep="\t",  # use a char for separation that cannot be inside vocab
                 header=None,
                 names=["vocab", "unk"],
                 encoding='utf-8',
                 dtype=str,
                 quotechar="\r",  # use a char for quoting that cannot be inside vocab
                 engine='python')
vocab = df.vocab.values
print(vocab.shape)
print(len(vocab))

# Convert sentencepiece-style subwords to BERT wordpiece conventions.
for i, current in enumerate(vocab):
    current = str(current)
    if current.startswith("▁"):
        vocab[i] = current[1:]
    else:
        vocab[i] = "##" + current

unused = []
for i in range(1, UNUSED_TOKENS + 1):
    unused.append("[unused%i]" % i)

toadd = np.array(["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])
vocab = np.concatenate((toadd, vocab[3:], unused), axis=0)
```<|||||>@Timoeller thanks for open-sourcing the model :+1: and the great [FARM](https://github.com/deepset-ai/FARM) library. I trained a German BERT model from scratch a while ago (16GB of text, incl. some WMT monolingual data for German) and here are some preliminary results:

| Task | Result
| --------- | -------
| CoNLL-2003 | 85.49
| GermEval | 84.38
| GermEval18Coarse | 74.60 (reproduced result for German BERT was 74.06)

So on average the model is +0.48% better. My question to @Timoeller and @thomwolf: does it make sense to include another cased German BERT model 🤔 <|||||>Hey @stefan-it Thanks for linking our library and also sharing your results. We have found that multiple downstream runs vary in performance to some degree. We are also in contact with people applying German Bert for germeval19, using an ensemble of multiple downstream runs with quite good results (the official ranking isn't out yet though). 
Concerning the performance of your Bert model: Your NER results seem to be consistently better than with our German Bert. Maybe it lags behind in the other tasks? How about we both have a call this week to see where the differences between our Berts come from and whether it makes sense to include your model as well? Just PM me. <|||||>Awesome, I just sent you an email :)
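For reference, the "few adaptations to the dictionaries" mentioned in the PR description boil down to entries of roughly this shape in `modeling.py` and `tokenization.py`. The URLs below are illustrative placeholders, not the real file locations, and the shortcut name and maximum sequence length are assumptions to be checked against the merged code.

```python
# Illustrative only: shape of the dictionary entries a new pretrained model needs.
PRETRAINED_MODEL_ARCHIVE_MAP = {
    # ... existing entries ...
    'bert-base-german-cased': "https://example.com/bert-base-german-cased.tar.gz",
}
PRETRAINED_VOCAB_ARCHIVE_MAP = {
    # ... existing entries ...
    'bert-base-german-cased': "https://example.com/bert-base-german-cased-vocab.txt",
}
PRETRAINED_VOCAB_POSITIONAL_EMBEDDINGS_SIZE_MAP = {
    # ... existing entries ...
    'bert-base-german-cased': 512,  # maximum sequence length the model supports
}
```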