repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
1,187
closed
Using do_eval from run_glue.py uses the cached result
## ❓ Questions & Help Using do_eval from run_glue.py uses the cached result. I want to evaluate my fine-tuned models and I can't find any guide on how to do so. Can anybody point me in the right direction?
09-03-2019 15:18:52
09-03-2019 15:18:52
If you simply remove the cached file or move it elsewhere, it won't be used by run_glue.py. That's what I have been doing.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
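A minimal sketch, in Python, of the workaround described in the comment above; the task directory path and the `cached_*` filename pattern are assumptions about how your local run_glue.py names its feature cache, so adjust them to whatever files you actually see in your data directory.

```python
# Clear cached feature files so run_glue.py rebuilds them for the fine-tuned model.
# The directory and the "cached_*" glob pattern are illustrative assumptions.
import glob
import os

data_dir = "glue_data/MRPC"  # hypothetical GLUE task directory
for path in glob.glob(os.path.join(data_dir, "cached_*")):
    print(f"removing {path}")
    os.remove(path)
```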
transformers
1,186
closed
[README] link to Write With Transformer
Tomorrow we'll release a new version of Write With Transformer that's gonna let you: - experiment with different models (gpt2, xlnet) and/or model checkpoints (example for gpt2: small, large, arxiv) - share links to your documents. With those two changes, transformer.huggingface.co is graduating to being an official demo for `pytorch-transformers`'s text generation capabilities.
09-03-2019 14:30:03
09-03-2019 14:30:03
transformers
1,185
closed
XLNet `output_attentions` doesn't work
## 🐛 Bug Model I am using: XLNet. Language I am using the model on: English. The problem arises when using the flag `output_attentions=True`. ## To Reproduce ## Expected behavior When executing `outputs = self.model(input_ids)` I would expect to get a tuple with the outputs and the attentions, but it fails. The problem probably stems from the lines: ` if self.output_attentions: attentions.append(outputs[2]) ` I receive Nones instead of the attentions, or sometimes the error `IndexError: tuple index out of range` for the following line in the forward function: `attentions.append(outputs[2])` ## Environment * OS: Windows * Python version: 3.7.3 * PyTorch version: 1.1.0 * PyTorch Transformers version (or branch): Master branch version from 31.08.19 * Using GPU: Yes
09-03-2019 11:32:20
09-03-2019 11:32:20
Why did you close it?<|||||>I'm checking, it might be my own bug.
transformers
1,184
closed
Convert RoBERTa to TF checkpoint
Can we use the "convert_pytorch_checkpoint_to_tf" script to convert the RoBERTa checkpoint to a TensorFlow ckpt? Thanks.
09-03-2019 06:50:00
09-03-2019 06:50:00
Hello, I believe that unfortunately the script currently only works for the `BertModel` base class. You would have to create a similar script for RoBERTa, it shouldn't be too different as both models have very similar architectures!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,183
closed
'DistilBertModel' object has no attribute 'init_weights'
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): DistilBertModel Language I am using the model on (English, Chinese....): English The problem arise when using: * [x] the official example scripts: I am trying to run the sample in the given examples within a notebook. Specifically the example in [DistilBERT's Example README](https://github.com/huggingface/pytorch-transformers/tree/master/examples/distillation) ## To Reproduce Steps to reproduce the behavior: 1. Create a new notebook with a Python 3.7 interpretter 2. Type in the following code: ``` !pip install pytorch-transformers import torch from pytorch_transformers.tokenization_distilbert import DistilBertTokenizer from pytorch_transformers.modeling_distilbert import DistilBertModel tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') model = DistilBertModel.from_pretrained('distilbert-base-uncased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) outputs = model(input_ids) last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple ``` 3. Run & receive the error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-48-3aa5cae06e9c> in <module> 1 tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') ----> 2 model = DistilBertModel.from_pretrained('distilbert-base-uncased') 3 4 input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) 5 outputs = model(input_ids) /mnt/c/Users/.../venv/lib/python3.7/site-packages/pytorch_transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 472 assert model.config.output_attention == True 473 # Loading from a TF checkpoint file instead of a PyTorch model (slower) --> 474 config = BertConfig.from_json_file('./tf_model/my_tf_model_config.json') 475 model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config) 476 /mnt/c/Users/.../venv/lib/python3.7/site-packages/pytorch_transformers/modeling_distilbert.py in __init__(self, config) 486 self.transformer = Transformer(config) # Encoder 487 --> 488 self.init_weights() 489 490 def _resize_token_embeddings(self, new_num_tokens): /mnt/c/Users/.../venv/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name) 589 return modules[name] 590 raise AttributeError("'{}' object has no attribute '{}'".format( --> 591 type(self).__name__, name)) 592 593 def __setattr__(self, name, value): AttributeError: 'DistilBertModel' object has no attribute 'init_weights' ``` ## Expected behavior Code would execute. Not sure if init_weights should be inherited or if it's a type (there is [_init_weights ](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_distilbert.py#L403)in the DistilBertTokenizer class) ## Environment * OS: Win10 (with WSL) * Python version: 3.7.4 * PyTorch version: 1.2 * PyTorch Transformers version (or branch): mainline * Using GPU ? no * Distributed of parallel setup ? parallel
09-03-2019 06:20:34
09-03-2019 06:20:34
Hello, and thank you for the bug report! It would seem you are using an outdated version of the master branch. Could you update it to the latest and tell me if the error remains?<|||||>Looks good! I pulled the latest pip package and that worked as well. Thanks.
transformers
1,182
closed
Updated GLUE script. New feature: Binary mask creation from the tokenizer's encoding.
The new `tokenizer.encode(seq_0, seq_1, add_special_tokens=True)` method makes life easier when building sequences. However, it makes it harder to create binary masks as the different sequence lengths are unknown. As a feature, I have therefore added a flag to the encode function so that it can output binary masks. Example: ```py from pytorch_transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-cased") seq_0 = "This is the one" seq_1 = "This is the last" input_ids, mask = tokenizer.encode(seq_0, seq_1, add_special_tokens=True, output_mask=True) # input_ids: [ 101, 1188, 1110, 1103, 1141, 102, 1188, 1110, 1103, 1314, 102] # mask: [ 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1] ``` It works for BERT, RoBERTa, XLM, and XLNet. I have refactored the GLUE example with this method. It greatly simplifies input creation. I have added an additional unit test to the `commontests` suite. Furthermore, in order to make sure the tokenization was correct I compared against the original input creation of the GLUE script to make sure every encoded sequence remained the same.
09-03-2019 02:00:13
09-03-2019 02:00:13
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=h1) Report > Merging [#1182](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/0d1dad6d5323cf627cb8d7ddd428856ab8475f6b?src=pr&el=desc) will **increase** coverage by `0.27%`. > The diff coverage is `94.57%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1182 +/- ## ========================================== + Coverage 80.77% 81.04% +0.27% ========================================== Files 57 57 Lines 8092 8229 +137 ========================================== + Hits 6536 6669 +133 - Misses 1556 1560 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbmV0LnB5) | `89.65% <100%> (+0.46%)` | :arrow_up: | | [...h\_transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `100% <100%> (ø)` | :arrow_up: | | [pytorch\_transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2JlcnQucHk=) | `95.75% <100%> (+0.08%)` | :arrow_up: | | [...ytorch\_transformers/tests/tokenization\_xlm\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3hsbV90ZXN0LnB5) | `97.72% <100%> (ø)` | :arrow_up: | | [pytorch\_transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbS5weQ==) | `82.98% <100%> (+0.28%)` | :arrow_up: | | [...ch\_transformers/tests/tokenization\_roberta\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3JvYmVydGFfdGVzdC5weQ==) | `92.45% <100%> (ø)` | :arrow_up: | | [...transformers/tests/tokenization\_distilbert\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `95.23% <100%> (ø)` | | | [pytorch\_transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3JvYmVydGEucHk=) | `100% <100%> (ø)` | :arrow_up: | | [...torch\_transformers/tests/tokenization\_bert\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2JlcnRfdGVzdC5weQ==) | `98.66% <100%> (ø)` | :arrow_up: | | [...orch\_transformers/tests/tokenization\_xlnet\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3hsbmV0X3Rlc3QucHk=) | 
`97.91% <100%> (ø)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=footer). Last update [0d1dad6...72402d1](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I like it a lot! Could we also update `run_squad` similarly maybe?<|||||>Only need to adapt the SQuAD script with the new encode w/mask as well as DistilBERT and I think we're good to go @julien-c
transformers
1,181
closed
DistilBERT Loss Function Choice and further query on extending to GPT2.
## ❓ Questions & Help Can you describe the motivation behind scaling the KLDivLoss by the squared temperature? https://github.com/huggingface/pytorch-transformers/blob/50792dbdcccd64f61483ec535ff23ee2e4f9e18d/examples/distillation/distiller.py#L331 When applying the same logic to GPT-2 distillation, I did the following ``` python def training_step(self, data_batch, batch_i): """ Lightning calls this inside the training loop :param data_batch: :return: """ # forward pass token_ids, lengths = data_batch orig_loss_ce, s_logits = self.student(input_ids=token_ids, labels=token_ids)[:2] # (bs, seq_length, voc_size) self.teacher.eval() # Required to do this every time. with torch.no_grad(): t_logits = self.teacher(input_ids=token_ids)[0] # (bs, seq_length, voc_size) loss_kl = self.kld_loss_fct(F.log_softmax(s_logits/self.temperature, dim=-1), F.softmax(t_logits/self.temperature, dim=-1)) * (self.temperature)**2 loss_kl /= s_logits.shape[1] loss = self.alpha_kl * loss_kl if self.alpha_orig_ce > 0.: loss += self.alpha_orig_ce * orig_loss_ce if self.alpha_mse > 0.: loss_mse = self.mse_loss_fct(s_logits, t_logits)/s_logits.size(0) # Reproducing batchmean reduction loss += self.alpha_mse * loss_mse # in DP mode (default) make sure if result is scalar, there's another dim in the beginning if self.trainer.use_dp: loss = loss.unsqueeze(0) output = OrderedDict({ 'loss': loss }) # can also return just a scalar instead of a dict (return loss_val) return output ``` I found that the DistilBERT implementation led to a high initial KL loss range (130-180), depending on the average sequence length per batch, while cross entropy was in the range of 4-5. So I scaled loss_kl by the total timesteps in the batch. (My batches don't have any masked tokens.) Training did converge to a similar perplexity to the teacher's on the held-out set of Toronto Books. Is my method motivated, or am I applying the KL wrongly in the GPT-2 scenario, necessitating the scaling?
09-02-2019 14:52:11
09-02-2019 14:52:11
Hello @sai-prasanna, I believe that in the original implementation we release, the Knowledge Distillation loss is batch-averaged meaning that it should not be sensible to the sequence lenghts: `self.ce_loss_fct = nn.KLDivLoss(reduction='batchmean')`. But anyways, you should just make sure that at the end, if your true loss is batch-size-agnostic, then the knowledge distillation loss should be too. Regarding your 1st question, the `T**2` rescaling simply ensures that both the true loss and the distillation loss are of the same magnitude. You can refer to [the original paper](https://arxiv.org/abs/1503.02531), section 2: _"Since the magnitudes of the gradients produced by the soft targets scale as 1/T^2 it is important to multiply them by T^2 when using both hard and soft targets."_ Victor<|||||>Thanks!. I will recheck the loss function ranges more carefully. And I guess I jumped ahead without reading the literature carefully, will revisit the papers. I have a few queries with respect to pre-processing text for the student of GPT-2. (I pm'ed you on twitter, but I guess this place is more accessible to others). Any guesses on how GPT-2 sequences were sampled for training? Did they take any random point in the corpus and sampled from there, or would they select a random token (could be in the middle of a sentence) and continue to fill the sequence from that point? And what of sequence lengths, would they fill up tokens continuously (going across sentence boundaries) till max sequence length? Or would there be variation in sequence lengths and what would be an ideal way to sample the variations? <|||||>> Thanks!. I will recheck the loss function ranges more carefully. And I guess I jumped ahead without reading the literature carefully, will revisit the papers. > > I have a few queries with respect to pre-processing text for the student of GPT-2. (I pm'ed you on twitter, but I guess this place is more accessible to others). > > Any guesses on how GPT-2 sequences were sampled for training? You should refer to the papers [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) and [GPT2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) (section 2.1/2.2) for a detailed explanation of how the data are processed. > Did they take any random point in the corpus and sampled from there, or would they select a random token (could be in the middle of a sentence) and continue to fill the sequence from that point? In auto-regressive LM (like GPT* for instance), each token (except the last one in the sequence) induce a training signal by having the model predicting the next token. > And what of sequence lengths, would they fill up tokens continuously (going across sentence boundaries) till max sequence length? Or would there be variation in sequence lengths and what would be an ideal way to sample the variations? More generally, the longer the sequences, the better it is (that's one of the thing the RoBERTa paper showed). You want to train on as long dependencies as you can.<|||||>Thanks. Guess gpt 2 also follows gpt's preprocessing. I guess my second point was rather unclear. I understand that gpt 2 does traditional lm. I want to know whether inputs to lm while training, strictly start at sentence starts. "The quick brown cyborg jumped over the lazy sapien. And the cyborg, ..." Or can inputs be like "cyborg jumped over the lazy sapien. and the cyborg, ..." 
"jumped over the lazy sapien. and the cyborg, ..." Any hypothesis on how varying training data like that would affect generation? Say one always gives context that start properly, then would there be any gain in not giving sentences that start from middle. <|||||>@VictorSanh Experimented with KLDivLoss(reduction='batchmean'). I can confirm that the loss scales with the sequence length. ``` python def test_kl_div_loss(batch, timesteps, hidden, n=10000): loss_fn = nn.KLDivLoss(reduction='batchmean') student_logits = torch.randn(batch, timesteps, hidden) teacher_logits = torch.randn(batch, timesteps, hidden) mean_loss = 0.0 for _ in range(n): mean_loss += loss_fn(F.log_softmax(student_logits, dim=-1), F.softmax(teacher_logits, dim=-1)) mean_loss /= n return mean_loss ``` ``` python In [79]: test_kl_div_loss(batch=10, timesteps=10, hidden=10) Out[79]: tensor(8.4171) In [79]: test_kl_div_loss(batch=10, timesteps=100, hidden=10) Out[79]: tensor(77.5201) In [83]: test_kl_div_loss(batch=10, timesteps=1000, hidden=10) Out[83]: tensor(807.4752) ``` nn.KLDivLoss with batchmean is proportional to total timesteps. And `reduction=mean` is wrong as it averages by the number of classes. In nn.CrossEntropyLoss we flatten the time dimension to batch and then compute cross entropy, this in effect averages the loss across timesteps as the default reduction is 'mean'. So ideally, when computing the KL Div, should we ideally set the reduction='none' and scale the loss by ( 1 / total_actual_non_padding_tokens_in_batch ) ? <|||||>> Thanks. Guess gpt 2 also follows gpt's preprocessing. > > I guess my second point was rather unclear. I understand that gpt 2 does traditional lm. I want to know whether inputs to lm while training, strictly start at sentence starts. > "The quick brown cyborg jumped over the lazy sapien. And the cyborg, ..." > Or can inputs be like > "cyborg jumped over the lazy sapien. and the cyborg, ..." > "jumped over the lazy sapien. and the cyborg, ..." > > Any hypothesis on how varying training data like that would affect generation? Say one always gives context that start properly, then would there be any gain in not giving sentences that start from middle. You could do the second option, I am just not sure whether it fundamentally brings significantly more training signal than the 1st option. Thus we usually do the 1st option. You should have a look at how it is done in GPT/GPT2. Folks at Nvidia have released their pre-processing script for GPT2: see [here](https://github.com/NVIDIA/Megatron-LM/blob/a0368ddf4732bf5b86ab4260f6f4196fdd01d5fb/openwebtext/make_gpt2_dataset.py). > @VictorSanh Experimented with KLDivLoss(reduction='batchmean'). I can confirm that the loss scales with the sequence length. > > ```python > def test_kl_div_loss(batch, timesteps, hidden, n=10000): > loss_fn = nn.KLDivLoss(reduction='batchmean') > student_logits = torch.randn(batch, timesteps, hidden) > teacher_logits = torch.randn(batch, timesteps, hidden) > mean_loss = 0.0 > for _ in range(n): > mean_loss += loss_fn(F.log_softmax(student_logits, dim=-1), F.softmax(teacher_logits, dim=-1)) > mean_loss /= n > return mean_loss > ``` > > ```python > In [79]: test_kl_div_loss(batch=10, timesteps=10, hidden=10) > Out[79]: tensor(8.4171) > In [79]: test_kl_div_loss(batch=10, timesteps=100, hidden=10) > Out[79]: tensor(77.5201) > In [83]: test_kl_div_loss(batch=10, timesteps=1000, hidden=10) > Out[83]: tensor(807.4752) > ``` > > nn.KLDivLoss with batchmean is proportional to total timesteps. 
And `reduction=mean` is wrong as it averages by the number of classes. > > In nn.CrossEntropyLoss we flatten the time dimension to batch and then compute cross entropy, this in effect averages the loss across timesteps as the default reduction is 'mean'. > > So ideally, when computing the KL Div, should we ideally set the reduction='none' and scale the loss by ( 1 / total_actual_non_padding_tokens_in_batch ) ? What I simply do in the training code is a `student_logits.view(-1, hidden)` so that at the end, it is sequence-length and batch size agnostic (see [here](https://github.com/huggingface/pytorch-transformers/blob/master/examples/distillation/distiller.py#L329) for instance) <|||||>Thanks for taking your time to answer all my queries.
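To make the normalization discussed in this thread concrete, here is a minimal, self-contained sketch of a temperature-scaled KL distillation term divided by the sequence length; the tensor shapes, temperature, and random logits are illustrative assumptions, not the official distiller's settings.

```python
# Temperature-scaled, length-normalized KL distillation loss (sketch).
import torch
import torch.nn.functional as F

def distillation_loss(s_logits, t_logits, temperature=2.0):
    # s_logits, t_logits: (batch, seq_len, vocab)
    loss_kl = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",   # sums over everything, divides by the batch size only
    ) * temperature ** 2         # the T**2 rescaling from Hinton et al.
    # "batchmean" still grows with sequence length, so also divide by seq_len
    # as discussed above.
    return loss_kl / s_logits.size(1)

student_logits = torch.randn(4, 16, 100)
teacher_logits = torch.randn(4, 16, 100)
print(distillation_loss(student_logits, teacher_logits))
```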
transformers
1,180
closed
DistilBERT baseline
Nice work on DistillBert! I was wondering if you had baseline numbers for the student model trained without the distillation objective? Seems like a very important baseline to understand if distillation was useful.
09-02-2019 14:25:41
09-02-2019 14:25:41
Hello @jeremyasapp, That's a good question! Here are the results on the pre-training solely using MLM training signal (the small model is initialized from the teacher though). ![image](https://user-images.githubusercontent.com/16107619/64210993-c0ef1b80-ce72-11e9-806b-171313e8ae9e.png) This is the same table presented in the blog post to which I added the last row. The drop in performance is consistent across the tasks (except for QNLI or QQP). I believe though that you could slightly improve these figures by continuing the pre-training for a few more epochs. I should also refer you to my answer to [this question](https://medium.com/@weadhsu_77395/can-you-provide-a-comparison-with-the-small-model-that-is-trained-from-scratch-d75001057e15#--responses) in the blogpost. It is also related to the influence of the knowledge distillation loss. (Btw, it's `DistilBERT` with a single `L` 🤣no worries though, I found your issue ;)) Victor<|||||>Hey @VictorSanh, thanks for the response. Definitely non trivial improvement! I actually attempted something similar to you guys this summer, but with less success. It's really great to see that this works :)
transformers
1,179
closed
DistilBERT training is killed because of OOM
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am trying DistilBERT training. The training script (train.py) gradually consumed CPU memory, and the training was killed because of OOM in about one day (the available CPU memory is 96GB). I used one GPU for the training. Do you have any idea? Thanks in advance.
09-02-2019 13:45:52
09-02-2019 13:45:52
@tomohideshibata Could you paste the complete error message here 🤗 I'm currently distilling a model and my RAM usage (system) is ~ 20 GB. GPU usage is ~8 GB on a V-100. If the OOM was caused for your GPU then I would recommend decreasing the batch size (which is 5 by default) :)<|||||>I'm trying to train distibert, but I cannot find the dump.txt which I assume is preprocessed wikipedia and torento corpus datasets. Could someone help? Thanks.<|||||>@stefan-it The error message was just `killed`. GPU memory has no problem. I can make the batch size larger (16). The problem is CPU memory. I have just suspected `tensorboard.add_scalar`. I will try to make the volume of outputted logs smaller. If I find something, I will let you know.<|||||>After 40h of training I could also confirm an increase from 20GB (~20 hours training) to 42GB 🤔 @forjiuzhou When calling the `binarized_data.py` script you have to specify your input corpus via `--file_path`. It points to `data/dump.txt` by default. So just pass your training/preprocessed corpus to the `--file_path` option. <|||||>Yes, I do confirm this bug (?). I am actually scratching my head around this strange behaviour too... so if you actually find the reason, I would more than happy to push an update. @forjiuzhou, indeed @stefan-it is correct! Please replace the file `dump.txt` with you own text dataset. Then, I also recommend that you call `token_counts.py` before training (so that you just do it once).<|||||>I believe we found the bug. It was related to some internal bug in PyTorch: see https://github.com/pytorch/pytorch/issues/24200. I installed PyTorch from source (it is a pretty recent fix, so it's not in the last release yet), tracked the RAM while distilling and the memory usage is more or less constant. I am launching a bigger training right now just to make sure this is really causing the memory leak, if so (and I'll get back to you here), it seems you'll have to compile PyTorch from source for now. Victor<|||||>hey guys, I think the reason is there have too many tensorboard log @tomohideshibata @stefan-it , I stop to save log and then i have more train time now <|||||>I have suppressed tensorboard logs (the block `for param_name, param in self.student.named_parameters():` was commented out in the function `log_tensorboard`), but the CPU memory consumption seemed unchanged. So, I will try the latest PyTorch.<|||||>> After 40h of training I could also confirm an increase from 20GB (~20 hours training) to 42GB 🤔 > > @forjiuzhou When calling the `binarized_data.py` script you have to specify your input corpus via `--file_path`. It points to `data/dump.txt` by default. So just pass your training/preprocessed corpus to the `--file_path` option. Sorry I seem ask the wrong question in this issue. But I actually don't have the access to wikipedia and toronto corpus. And it seems unavailable on internet.<|||||>> I believe we found the bug. > It was related to some internal bug in PyTorch: see [pytorch/pytorch#24200](https://github.com/pytorch/pytorch/issues/24200). > > I installed PyTorch from source (it is a pretty recent fix, so it's not in the last release yet), tracked the RAM while distilling and the memory usage is more or less constant. > I am launching a bigger training right now just to make sure this is really causing the memory leak, if so (and I'll get back to you here), it seems you'll have to compile PyTorch from source for now. > > Victor So I trained a model for ~16hours and observed no increase in RAM over the training. 
I will update the README to pinpoint this special setup (compiling from source for now) and leave the issue open until the next PyTorch release.<|||||>@VictorSanh I have installed PyTorch from source, and the training is fine. Thanks!<|||||>So PyTorch 1.3 was released yesterday 🔥🎉(and it includes new features I am extremely excited about)! The release includes the bug fixing, so you should be able to use the stable version available on `pip`! (Of course, if you prefer, you can still compile PyTorch from source !)<|||||>> So PyTorch 1.3 was released yesterday 🔥🎉(and it includes new features I am extremely excited about)! > The release includes the bug fixing, so you should be able to use the stable version available on `pip`! > (Of course, if you prefer, you can still compile PyTorch from source !) I tried to install the PyTorch 1.3, but it's still leaking. <|||||>@iamlxb3 do you mind sharing your exact PyTorch configuration? I re-launched the scipts a few days ago w/ `torch==1.4.0` and didn't see memory leak.
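A small sketch of how resident memory can be logged during training to spot a leak like the one discussed above; it assumes `psutil` is installed and uses a placeholder loop in place of the real distillation code.

```python
# Log the process's resident memory every few steps to detect a leak.
import os
import psutil

process = psutil.Process(os.getpid())

def log_ram(step):
    rss_gb = process.memory_info().rss / 1024 ** 3
    print(f"step {step}: resident memory {rss_gb:.2f} GB")

for step in range(10):        # stand-in for the real training loop
    # ... forward / backward / optimizer.step() ...
    if step % 2 == 0:
        log_ram(step)
```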
transformers
1,178
closed
added tokens may split a normal token into halves
In the tokenizer base class, split_on_token() attempts to split the input text by each of the added tokens. Because it uses text.split(tok), it may accidentally split a token in the middle. For example, say a new token "ht" is added to the vocabulary. Then "light" will be split into "lig" and "". But as "light" is in the original vocabulary, it should be left intact to be processed by self._tokenize(). Hence I'd suggest replacing it with re.split, which will split only at word boundaries ([0-9a-zA-Z_]). But in languages whose word boundaries are different from English, this behavior may be undesirable, and the user can revert to the old text.split(). This is controlled by a newly added flag, split_added_on_word_boundary.
09-02-2019 13:24:43
09-02-2019 13:24:43
This looks great, thanks a lot @askerlee. Can you add a docstring for the new argument? Ideally also a test in [tokenization_tests_commons.py](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/tests/tokenization_tests_commons.py)<|||||>Thanks @thomwolf . I updated my patch to make it neater. Will close this pull request and submit a new request soon.
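A standalone illustration of the splitting behavior this PR addresses, using plain strings rather than the tokenizer internals; the regex below is only a sketch of the word-boundary idea, not the merged implementation.

```python
# Plain str.split cuts an added token out of the middle of ordinary words,
# while a word-boundary regex split only matches the standalone token.
import re

added_token = "ht"
text = "light ht lighthouse"

print(text.split(added_token))
# ['lig', ' ', ' lig', 'house']  -> "light" and "lighthouse" get broken apart

pattern = r"\b" + re.escape(added_token) + r"\b"
print(re.split(pattern, text))
# ['light ', ' lighthouse']      -> only the standalone "ht" is split out
```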
transformers
1,177
closed
How to install previous versions of pytorch-transformers
Hi, I am using some code that requires an old version of your work. Do you mind telling me how to install an old version of this repository? Thanks. Best, Julia
09-02-2019 10:16:43
09-02-2019 10:16:43
You can install an older version using the standard pypi procedure: `pip install pytorch-transformers==$VERSION`. The examples showcased on this repository probably won't work on older versions though.<|||||>HI Thanks a lot, I need though a version which I think was called pretrained_bert , thanks for your help. Best Julia<|||||>I believe you can still install it with `pip install pytorch-pretrained-BERT==$VERSION`!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,176
closed
merge
09-02-2019 04:08:47
09-02-2019 04:08:47
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=h1) Report > Merging [#1176](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/b6cd856b08e3860e59cc126be86b901ccab4f193?src=pr&el=desc) will **decrease** coverage by `0.67%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1176 +/- ## ========================================== - Coverage 80.67% 79.99% -0.68% ========================================== Files 46 46 Lines 7859 7748 -111 ========================================== - Hits 6340 6198 -142 - Misses 1519 1550 +31 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbmV0LnB5) | `81.98% <0%> (-7.21%)` | :arrow_down: | | [pytorch\_transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `83.54% <0%> (-5.03%)` | :arrow_down: | | [pytorch\_transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfY29tbW9uX3Rlc3QucHk=) | `73.07% <0%> (-4.95%)` | :arrow_down: | | [...h\_transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `93.26% <0%> (-3.85%)` | :arrow_down: | | [pytorch\_transformers/file\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZmlsZV91dGlscy5weQ==) | `69.71% <0%> (-0.71%)` | :arrow_down: | | [pytorch\_transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.73% <0%> (-0.35%)` | :arrow_down: | | [pytorch\_transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `83.84% <0%> (-0.2%)` | :arrow_down: | | [pytorch\_transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfb3BlbmFpLnB5) | `81.84% <0%> (-0.12%)` | :arrow_down: | | [pytorch\_transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.98% <0%> (-0.05%)` | :arrow_down: | | [pytorch\_transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZGlzdGlsYmVydC5weQ==) | `96.73% <0%> (-0.04%)` | :arrow_down: | | ... 
and [5 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=footer). Last update [b6cd856...b190482](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
1,175
closed
May I get the details of the BERT pre-training procedure?
## ❓ Questions & Help I want to measure the time consumed by the BERT pre-training procedure. May I ask some questions here: 1. What is the pre-training data? Is it the same as in the paper: BooksCorpus & English Wiki? 2. Is there some pre-training code I can use? 3. How many epochs should I train for when pre-training the 'base' version or the 'large' version of the model?
09-02-2019 03:17:59
09-02-2019 03:17:59
Hi, you should refer to the original BERT repository and papers for details on the pretraining: https://github.com/google-research/bert<|||||>Thank you! Got it.
transformers
1,174
closed
Fix byte-level BPE decoding error when using added tokens
This PR fixes a mismatch between regular unicode added tokens and byte-level BPE tokens when doing decoding. Wrong behavior reported in #1133. Also adds regression tests.
09-02-2019 00:29:11
09-02-2019 00:29:11
Great addition! There was just a small issue regarding the way the special tokens were joined if they were not at the beginning of the sentence. I fixed it with my commit. Before, the following code: ```py tok = GPT2Tokenizer.from_pretrained("gpt2") tok.add_tokens(["there my", "name is"]) print(tok.decode(tok.encode("Hi there my name is Lysandre"))) ``` would output: ``` Hithere myname is Lysandre ``` Now it outputs: ``` Hi there my name is Lysandre ```<|||||># [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=h1) Report > Merging [#1174](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/b6cd856b08e3860e59cc126be86b901ccab4f193?src=pr&el=desc) will **increase** coverage by `0.17%`. > The diff coverage is `94.73%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1174 +/- ## ========================================== + Coverage 80.67% 80.84% +0.17% ========================================== Files 46 46 Lines 7859 7874 +15 ========================================== + Hits 6340 6366 +26 + Misses 1519 1508 -11 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...h\_transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `100% <100%> (+2.88%)` | :arrow_up: | | [pytorch\_transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3V0aWxzLnB5) | `80.47% <93.33%> (-0.13%)` | :arrow_down: | | [pytorch\_transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `34.17% <0%> (+0.28%)` | :arrow_up: | | [pytorch\_transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `96.69% <0%> (+0.82%)` | :arrow_up: | | [pytorch\_transformers/file\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZmlsZV91dGlscy5weQ==) | `71.83% <0%> (+1.4%)` | :arrow_up: | | [pytorch\_transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `90.02% <0%> (+1.45%)` | :arrow_up: | | [...orch\_transformers/tests/tokenization\_utils\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3V0aWxzX3Rlc3QucHk=) | `96% <0%> (+4%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=footer). Last update [b6cd856...31d3373](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great thanks! Merging now.
transformers
1,173
closed
Write with Transformer doesn't show 774M model?
According to [this twitter post](https://twitter.com/huggingface/status/1166368535221870592), Write With Transformer has been updated to include the Large model. However, the Model & Decoder Settings only shows Small and Medium options for model size.
09-01-2019 14:26:48
09-01-2019 14:26:48
(nevermind: refreshing the page fixed it... it's weird though because I've been checking every day!)<|||||>Hi! Your browser cache was probably playing tricks on you :)<|||||>That's exactly what I figured! Forgot the term for it. Kinda sucks that it did so for several days! :( Anyways, I'm having fun with it now so it's all good!<|||||>Glad you like it!!
transformers
1,172
closed
apex fp16 FusedLayerNorm type issues
#564 🐛 Bug I seem to be getting the following error each time I try to train with APEX/fp16 with BERT finetuning. It happened with my own scripts and I also see this with repository's standard `finetune_on_pregenerated.py` which was recently updated. The error diagnostics seem to indicate an issue with the `FusedLayerNorm`. To further confirm: doing a local mod where I replaced the definition of BertLayerNorm with ```BertLayerNorm = torch.nn.LayerNorm``` The change resolves this issue (while, in my case, not noticeably changing the performance).. Apex docs are a bit raw but the most recent set does not suggest to manually manipulate optimizers or layer definitions, perhaps we should just stick to the BertLayerNorm definition as described above? ``` Traceback (most recent call last): File "ash3/tune_bert.py", line 101, in <module> main(sys.argv[1:]) File "ash3/tune_bert.py", line 47, in main pregenerate(init) File "ash3/tune_bert.py", line 85, in pregenerate finetune_on_pregenerated(tune_args) File "/home/madvillain/gitlab/ai/ash3/ash3/finetuning/finetune_on_pregenerated.py", line 292, in main outputs = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 785, in forward prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 533, in forward prediction_scores = self.predictions(sequence_output) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 501, in forward hidden_states = self.transform(hidden_states) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 483, in forward hidden_states = self.LayerNorm(hidden_states) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/apex/normalization/fused_layer_norm.py", line 159, in forward input, self.weight, self.bias, self.normalized_shape,self.eps) File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/apex/normalization/fused_layer_norm.py", line 25, in forward input_, ctx.normalized_shape, weight_, bias_, ctx.eps) RuntimeError: expected scalar type Half but found Float (data<c10::Half> at /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/include/ATen/core/TensorMethods.h:1386) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7f6af587edc5 in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::Half* 
at::Tensor::data<c10::Half>() const + 0x2c6 (0x7f6abeb8aa36 in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #2: cuda_layer_norm(at::Tensor*, at::Tensor*, at::Tensor*, at::Tensor*, int, int, c10::ArrayRef<long>, at::Tensor*, at::Tensor*, double) + 0x3ed (0x7f6abeb87dcd in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #3: layer_norm_affine(at::Tensor, c10::ArrayRef<long>, at::Tensor, at::Tensor, double) + 0x27a (0x7f6abeb7985a in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #4: <unknown function> + 0x196c4 (0x7f6abeb866c4 in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #5: <unknown function> + 0x16e0a (0x7f6abeb83e0a in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) <omitting python frames> frame #12: THPFunction_apply(_object*, _object*) + 0x691 (0x7f6b24b0a081 in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/lib/libtorch_python.so) ``` Model I am using (Bert, XLNet....): BERT Language I am using the model on (English, Chinese....): English The problem arise when using: * [* ] the official example scripts: (give details) * [ ] my own modified scripts: (give details) The tasks I am working on is: * [* ] an official GLUE/SQUaD task: (give the name) finetune_on_pregenerated.py * [ ] my own task or dataset: (give details) ## Expected behavior no failures ## Environment * OS: Ubuntu 18.04 * Python version: 3.6 * PyTorch version: 1.1.0, 1.2.0 * PyTorch Transformers version (or branch): 1.1.0 * Using GPU ? yes * Distributed of parallel setup ? no * Any other relevant information: cudatoolkit 10.0, APEX git hash code: 53eae1986320d016ee7b347d78839dd5e96e7e93
09-01-2019 11:45:28
09-01-2019 11:45:28
Yes, that's what we do now on master since #1089 (switching back to `torch.nn.LayerNorm`). Thanks for reporting<|||||>@thomwolf yes, thank you for your response! I wanted to clarify; if I do fp16 I still see that master is doing ``` try: from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm except (ImportError, AttributeError) as e: logger.info("Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .") BertLayerNorm = torch.nn.LayerNorm ``` https://github.com/huggingface/pytorch-transformers/commit/bdb4409ed8de4d199907c75832398f2c49a564e1 and in my case `FusedLayerNorm` seem to cause the issue... so maybe we are talking about different things. Or did you mean that this is a work in progress and it was not merged to master yet?<|||||>Oh indeed, maybe it's a issue with `finetune_on_pregenerated.py`. The scripts in the `lm_finetuning` folder are in the process of being deprecated. You can try with the newly added `run_lm_finetuning.py` which is actively maintained.<|||||>setting `--fp16_opt_level` to O2 resolved that error for me.<|||||>@mksenzov I have the same exact issue. Was wondering if you figured it out?<|||||>I'm getting the same issue using an optimization level of "O1" while running `run_lm_finetuning`. is this expected? "O2" seems to work just fine.<|||||>The problem is that this model in O1 enters to `FusedLayerNorm.forward` with the input in half-precision but its parameters are still in single-precision, and apparently the kernel doesn't support different types (neither does PyTorch's `nn.LayerNorm`). In O2, in contrast, the parameters are changed to half so the issue doesn't occur. I believe there's no reason that `FusedLayerNorm` should be called if apex is available because the user may want to disable apex use O1, but it's incompatible with it. On the contrary, `nn.LayerNorm` [is blacklisted in the amp initialization](https://github.com/NVIDIA/apex/blob/656d14b0c9792a1bcdc255b473dc2d6145d026ff/apex/amp/lists/functional_overrides.py#L42), so its input will always be float32 in O1, while `FusedLayerNorm` is not blacklisted. Plus, `nn.LayerNorm` is probably fused and [proved to be faster on a V100 to me with both float32 and float16](https://github.com/NVIDIA/apex/issues/449#issuecomment-533926319).<|||||>Could we also remove the FusedLayerNorm call in modeling_xlnet?
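A hedged sketch of the two workarounds mentioned in this thread (running amp at O2, or falling back to `torch.nn.LayerNorm`); the model and optimizer setup are placeholders, not the finetuning script's actual code.

```python
import torch
from apex import amp
from pytorch_transformers import BertForPreTraining

model = BertForPreTraining.from_pretrained("bert-base-uncased").cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

# Workaround 1: O2 casts the parameters to half like the inputs, avoiding the
# Half/Float mismatch inside apex's FusedLayerNorm.
model, optimizer = amp.initialize(model, optimizer, opt_level="O2")

# Workaround 2 (alternative): build the model with torch.nn.LayerNorm instead of
# FusedLayerNorm, which is what current master does by default.
```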
transformers
1,171
closed
Can't get GPT2tokenizer to load correctly
## ❓ Questions & Help Hi, I'm a fairly new coder and I'm hitting a roadblock that I just don't understand – maybe someone here can help me, but I figure it's worth asking the community. I'm trying to run run_lm_finetuning.py, and the tokenizer doesn't seem to be loading correctly. I'm getting this error: `AttributeError: 'GPT2Tokenizer' object has no attribute 'max_len_single_sentence'` I've looked at the code, and there clearly is a `max_len_single_sentence` attribute in the init, but I can't get to it. I've even tried simply loading a GPT2-tokenizer into a jupyter notebook and trying to get the value, and it has the same error. I assume I've done something wrong, I just can't figure out what. In case it helps, I've put my entire traceback below. Any ideas? Thanks! ``` python examples/run_lm_finetuning.py --train_data_file='HFlongs1000.txt' --output_dir='pytorch-transformers/HFOutput' --model_type='gpt2' --tokenizer_name='gpt2' --model_name_or_path='gpt2' 09/01/2019 06:44:36 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False 09/01/2019 06:44:36 - INFO - pytorch_transformers.modeling_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/jupyter/.cache/torch/pytorch_transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.085d5f6a8e7812ea05ff0e6ed0645ab2e75d80387ad55c1ad9806ee70d272f80 09/01/2019 06:44:36 - INFO - pytorch_transformers.modeling_utils - Model config { "attn_pdrop": 0.1, "embd_pdrop": 0.1, "finetuning_task": null, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12, "n_positions": 1024, "num_labels": 1, "output_attentions": false, "output_hidden_states": false, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "torchscript": false, "vocab_size": 50257 } 09/01/2019 06:44:36 - INFO - pytorch_transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /home/jupyter/.cache/torch/pytorch_transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 09/01/2019 06:44:36 - INFO - pytorch_transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /home/jupyter/.cache/torch/pytorch_transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda Traceback (most recent call last): File "examples/run_lm_finetuning.py", line 497, in <module> main() File "examples/run_lm_finetuning.py", line 431, in main args.block_size = tokenizer.max_len_single_sentence # Our input block size will be the max possible for the model AttributeError: 'GPT2Tokenizer' object has no attribute 'max_len_single_sentence'```
09-01-2019 07:01:02
09-01-2019 07:01:02
Hello! The `GPT2Tokenizer` attribute `max_len_single_sentence` is a very new attribute. If you have installed the library prior to [this commit](https://github.com/huggingface/pytorch-transformers/commit/3bcbebd440c220adbaab657f2d13dac7c89f6453#diff-b1c89c3ce1d15ed636ed89d250f8f26a), 9 days ago, then you indeed won't be able to access it. You won't be able to access it either if you have installed it via pypi, as the last release was 1.1.0 and it was before that commit. We'll be releasing v1.2.0 very soon, with this addition! Until then, you can [install it from source](https://github.com/huggingface/pytorch-transformers#from-source) if you want to latest additions.<|||||>Thanks very much! That all makes sense :)
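A quick, hedged check of whether the installed package already ships the new attribute before switching to a source install; it assumes `pytorch_transformers.__version__` is exposed in your environment.

```python
# Check the installed version and whether the new tokenizer attribute exists.
import pytorch_transformers
from pytorch_transformers import GPT2Tokenizer

print(pytorch_transformers.__version__)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(hasattr(tokenizer, "max_len_single_sentence"))
```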
transformers
1,170
closed
How to use BERT or word embedding for e-commerce product classification.
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I want to classify products on an e-commerce site. Taking Amazon as an example, if a product name is iPhone XS then it should be categorized under Electronics -> Mobile, which is very straightforward. However, the problem comes when we train the model on clothes and many other sports items. For example, "George - George Men's Cargo Short" found on Walmart is being classified as SPORTS & OUTDOOR, FASHION, when it should be classified as Clothes. Currently, we have tried a text CNN, but I'm very positive that BERT or other word embeddings can enhance the performance. Base code: https://github.com/brightmart/text_classification However, it appears that TextCNN is better than BERT, as per the author of that repository. Does anyone know the ideal way to approach this problem? ![image](https://user-images.githubusercontent.com/7957331/64069995-c22b0900-cc24-11e9-8077-21313e38c970.png)
08-31-2019 23:23:18
08-31-2019 23:23:18
Did you solve this problem?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
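One possible starting point, sketched with heavy assumptions: fine-tune `BertForSequenceClassification` on product titles. The label set, example data, and hyper-parameters below are invented for illustration; nothing here is a benchmarked recipe.

```python
# Fine-tuning sketch for product-category classification from a product title.
import torch
from pytorch_transformers import BertTokenizer, BertForSequenceClassification

labels = ["Clothes", "Electronics", "Sports & Outdoor"]   # hypothetical taxonomy
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

title = "George - George Men's Cargo Short"
input_ids = torch.tensor([tokenizer.encode(title, add_special_tokens=True)])
target = torch.tensor([labels.index("Clothes")])

model.train()
loss, logits = model(input_ids, labels=target)[:2]   # (loss, logits, ...)
loss.backward()
optimizer.step()

model.eval()
with torch.no_grad():
    logits = model(input_ids)[0]
print(labels[logits.argmax(dim=-1).item()])
```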
transformers
1,169
closed
Attribute errors with pytorch_transformers tests
## 🐛 Bug <!-- Important information --> Model I am using (from the official repo): Language I am using the model on (English, Yoruba,Igbo, Hausa etc): The problem arise when using: * [ ] the official example scripts: (give details): the test run script The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name)=>yes ## To Reproduce Steps to reproduce the behavior: 1.python -m pytest -sv ./pytorch_transformers/tests/ <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> `pytorch_transformers/tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_transfo_xl_lm_head FAILED pytorch_transformers/tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_transfo_xl_model FAILED pytorch_transformers/tests/tokenization_xlnet_test.py::XLNetTokenizationTest::test_tokenizer_no_lower PASSED =================================== FAILURES =================================== __________________ TransfoXLModelTest.test_attention_outputs ___________________ self = <pytorch_transformers.tests.modeling_transfo_xl_test.TransfoXLModelTest testMethod=test_attention_outputs> def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: config.output_attentions = True config.output_hidden_states = False model = model_class(config) model.eval() > outputs = model(**inputs_dict) pytorch_transformers/tests/modeling_common_test.py:73: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__ result = self.forward(*input, **kwargs) pytorch_transformers/modeling_transfo_xl.py:1253: in forward outputs = self._forward(input_ids, mems=mems, head_mask=head_mask) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = TransfoXLModel( (word_emb): AdaptiveEmbedding( (emb_layers): ModuleList( (0): Embedding(10, 32) (1):... LayerNorm(torch.Size([32]), eps=1e-05, elementwise_affine=True) ) ) ) (pos_emb): PositionalEmbedding() ) dec_inp = tensor([[19, 69, 72, 42, 32, 34, 52, 38, 81, 71, 81, 47, 44], [22, 12, 3, 26, 63, 25, 64, 52, 79, 71, 17, 16,... [82, 26, 62, 95, 55, 79, 8, 90, 33, 83, 64, 53, 68], [ 7, 57, 63, 40, 74, 77, 50, 77, 19, 7, 53, 38, 19]]) mems = [tensor([[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0.,... 
[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]])] head_mask = [None, None, None, None, None] def _forward(self, dec_inp, mems=None, head_mask=None): qlen, bsz = dec_inp.size() # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] (a head_mask for each layer) # and head_mask is converted to shape [num_hidden_layers x qlen x klen x bsz x n_head] if head_mask is not None: if head_mask.dim() == 1: head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(0).unsqueeze(0) head_mask = head_mask.expand(self.n_layer, -1, -1, -1, -1) elif head_mask.dim() == 2: head_mask = head_mask.unsqueeze(1).unsqueeze(1).unsqueeze(1) head_mask = head_mask.to(dtype=next(self.parameters()).dtype) # switch to fload if need + fp16 compatibility else: head_mask = [None] * self.n_layer word_emb = self.word_emb(dec_inp) mlen = mems[0].size(0) if mems is not None else 0 klen = mlen + qlen if self.same_length: all_ones = word_emb.new_ones(qlen, klen) mask_len = klen - self.mem_len if mask_len > 0: mask_shift_len = qlen - mask_len else: mask_shift_len = qlen dec_attn_mask = (torch.triu(all_ones, 1+mlen) > + torch.tril(all_ones, -mask_shift_len)).bool()[:, :, None] # -1 E AttributeError: 'Tensor' object has no attribute 'bool' pytorch_transformers/modeling_transfo_xl.py:1145: AttributeError > outputs = model(**inputs) pytorch_transformers/tests/modeling_common_test.py:185: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__ result = self.forward(*input, **kwargs) pytorch_transformers/modeling_transfo_xl.py:1253: in forward outputs = self._forward(input_ids, mems=mems, head_mask=head_mask) E AttributeError: 'Tensor' object has no attribute 'bool' pytorch_transformers/modeling_transfo_xl.py:1145: AttributeError _________________ TransfoXLModelTest.test_hidden_states_output _________________ self = <pytorch_transformers.tests.modeling_transfo_xl_test.TransfoXLModelTest testMethod=test_hidden_states_output> def test_hidden_states_output(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: config.output_hidden_states = True config.output_attentions = False model = model_class(config) model.eval() > outputs = model(**inputs_dict) pytorch_transformers/tests/modeling_common_test.py:249: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__ result = self.forward(*input, **kwargs) pytorch_transformers/modeling_transfo_xl.py:1253: in forward outputs = self._forward(input_ids, mems=mems, head_mask=head_mask) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ AttributeError: 'Tensor' object has no attribute 'bool' pytorch_transformers/modeling_transfo_xl.py:1145: AttributeError __________________ TransfoXLModelTest.test_transfo_xl_lm_head __________________ self = <pytorch_transformers.tests.modeling_transfo_xl_test.TransfoXLModelTest testMethod=test_transfo_xl_lm_head> def test_transfo_xl_lm_head(self): self.model_tester.set_seed() config_and_inputs = self.model_tester.prepare_config_and_inputs() > output_result = self.model_tester.create_transfo_xl_lm_head(*config_and_inputs) 
pytorch_transformers/tests/modeling_transfo_xl_test.py:201: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ pytorch_transformers/tests/modeling_transfo_xl_test.py:142: in create_transfo_xl_lm_head lm_logits_1, mems_1 = model(input_ids_1) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__ result = self.forward(*input, **kwargs) pytorch_transformers/modeling_transfo_xl.py:1349: in forward transformer_outputs = self.transformer(input_ids, mems=mems, head_mask=head_mask) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__ result = self.forward(*input, **kwargs) pytorch_transformers/modeling_transfo_xl.py:1253: in forward E AttributeError: 'Tensor' object has no attribute 'bool' pytorch_transformers/modeling_transfo_xl.py:1145: AttributeError =============================== warnings summary =============================== -- Docs: http://doc.pytest.org/en/latest/warnings.html ======= 5 failed, 206 passed, 10 skipped, 36 warnings in 171.71 seconds ======== ` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> Seamless execution!!! ## Environment ==============NVSMI LOG============== Timestamp : Sat Aug 31 11:09:33 2019 Driver Version : 418.67 CUDA Version : 10.1 Attached GPUs : 1 GPU 00000000:00:04.0 Product Name : Tesla K80 Product Brand : Tesla * OS:Ubuntu 18.04 * Python version:3.6 * PyTorch version:1.1.0 * PyTorch Transformers version (or branch):https://github.com/huggingface/pytorch-transformers * Using GPU ?yes * Distributed of parallel setup ?no * Any other relevant information: Why do these attribute errors occur?
08-31-2019 20:35:49
08-31-2019 20:35:49
Hi, could you try updating your pytorch version to 1.2.0 ?<|||||>Same issue here. Pytorch==1.2.0, python==3.6.2<|||||>What exactly is your issue @ukliu ?<|||||>> What exactly is your issue @ukliu ? I was going through the pytorch-transformers tutorial at https://github.com/ukliu/pytorch-transformers <img width="870" alt="Screen Shot 2019-09-30 at 3 48 10 PM" src="https://user-images.githubusercontent.com/14615401/65910949-bfdddb00-e399-11e9-9970-f73d8e6f388b.png"> All others seems fine, but TransfoXLModel gives an error of AttributeError: 'Tensor' object has no attribute 'bool'<|||||>That's mainly a pytorch version issue. You can upgrade your pytorch or change the type to torch.uint8 rather than call the .bool() function.
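For reference, a minimal sketch of the workaround mentioned above, i.e. casting the attention mask to `torch.uint8` instead of calling `.bool()` when running on PyTorch older than 1.2. The shapes below are illustrative, not the library's exact code:
```python
import torch

qlen, mlen = 13, 0          # illustrative query length, no memory
klen = qlen + mlen
all_ones = torch.ones(qlen, klen)

# On PyTorch >= 1.2 you could call .bool(); on 1.1 cast to uint8 instead.
dec_attn_mask = (torch.triu(all_ones, 1 + mlen)
                 + torch.tril(all_ones, -qlen)).to(torch.uint8)[:, :, None]
print(dec_attn_mask.shape)  # torch.Size([13, 13, 1])
```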
transformers
1,168
closed
How to add new pre-trained model pytorch-transformers
## ❓ Questions & Help Pytorch-transformers is a great library. I like that it does one thing: give access to the pre-trained SOTA models for NLP. My team and I want to help and start contributing: - at first, we want to add a Polish BERT model like https://github.com/huggingface/pytorch-transformers/pull/688 But we do not know how to do this :( Is there any guide or procedure that shows what should be changed in order to add a new model? We would be grateful if someone could guide us.
08-31-2019 20:09:23
08-31-2019 20:09:23
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,167
closed
ImportError: cannot import name 'DistilBertModel'
## 🐛 Bug Can you update the pypi package? I cannot import DistilBERT on pytorch_transformers==1.1.0
08-31-2019 18:43:53
08-31-2019 18:43:53
Yes, you should install it from source right now if you want to use DistilBERT. We're planning a new release 1.2.0 that includes DistilBERT + GPT-2 Large + XLM 100/17 sometime this week :).
transformers
1,166
closed
Roberta for NER task
## ❓ Questions & Help Hello, is there a way to use the RoBERTa model for a NER task? Is there a script somewhere? Thank you.
08-31-2019 18:02:17
08-31-2019 18:02:17
Hi @militu you should take a look at this long thread discussing NER for BERT (should be the same for RoBERTa): https://github.com/huggingface/pytorch-transformers/issues/64<|||||>But there are no RobertaForTokenClassification and TFRobertaForTokenClassification like BertForTokenClassification and TFBertForTokenClassification. <|||||>Not yet indeed, do you want to submit a PR copying these models from Bert?<|||||>@thomwolf is this something the team is open to reviewing? I can open a PR that (ambitiously?) adds both `RobertaForTokenClassification` and `TFRobertaForTokenClassification` in the next few days/week.<|||||>Yes, sure (though I won't commit to a specific delay for reviewing hahaha). Adding `RobertaForTokenClassification` and `TFRobertaForTokenClassification` should be very simple and basically kept as a copy-paste from the similar Bert models. The most important thing here is actually to finish the PR adding token-to-string-character mappings (#1274 by @michaelrglass) so we can translate NER labels to token labels for training. Though I think there may also be a simpler way to do that by modifying RoBERTa/GPT-2 tokenizers to accept tokenized words, but this requires some knowledge of the internal functioning of the GPT-2 tokenizer.<|||||>Created a PR and looking for feedback. Hoping to jump on the `run_ner.py` script as well tonight/tomorrow.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This was done and probably should be closed.
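While waiting for an official class, here is a minimal sketch of what the "copy-paste from the BERT token classification head" could look like as a plain wrapper around `RobertaModel`. The class name, `num_labels` value and dropout probability are illustrative assumptions, not the library's implementation:
```python
import torch.nn as nn
from pytorch_transformers import RobertaModel

class RobertaTokenClassifier(nn.Module):
    def __init__(self, num_labels=9, dropout_prob=0.1):
        super(RobertaTokenClassifier, self).__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.dropout = nn.Dropout(dropout_prob)
        self.classifier = nn.Linear(self.roberta.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        # First element of the output tuple is the per-token hidden states
        sequence_output = self.roberta(input_ids, attention_mask=attention_mask)[0]
        logits = self.classifier(self.dropout(sequence_output))
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, logits.size(-1)), labels.view(-1))
            return loss, logits
        return logits
```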
transformers
1,165
closed
Dependency errors when trying to use gpt2 using pytorch hub.
It started today, yesterday it was working fine. When I try to download `gpt2` model from pytorch hub repository, as follows: ```python torch.hub.load('huggingface/pytorch-pretrained-BERT', 'gpt2Tokenizer', 'gpt2') ``` I get the following error: `ModuleNotFoundError: No module named 'sacremoses'`. If I add that dependency manually then I get another error: `ModuleNotFoundError: No module named 'sentencepiece'`. Then I add `sentencepiece` dependency manually just to get another error: `RuntimeError: Cannot find callable gpt2Tokenizer in hubconf`. And this error seems to be related to API changes. I'm using a Google Colab GPU instance. If this is not the right place to post this issue, please re-direct me to the proper source to post the issue. Thanks.
08-31-2019 10:28:14
08-31-2019 10:28:14
I found that https://github.com/huggingface/pytorch-transformers/commit/256086bc6908448fc6aff9b1e19d95c4f6019bee is the source of the issue. Reading the changes I could guess that the new way of retrieving the tokenizer and model is as follows:
```python
tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'gpt2')
model = torch.hub.load('huggingface/pytorch-transformers', 'modelWithLMHead', 'gpt2')
```
But I'm not sure if there is an issue with the docs in hub as they seem to not be updated. <|||||>Yes, we are in the process of updating the hub<|||||>I'm reopening this issue because I'm getting the following error when trying to import the tokenizer: `ImportError: cannot import name 'add_start_docstrings'` <img width="1046" alt="Screenshot 2019-09-06 at 20 15 36" src="https://user-images.githubusercontent.com/2614726/64450882-4da01080-d0e3-11e9-94d0-10a0e57c0e80.png"> <|||||>We can only help you if we have more information on the version/release you are using.<|||||>I'm using the version from the hub, you don't specify a version there, as far as I know. <|||||>It's the version of the master branch by default. I fixed the bug with ee027c8. Note that you can [specify a specific release](https://pytorch.org/docs/stable/hub.html#torch.hub.load) with torch hub, e.g. use release `1.2.0` with `torch.hub.load('huggingface/pytorch-transformers:1.2.0', 'model', 'bert-base-uncased')`. That's what I would advise as it allows you to have clean versioning of your code (you will be sure, in 3 months from now, of the exact version of the model you were using to get your results).<|||||>Thanks for fixing the bug so quickly and for the additional information; I was not aware of the versioning feature.
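Putting the two comments above together, a small sketch of loading through torch hub while pinning a release for reproducibility. The entry point names are the ones quoted earlier in this thread, and this assumes the pinned tag ships the same hubconf entry points:
```python
import torch

# Pin the 1.2.0 release so results stay reproducible.
tokenizer = torch.hub.load('huggingface/pytorch-transformers:1.2.0', 'tokenizer', 'gpt2')
model = torch.hub.load('huggingface/pytorch-transformers:1.2.0', 'modelWithLMHead', 'gpt2')
```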
transformers
1,164
closed
distillation: fix ModuleNotFoundError error in token counts script
Hi, I'm currently trying out "distillation" 😅 This PR fixes a `ModuleNotFoundError` in the `token_counts.py` script (the same error was recently fixed in 803c1cc4eacd38f1b854578d7d717b5e4a1ada47) 🤗
08-31-2019 10:24:35
08-31-2019 10:24:35
Great, thanks @stefan-it!
transformers
1,163
closed
[Help] how to make a constrained text generation
## ❓ Questions & Help <!-- A clear and concise description of the question. --> What I need is to make a constrained text generation via XLNet or GPT-2: Input: No one has the intention of building a wall. Constraint: the output should include two pre-defined key words: 'No one' and 'construct'. Expected output (e.g.): No one has the intention, a wall to construct. (with a predefined text length). I found a reference like the following, https://awslabs.github.io/sockeye/inference.html#lexical-constraints but it is too complicated to transfer. Could you give me some advice? Thanks a lot!
08-31-2019 08:02:09
08-31-2019 08:02:09
Yes, I think the sockeye paper and code is the right place to start even if it may look complicated at first sight. Try to combine it with the `run_generation.py` example.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> ## ❓ Questions & Help > What I need is to make a constrained text generation via XLNet or GPT-2: > > Input: No one has the intention of building a wall. > Constraint: the output should include two pre-defined key words: 'No one' and 'construct'. > Expected output(e.g.): No one has the intention, a wall to construct. > (with a text length being predefined). > > I found some reference like followings, > https://awslabs.github.io/sockeye/inference.html#lexical-constraints > > but it is too complecate to transfer. Could u give me some advice? > > thx a log! Hi have you successfully implemented this constrained generation method? Thanks a lot!
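For anyone looking for a starting point before tackling the full sockeye-style lexically constrained beam search, here is a very naive sketch that simply forces the constraint tokens at a chosen position during greedy GPT-2 decoding. The prompt, constraint string and positions are illustrative choices, not a recommended algorithm:
```python
import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "No one has the intention"
constraint = tokenizer.encode(" to construct")   # tokens we want to force into the output
generated = tokenizer.encode(prompt)
max_length, force_at = 20, 8                     # illustrative length and insertion point

with torch.no_grad():
    while len(generated) < max_length:
        if len(generated) >= force_at and constraint:
            generated.append(constraint.pop(0))  # emit the constraint tokens consecutively
            continue
        logits = model(torch.tensor([generated]))[0]
        generated.append(int(torch.argmax(logits[0, -1])))

print(tokenizer.decode(generated))
```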
transformers
1,162
closed
XLNet bias fix on resize embeddings (cf #1124)
Fixed an issue where the linear layer bias wouldn't be resized along with the weights when the embedding matrix was resized with XLNet (cf #1124). This fix works for any model that needs to tie its weights between an embedding layer & a linear layer, if that linear layer has a bias.
08-31-2019 04:53:22
08-31-2019 04:53:22
transformers
1,161
closed
Large Memory Layers
## 🚀 Feature Implement models with Large Memory Layers from this paper: https://arxiv.org/pdf/1907.05242.pdf ## Motivation These models seem very promising.
08-30-2019 17:23:07
08-30-2019 17:23:07
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,160
closed
--seed does not change the finetuning results of the xlnet model
## 🐛 Bug <!-- Important information --> Model I am using (XLNet): Language I am using the model on (English): The problem arise when using: * [ ] the official example scripts gpu=3 seed=1 task=MRPC bsz=32 learning_rate=5e-5 max_steps=800 warmup_steps=200 save_steps=400 export CUDA_VISIBLE_DEVICES=${gpu} export GLUE_DIR=/home/zhaoguangxiang/bert/glue_data python3 ./examples/run_glue.py \ --model_type xlnet \ --model_name_or_path xlnet-large-cased \ --do_train \ --do_eval \ --task_name=${task} \ --data_dir=${GLUE_DIR}/${task} \ --output_dir=checkpoint/xl_${task}_seed${seed}/ \ --max_seq_length=128 \ --per_gpu_eval_batch_size=${bsz} \ --per_gpu_train_batch_size=${bsz} \ --gradient_accumulation_steps=1 \ --max_steps=${max_steps} \ --model_name=xlnet-large-cased \ --overwrite_output_dir \ --overwrite_cache \ --save_steps ${save_steps} \ --learning_rate ${learning_rate} \ --warmup_steps=${warmup_steps} The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) ## To Reproduce Steps to reproduce the behavior: 1. change the seed 2. results does not change, and keep acc=0.8774509803921569 for every seed ## Environment * OS:linux * Python version:3.6 * PyTorch version: 1.2 * PyTorch Transformers version (or branch): latest * Using GPU ? 1 * TITAN RTX
08-30-2019 17:09:28
08-30-2019 17:09:28
In the command line you are showing, you should add a `--seed ${seed}` argument to set the seed, otherwise, it will stay the same.<|||||>> In the command line you are showing, you should add a --seed ${seed} argument to set the seed, otherwise, it will stay the same. Sorry, i forgot it.
transformers
1,159
closed
Problem with optimizers after migration
## 📚 Migration <!-- Important information --> Model I am using (Bert, XLNet....): Bert Language I am using the model on (English, Chinese....): Russian The problem arises when using: the optimizers. I tried the default parameters from the example with max_grad_norm = 1.0 and lr = 2e-5.
```
warmup_proportion = float(num_warmup_steps) / float(num_total_steps)  # 0.1

### Previously BertAdam optimizer was instantiated like this:
optimizer = BertAdam(model.parameters(), lr=lr, schedule='warmup_linear', warmup=warmup_proportion, t_total=num_total_steps)
### and used like this:
for batch in train_data:
    loss = model(batch)
    loss.backward()
    optimizer.step()

### In PyTorch-Transformers, optimizer and schedules are split and instantiated like this:
optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False)  # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=num_warmup_steps, t_total=num_total_steps)  # PyTorch scheduler
### and used like this:
for batch in train_data:
    loss = model(batch)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)  # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
    scheduler.step()
    optimizer.step()
```
The tasks I am working on is: * [ ] my own task or dataset: private dataset on comments from client support Details of the issue: After migration I see that the model converges more slowly and fails to reach the accuracy it obtained before. On my multi-class classification dataset I get about 0.59 accuracy while the previous version resulted in about 0.63 accuracy after convergence. Are the optimizers in both versions equivalent? If not, how can I make them absolutely the same? ## Environment * OS: Ubuntu * Python version: 3.6 * PyTorch version: 1.0 * PyTorch Transformers version (or branch): 1.0 * Using GPU ? Yes * Distributed of parallel setup ? No
08-30-2019 14:59:43
08-30-2019 14:59:43
You should get the same behavior as `BertAdam` by setting `correct_bias=False` in `AdamW` and using the `WarmupLinearSchedule` together with it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,158
closed
regarding #1026 pull request
Dear Thomas, This is regarding my #1026 pull request; here is my understanding of the reproducibility issue I was getting: - On line 451 the tokenizer is reloaded without setting do_lower_case, so if you use both do_train+do_eval you will get different results than if you run do_eval only on the same directory, since with do_eval only the tokenizer is read from line 408, where do_lower_case is considered. - The second issue I see is that if you do both do_train and do_eval you read the tokenizer from the output_dir, but if you do only do_eval you read the tokenizer from args.model_name_or_path, which can be different and could lead to different results. So it is better to reload the tokenizer once from output_dir during evaluation and remove it from the training part. thanks. Best regards, Rabeeh
08-30-2019 13:48:25
08-30-2019 13:48:25
Oh I see what you mean, indeed that's a more general issue with saving and loading tokenizer with specific configuration parameters. This is actually also relevant to our work on XLM's tokenizer in #1092<|||||>Dear Thomas, The pull request #1026 does not work unfortunately when using eval_all_check_points, and I was wondering if you could undo that merge, sorry for this, this new pull request here works for me. thanks. <|||||>Ok let's do that for now and I'll think about a more general way to save tokenizer configurations.<|||||>awesome. thanks <|||||># [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=h1) Report > Merging [#1158](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e0caab0cf052c86e456bc4b4fdac5788433ed935?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1158 +/- ## ====================================== Coverage 80.7% 80.7% ====================================== Files 46 46 Lines 7411 7411 ====================================== Hits 5981 5981 Misses 1430 1430 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=footer). Last update [e0caab0...0a2fecd](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Addressing this up-stream with #1092
transformers
1,157
closed
How to load pretraind XLM model
## ❓ Questions & Help Facebook recently released a new pre-trained language model (17 and 100) (https://github.com/facebookresearch/XLM#pretrained-cross-lingual-language-models) I want to load 17 language model. Could someone guide me on how to achieve this? The straightforward way didn't work for me. I have downloaded the 3 files listed on facebook GitHub: - model - https://dl.fbaipublicfiles.com/XLM/mlm_17_1280.pth - bpe codes - https://dl.fbaipublicfiles.com/XLM/codes_xnli_17 - vocabulary - https://dl.fbaipublicfiles.com/XLM/vocab_xnli_17 and saved them in folder '/home/ksopyla/xlm/mlm17l'. Then I have tried to load the model with XLMModel.form_pretrained function ` model = XLMTokenizer.from_pretrained('/home/ksopyla/xlm/mlm17/') ` got ` Model name '/home/ksopyla/xlm/mlm17' was not found in model name list (xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024). We assumed '/home/ksopyla/xlm/mlm17/config.json' was a path or url but couldn't find any file associated to this path or url. Traceback (most recent call last): File "/home/ksopyla/.vscode/extensions/ms-python.python-2019.8.30787/pythonFiles/ptvsd_launcher.py", line 43, in <module> main(ptvsdArgs) File "/home/ksopyla/.vscode/extensions/ms-python.python-2019.8.30787/pythonFiles/lib/python/ptvsd/__main__.py", line 432, in main run() File "/home/ksopyla/.vscode/extensions/ms-python.python-2019.8.30787/pythonFiles/lib/python/ptvsd/__main__.py", line 316, in run_file runpy.run_path(target, run_name='__main__') File "/home/ksopyla/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "/home/ksopyla/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "/home/ksopyla/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/ksopyla/dev/document_embeddings/xlm_from_pretraind.py", line 34, in <module> output_hidden_states=True, File "/home/ksopyla/.local/share/virtualenvs/szrek-data-PaoX74GN/lib/python3.7/site-packages/pytorch_transformers/modeling_utils.py", line 430, in from_pretrained **kwargs TypeError: cannot unpack non-iterable NoneType object ` I suspect that I should change the file names and adjust the vocab file format. But I can't find it in the documentation.
08-30-2019 13:43:59
08-30-2019 13:43:59
~~Hello, we haven't yet converted those models and hosted them on our S3, but you indeed should be able to do it yourself; we used [this script](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/convert_xlm_checkpoint_to_pytorch.py) for the other XLM checkpoints, you could use it to convert this checkpoint.~~ The models are now available on our S3. You should upgrade your `pytorch-transformers` version to the current source (master); you can then load your model with:
```py
from pytorch_transformers import XLMModel

model = XLMModel.from_pretrained("xlm-mlm-17-1280")
# or
model = XLMModel.from_pretrained("xlm-mlm-100-1280")
```<|||||>Wow. You are fast :) Thank you.
transformers
1,156
closed
About distilled the SQuAD?
## ❓ Questions & Help Thank you for your excellent work. I want to know whether you have released the distillation code for the SQuAD dataset. Also, how should the max sequence length be set for the teacher model and the student model? Are they the same length?
08-30-2019 09:24:11
08-30-2019 09:24:11
I have the same question: how should the max sequence length be handled between the teacher model and the student model?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,155
closed
Update apex fp16 implementation
As described in the issue I raised here: https://github.com/huggingface/pytorch-transformers/issues/1143 I updated the apex fp16 implementation, following the latest Apex version's documentation.
08-30-2019 05:08:00
08-30-2019 05:08:00
Thanks, these examples will probably be deprecated and replaced by the new more general `run_lm_finetuning` example which can train several models with normal and masked language modeling.
transformers
1,154
closed
fix: hard coding for max number
The fp16 max number is 65504; the original 1e30 will cause NaN in fp16.
08-30-2019 04:17:14
08-30-2019 04:17:14
Yes, thanks @ziliwang
transformers
1,153
closed
[WIP] Refactor Tokenizers creation to support in-memory initialization
As pointed out in #916, tokenizers currently ask for the path from which they'll load the required vocabulary files. This PR allows tokenizers to take their vocab from data living in memory as well as in cold storage. Implementation details:
- All tokenizers now have a specific ${TokenizerName}Vocab dataclass holding all the required information to run the model.
- Every ${TokenizerName}Vocab dataclass provides a from_pretrained method in charge of reading the necessary files.
- All tokenizers now take as first argument vocabs, which has to be a ${TokenizerName}Vocab instance.
- All models now have a static member vocab_class which points to the desired ${TokenizerName}Vocab data class.
- Some ${TokenizerName}Vocab.from_pretrained methods share loading routines, and thus code is currently duplicated across all of them. It might be possible to refactor to use a generic method that handles such loading.
- [x] Bert - [x] Transformer XL - [x] GPT - [x] GPT-2 - [x] XLNet - [x] XLM - [x] RoBERTa - [x] DistilBERT
08-29-2019 21:49:11
08-29-2019 21:49:11
@thomwolf cc @honnibal Drafting this PR to have dedicated space for discussions.<|||||>@thomwolf @honnibal Can you have a look plz :) ?<|||||>Ok, I went through this PR and it looks nice. Great job @mfuntowicz. No problem for the slight code duplication in the tokenizer loading classes, as you've noticed, the repo's philosophy is rather pragmatic and we only add abstractions when they are needed for easier code maintenance and added functionalities. Thanks for following the general organization of the repo as well.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,152
closed
fix adding special tokens
Currently there is a bug when adding `additional_special_tokens` in the form of a tuple, instead of a list. To reproduce:
```python
from pytorch_transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
tokenizer.add_special_tokens({"additional_special_tokens": ("@a@", "@b@")})
tokenizer.all_special_tokens
```
Results in:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-81549ce398a5> in <module>()
----> 1 tokenizer.all_special_tokens

~/GitHub/pytorch-transformers/pytorch_transformers/tokenization_utils.py in all_special_tokens(self)
    677         set_attr = self.special_tokens_map
    678         for attr_value in set_attr.values():
--> 679             all_toks = all_toks + (attr_value if isinstance(attr_value, (list, tuple)) else [attr_value])
    680         all_toks = list(set(all_toks))
    681         return all_toks

TypeError: can only concatenate list (not "tuple") to list
```
08-29-2019 20:48:39
08-29-2019 20:48:39
Looks good to me (the failing test on `head_masking` is not related to this PR). Thanks @epwalsh!
transformers
1,151
closed
Idea to improve DistilBERT
## 🚀 Feature <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Based on https://medium.com/huggingface/distilbert-8cf3380435b5, you are using KL_loss to train the student from the teacher. You can do something a little bit different there: Loss = KL_loss(teacher, student) when the teacher is right and CE when the teacher is wrong. Therefore, the student should converge to be better than the teacher by not learning from its mistakes. Or even better, you could correct the teacher by artificially telling it the GT: replace the GT class probability value in the teacher prediction with 1 and then renormalize by the sum of probs (some sort of artificial smoothing). ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Improve DistilBERT performance. ## Additional context <!-- Add any other context or screenshots about the feature request here. -->
08-29-2019 20:20:22
08-29-2019 20:20:22
Good idea. I agree with the second plan<|||||>If you give it a try, please keep me in the loop: [email protected]<|||||>There is ALBERT now https://arxiv.org/abs/1909.11942 which seems to be even better. It isn't based on KD.<|||||>Another solution is here: [https://github.com/intersun/PKD-for-BERT-Model-Compression](url). Here is an awesome list about distillation: [https://github.com/dkozlov/awesome-knowledge-distillation](url)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
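A small sketch of the "correct the teacher" idea from this thread, assuming `teacher_probs` is a (batch, num_classes) tensor of softmax outputs and `labels` holds the ground-truth indices (all names and sizes are illustrative):
```python
import torch

def correct_teacher(teacher_probs, labels):
    corrected = teacher_probs.clone()
    corrected[torch.arange(len(labels)), labels] = 1.0      # force the GT class prob to 1
    return corrected / corrected.sum(dim=-1, keepdim=True)  # renormalize the distribution

teacher_probs = torch.softmax(torch.randn(4, 3), dim=-1)
labels = torch.tensor([0, 2, 1, 0])
print(correct_teacher(teacher_probs, labels))
```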
transformers
1,150
closed
What is the relationship between `run_lm_finetuning.py` and the scripts in `lm_finetuning`?
## ❓ Questions & Help It looks like there are now two scripts for running LM fine-tuning. While `run_lm_finetuning` seems to be newer, the documentation in `lm_finetuning` seems to indicate that there is more subtlety to generating the right data for performing LM fine-tuning in the BERT format. Does the new script take this into account? Sorry if I'm missing something obvious!
08-29-2019 18:15:45
08-29-2019 18:15:45
Hi! The folder `lm_finetuning` is especially targeted at BERT. It gives details on two different losses that were used to pre-train BERT: the masked language modeling objective (MLM) and the next sentence prediction objective (NSP). It gives several insights to BERT's fine-tuning. The file `run_lm_finetuning`, on the other hand, showcases how to fine-tune language modeling on several models: BERT, GPT, GPT-2, and RoBERTa. It only uses a single objective; MLM for BERT and RoBERTa and CLM (causal language modeling) for GPT and GPT-2.<|||||>How can we fine-tune on the next sentence prediction task? I did not find the `lm_finetuning` files. Thank you.<|||||>Hi @JiajunBao, these scripts were community maintained and have since been removed. We do not have any script working on the next sentence prediction task. I believe the `lm_finetuning` files were last up to date in 1.1.0, so you may look [here](https://github.com/huggingface/transformers/tree/1.1.0/examples/lm_finetuning).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,149
closed
Closing bracket is missing in token_counts.py for DistilBERT
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): DistilBERT The problem arise when using: * [x] the official example scripts: (give details) I'm trying DistilBERT, and get an "invalid syntax error" when I run `examples/distillation/scripts/token_counts.py`. A closing bracket seems missing at line 27. ``` parser.add_argument("--data_file", type=str, default="data/dump.bert-base-uncased.pickle", help="The binarized dataset." ``` ## Environment * OS: Linux * Python version: 3.6.9 * PyTorch version: 1.2.0 * PyTorch Transformers version (or branch): 1.1.0 * Using GPU ? yes * Distributed of parallel setup ? no * Any other relevant information:
08-29-2019 16:23:57
08-29-2019 16:23:57
Hi! Indeed, the closing bracket was missing, fixed it with caf1d11! Thanks for the bug report :).
transformers
1,148
closed
Documentation auto-deploy
Documentation is now deployed automatically. @thomwolf @julien-c
08-29-2019 16:15:25
08-29-2019 16:15:25
Great, thanks @LysandreJik!
transformers
1,147
closed
GPT2-large fails to load the tokenizer
## 🐛 Bug Using: GPT2-Large ## To Reproduce When I load the gpt2-large model, in the same way as gpt2 and gpt2-medium, I get a NoneType when loading the tokenizer.
```
self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large",
                                               bos_token="_start_",
                                               unk_token='_unk_',
                                               eos_token="_eos_",
                                               sep_token="_delimiter_",
                                               cls_token="_classify_",
                                               pad_token='_pad_')
self.model = GPT2LMHeadModel.from_pretrained("gpt2-large")
```
At this point in the debugger, self.tokenizer == None is True. The issue becomes clear when I try to use the tokenizer.
```
File "gpt2_train.py", line 70, in load
    num_added_toks = self.tokenizer.add_special_tokens(special_tokens_dict)
AttributeError: 'NoneType' object has no attribute 'add_special_tokens'
```
## Expected behavior Get a tokenizer object ## Environment * OS: Linux * Python version: 3.7.3 * PyTorch version: 1.1.0 * PyTorch Transformers version (or branch): 1.1.0 (last github pull) * Using GPU ? yes * Distributed of parallel setup ? no * Any other relevant information:
08-29-2019 15:31:08
08-29-2019 15:31:08
Hello! Could you try to install from source (1.2.0)? Is there any warning in your terminal such as ``` Model name 'gpt2-large' was not found in model name list (gpt2, gpt2-medium). We assumed 'gpt2-large' was a path or url but couldn't... ``` or something along those lines?<|||||>Thank you! Solved with the update. And yes, checking again I got your mentioned error message.
transformers
1,146
closed
Attention values occasionally exceed 1 in BertModel
```Python
outputs = self.model(x, attention_mask = x_mask)  # Models outputs are now tuples
print(outputs[2][-1].max())
print((outputs[2][-1]>1).sum().item())   # Number of attention values > 1
print((outputs[2][-1]>-1).sum().item())  # Total number of attention values
```
```
tensor(1.0750, device='cuda:7', grad_fn=<MaxBackward1>)
1545
480000
```
08-29-2019 13:40:30
08-29-2019 13:40:30
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@thomwolf Did someone look into this issue?<|||||>No and I'm afraid we don't really have the bandwidth for that at the moment. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I think this is related to a feature of Dropout applied to the attention values. Dropout will scale up the values that are not zeroed out, which causes the problem you described. See this [pytorch/issues/5752](https://github.com/pytorch/pytorch/issues/5752). Setting the model to eval mode should produce normal attention values.
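A tiny sketch illustrating the dropout explanation above: in train mode the surviving values are rescaled by 1/(1-p), so attention "probabilities" can exceed their original value, while eval mode leaves them untouched:
```python
import torch

drop = torch.nn.Dropout(p=0.1)
probs = torch.full((5,), 0.2)   # a toy attention distribution

print(drop(probs))              # kept entries become 0.2 / 0.9 ≈ 0.2222 in train mode
drop.eval()
print(drop(probs))              # tensor([0.2000, 0.2000, 0.2000, 0.2000, 0.2000])
```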
transformers
1,145
closed
How to finetune GPT2
## ❓ Questions & Help Hi all, I would like to finetune the pretrained gpt2 model with a newspaper dataset. Do you know how that would be possible? I haven't found any training script for gpt2. Thanks a lot.
08-29-2019 12:32:40
08-29-2019 12:32:40
Hi, we have an example to fine-tune several models on [language modeling here](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_lm_finetuning.py). You can look into GPT-2's training on the CLM task, which is done on WikiText-2 in this example.<|||||>@LysandreJik would you please provide an example of usage? In the code you mentioned WikiText-2 only in doctoring. I believe this input file is a text file without any new line, right? Can't we pass an input file, with one sentence per line?<|||||>Good catch, it was initially made for WikiText-2 but it was generalized to be used with any text file. ~I'll add an example of usage shortly in our Documentation section.~ An example is now available in the [documentation](https://huggingface.co/pytorch-transformers/examples.html#causal-lm-fine-tuning-on-gpt-gpt-2-masked-lm-fine-tuning-on-bert-roberta). You can run it like so: ```bash python run_lm_finetuning.py \ --train_data_file=$TEXT_FILE \ --output_dir=$OUTPUT_DIRECTORY \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train ``` You don't need to remove any newline in your text file, it all depends on what you're looking for. If you're keeping the line returns, the model will learn to generate line returns as well. You can easily change the way the model inputs are built by changing the `TextDataset` class. Right now, with: ```py while len(tokenized_text) >= block_size: # Truncate in block of block_size self.examples.append(tokenizer.add_special_tokens_single_sentence(tokenized_text[:block_size])) tokenized_text = tokenized_text[block_size:] ``` We are simply creating token lists (of size `block_size`) that will then be fed to the model. We are not doing any special preprocessing (such as removing the line returns).<|||||>@LysandreJik Great thanks. The current version of ```TextDataset``` class will concat text from different articles (if any) together, right? I mean there is no notion of separate documents (articles) and it's all a continious collection of tokens? <|||||>That's true. If you're looking to get the best prediction out of it, you should be careful that unrelated pieces of text are not concatenated in a single input. We didn't do it in that example for simplicity's sake.<|||||>@LysandreJik in Line 76 of the code: ``` self.examples.append(tokenizer.add_special_tokens_single_sentence(tokenized_text[:block_size])) ``` If models other than Bert is used, then the tokenizer does not make use of special tokens, right? It is only applicable for Bert<|||||>Both BERT and RoBERTa use special tokens. For GPT and GPT-2, no special token will be added using this method, since, as you said, they do not make use of special tokens.<|||||>In the code you mentioned that we might want to add model specific padding. I wonder if got-2 has padding implemented? if yes, does it accept right-side zero padding similar to BERT? I want to finetune gpt-2 on a dataset which each instance length is generally less than 65 tokens, I want to make all the same length by adding 0 padding up to max_length of 128. any idea?<|||||>How we can add a [CLS] token to beginning of every inputs for gpt2 (and add it to vocabulary) and fine-tune it? I see an example of adding [CLS] in ```modeling_gpt2.py``` for the ```GPT2DoubleHeadsModel``` class. I wonder if we can finetune gpt2 with added [CLS] token?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. 
<|||||>> In the code you mentioned that we might want to add model specific padding. I wonder if got-2 has padding implemented? if yes, does it accept right-side zero padding similar to BERT? > I want to finetune gpt-2 on a dataset which each instance length is generally less than 65 tokens, I want to make all the same length by adding 0 padding up to max_length of 128. > any idea? I think you can use ANY tokens for padding as GPT-2 is causal. You just need to mask out these positions when calculating loss.<|||||>https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_lm_finetuning.py : this link doesn't seem to exist anymore? How do I finetune a GPT-2 on my custom data?<|||||>@y12uc231 The examples folder was reorganized to group by framework and task. You can now find examples for finetuning pytorch models on language modeling tasks [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). As the README notes, legacy scripts can be found [here](https://github.com/huggingface/transformers/tree/main/examples/legacy). <|||||>Sounds great, thanks! When I was trying to use the script above there is an option that says "--model_type MODEL_TYPE If training from scratch..", does it train the model from scratch or only finetunes it?<|||||>In the [legacy language modeling script](https://github.com/huggingface/transformers/blob/main/examples/legacy/run_language_modeling.py), to finetune, pass the checkpoint you wish to use with the `model_name_or_path` option. To train from scratch use the `model_type` option and leave `model_name_or_path` as `None`.
transformers
1,144
closed
where can i assign step in function lr_lambda of Class WramupLinearSchedule?
As I know, LambdaLR's get_lr function receives last_epoch as a parameter, so where does it get the step of the current training?

class WarmupLinearSchedule(LambdaLR):
    """ Linear warmup and then linear decay.
        Linearly increases learning rate from 0 to 1 over `warmup_steps` training steps.
        Linearly decreases learning rate from 1. to 0. over remaining `t_total - warmup_steps` steps.
    """
    def __init__(self, optimizer, warmup_steps, t_total, last_epoch=-1):
        self.warmup_steps = warmup_steps
        self.t_total = t_total
        super(WarmupLinearSchedule, self).__init__(optimizer, self.lr_lambda, last_epoch=last_epoch)

    def lr_lambda(self, step):
        if step < self.warmup_steps:
            return float(step) / float(max(1, self.warmup_steps))
        return max(0.0, float(self.t_total - step) / float(max(1.0, self.t_total - self.warmup_steps)))
08-29-2019 10:45:10
08-29-2019 10:45:10
Hi, to use a scheduler you have to tell it when to perform an optimization step, as detailed on the [pytorch documentation](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate). It increases the step by one every time you call `scheduler.step()`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
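A short sketch of the answer above, showing how each `scheduler.step()` call advances the step that `lr_lambda` receives (the model, warmup and total values are illustrative):
```python
import torch
from pytorch_transformers import AdamW, WarmupLinearSchedule

model = torch.nn.Linear(2, 2)
optimizer = AdamW(model.parameters(), lr=1e-3)
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=2, t_total=10)

for _ in range(4):
    optimizer.step()
    scheduler.step()            # increments the internal step counter by one
    print(scheduler.get_lr())   # learning rate follows the warmup, then linear decay
```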
transformers
1,143
closed
Why still using old implementation of apex fp16
## 🚀 Feature According to the Nvidia apex fp16 documentation: https://nvidia.github.io/apex/amp.html The new apex version's implementation doesn't require wrapping FusedAdam with FP16_Optimizer. So I wonder why the team still keeps the old implementation. Is there any special reason? If not, I will make a pull request updating the implementation to the new apex version.
08-29-2019 10:44:12
08-29-2019 10:44:12
AFAIK no particular reason, feel free to open a PR
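For context, a sketch of the newer apex amp API referred to in this issue (assuming apex is installed and a GPU is available); it replaces the old `FP16_Optimizer`/`FusedAdam` wrapping:
```python
import torch
from apex import amp

model = torch.nn.Linear(10, 10).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

loss = model(torch.randn(4, 10).cuda()).sum()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()      # gradients are scaled/unscaled by amp
optimizer.step()
```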
transformers
1,142
closed
FP16_Optimizer is not an Optimizer when fp_16
## 🐛 Bug When using fp_16, the scheduler should be: scheduler = WarmupLinearSchedule(optimizer.optimizer, warmup_steps=args.warmup_steps, t_total=num_train_optimization_steps) If not, you will get "TypeError: FP16_Optimizer is not an Optimizer".
08-29-2019 10:03:42
08-29-2019 10:03:42
I think this issue should be fixed on master (even though I'm not exactly sure which script you are referring to).<|||||>I already fixed this in my pull request. This error is caused by FP16_Optimizer; by using the new Apex implementation we deprecated FP16_Optimizer, so this bug no longer occurs.<|||||>Ok closing the issue then, thanks.
transformers
1,141
closed
Small modification of comment in the run_glue.py example
Add RoBERTa to the comment, as it was not explicit that RoBERTa doesn't use token_type_ids.
08-29-2019 07:55:20
08-29-2019 07:55:20
Indeed, missed that one, thank you.
transformers
1,140
closed
Can't Using Binarization Script for DistilBERT
## 🐛 Bug <!-- Important information --> I'm currently using DistilBERT and running into issues when I run scripts/binarized_data.py. I get the following error: Traceback (most recent call last): File "scripts/binarized_data.py", line 25, in <module> from ..utils import logger ValueError: attempted relative import beyond top-level package I haven't modified anything within the package.
08-29-2019 07:20:18
08-29-2019 07:20:18
Hello @SreeramV181, it should be fixed in commit 803c1cc4eacd38f1b854578d7d717b5e4a1ada47. Thanks for pointing that out! Victor
transformers
1,139
closed
Need multiple capabilities
Need both Generative Finetuning and Distilling Capabilities
08-29-2019 06:39:31
08-29-2019 06:39:31
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=h1) Report > Merging [#1139](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=desc) into [generative-finetuning](https://codecov.io/gh/huggingface/pytorch-transformers/commit/529a16dec6cc9bfcf8954a1b16546960f2fab6fa?src=pr&el=desc) will **increase** coverage by `0.87%`. > The diff coverage is `96.42%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## generative-finetuning #1139 +/- ## ========================================================= + Coverage 79.61% 80.48% +0.87% ========================================================= Files 42 46 +4 Lines 6918 7411 +493 ========================================================= + Hits 5508 5965 +457 - Misses 1410 1446 +36 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `57.12% <ø> (-0.42%)` | :arrow_down: | | [pytorch\_transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3JvYmVydGEucHk=) | `95.37% <ø> (-0.93%)` | :arrow_down: | | [pytorch\_transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfcm9iZXJ0YS5weQ==) | `75.89% <ø> (ø)` | :arrow_up: | | [pytorch\_transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `82.14% <ø> (-1.28%)` | :arrow_down: | | [pytorch\_transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `75.84% <ø> (ø)` | :arrow_up: | | [pytorch\_transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.66% <ø> (ø)` | :arrow_up: | | [pytorch\_transformers/tokenization\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <100%> (ø)` | | | [pytorch\_transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `79.01% <100%> (ø)` | :arrow_up: | | [pytorch\_transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.98% <100%> (ø)` | :arrow_up: | | [...torch\_transformers/tests/tokenization\_bert\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2JlcnRfdGVzdC5weQ==) | `98.66% <100%> (ø)` | :arrow_up: | | ... 
and [20 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=footer). Last update [529a16d...e0caab0](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
1,138
closed
loss explosion
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am using a Bert-CRF model to do a Named Entity Recognition task. I am using the average of the last four layers as the input of the CRF model. But the loss increases and becomes NaN within a few batches. Has anyone met that problem before? Any suggestions will be appreciated!
08-29-2019 04:45:34
08-29-2019 04:45:34
It was just because of a large learning rate.
transformers
1,137
closed
Cannot import DistilBert classes
Tried installing from the master, and couldn't do it. ![image](https://user-images.githubusercontent.com/347398/63883240-91c73e80-c988-11e9-9a22-241ac48479c3.png)
08-28-2019 18:41:09
08-28-2019 18:41:09
My bad. Forgot to ignore pip cache ``` pip install git+https://github.com/huggingface/pytorch-transformers --no-cache-dir ```
transformers
1,136
closed
swap order of optimizer.step() and scheduler.step()
The current code results in the following warning: ``` UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning) ```
08-28-2019 17:21:40
08-28-2019 17:21:40
indeed, thanks @adai183 <|||||>🤗
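A self-contained sketch of the corrected ordering this PR introduces: `optimizer.step()` before `scheduler.step()`, as required since PyTorch 1.1 (the model, data and hyperparameters below are placeholders):
```python
import torch
from pytorch_transformers import AdamW, WarmupLinearSchedule

model = torch.nn.Linear(4, 2)
optimizer = AdamW(model.parameters(), lr=1e-3)
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=10, t_total=100)

for batch in [torch.randn(8, 4) for _ in range(3)]:
    loss = model(batch).sum()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()    # optimizer first...
    scheduler.step()    # ...then the scheduler, so the first LR value is not skipped
    optimizer.zero_grad()
```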
transformers
1,135
closed
distilbert: fix number of hidden_size
Hi, this PR corrects the return value of the `hidden_size` function (which should be the dimension size, as it is used in all other models) :)
08-28-2019 16:10:27
08-28-2019 16:10:27
CI fails related or unrelated 🤔<|||||>Yes, good catch @stefan-it! Thanks
transformers
1,134
closed
Schedulers cause memory accumulation across folds in cross-validation?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am facing a strange issue when using the schedulers available in this library within a cross-validation loop. Basically, in each fold, I initialize a new model, optimizer, and scheduler. GPU memory accumulates until I eventually get a CUDA out of memory issue. The simplest example I could come up with to reproduce the error is: ```python import torch from pytorch_transformers import WarmupConstantSchedule, WarmupCosineSchedule, WarmupLinearSchedule, WarmupCosineWithHardRestartsSchedule # In my actual project, this is a for loop over the k-folds of k-fold cross-validation. # In this example I use a while just to demonstrate the OOM error. while True: net = torch.nn.Linear(10000, 10000) net = net.cuda() optimizer = torch.optim.Adam(net.parameters(), lr=1e-3) scheduler = WarmupCosineWithHardRestartsSchedule(optimizer, 1, 1000) # I also tried all the other schedulers. Same issue. # scheduler = WarmupConstantSchedule(optimizer, 1) # scheduler = WarmupCosineSchedule(optimizer, 1, 1000) # scheduler = WarmupLinearSchedule(optimizer, 1, 1000) del net, optimizer, scheduler ``` This will run until it (very quickly) uses up all 12GB on my Titan XP GPU. To make sure it was truly the initialization of the scheduler, I also tested ```python import torch from pytorch_transformers import WarmupCosineWithHardRestartsSchedule while True: net = torch.nn.Linear(10000, 10000) net = net.cuda() optimizer = torch.optim.Adam(net.parameters(), lr=1e-3) del net, optimizer ``` And did not see the memory accumulation or OOM error. My question(s) is/are: - Is this a known problem? - Am I doing something dumb? - How might I use a new scheduler for each fold of k-fold cross-validation in a way that doesn't lead to this issue? Thanks a lot.
08-28-2019 15:45:53
08-28-2019 15:45:53
I am facing the same issue. When I use the WarmupLinearSchedule, at the 7th training epoch I get a CUDA out of memory issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Running `import gc`, then `gc.collect()` and emptying the GPU's cache should solve the issue temporarily. See #1742
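Combining the suggestions in this thread, a sketch of per-fold cleanup for cross-validation (layer sizes and number of folds are illustrative; `torch.cuda.empty_cache()` is what "emptying the GPU's cache" refers to):
```python
import gc
import torch
from pytorch_transformers import WarmupLinearSchedule

for fold in range(5):
    net = torch.nn.Linear(1000, 1000).cuda()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    scheduler = WarmupLinearSchedule(optimizer, warmup_steps=1, t_total=1000)
    # ... train and evaluate this fold ...
    del net, optimizer, scheduler
    gc.collect()                 # drop the Python-side references
    torch.cuda.empty_cache()     # release cached GPU memory between folds
```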
transformers
1,133
closed
GPT2 Tokenizer decoding fails when the added tokens include a space
## 🐛 Bug After adding a new token that contains a space to the GPT2 tokenizer, the tokenizer produces an error at decoding time (see example code below). My current workaround is to preprocess that token to remove spaces before adding it and to postprocess the token after decoding. But I thought I'd share this in case this is something that the library can warn against (e.g. added tokens should not include spaces) or even support. <!-- Important information --> Model I am using (Bert, XLNet....): GPT2 Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. Run the following code: ```python from pytorch_transformers.tokenization_gpt2 import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('gpt2') tokenizer.add_tokens(["special token"]) encoded = tokenizer.encode("special token") tokenizer.decode(encoded) ``` 2. Currently, I get the error: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-5-f47101f92e14> in <module> ----> 1 tokenizer.decode(encoded) ~/miniconda3/lib/python3.7/site-packages/pytorch_transformers/tokenization_utils.py in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces) 665 token_ids, skip_special_tokens=skip_special_tokens 666 ) --> 667 text = self.convert_tokens_to_string(filtered_tokens) 668 if clean_up_tokenization_spaces: 669 text = self.clean_up_tokenization(text) ~/miniconda3/lib/python3.7/site-packages/pytorch_transformers/tokenization_gpt2.py in convert_tokens_to_string(self, tokens) 187 """ Converts a sequence of tokens (string) in a single string. """ 188 text = ''.join(tokens) --> 189 text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors=self.errors) 190 return text 191 ~/miniconda3/lib/python3.7/site-packages/pytorch_transformers/tokenization_gpt2.py in <listcomp>(.0) 187 """ Converts a sequence of tokens (string) in a single string. """ 188 text = ''.join(tokens) --> 189 text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors=self.errors) 190 return text 191 KeyError: ' ' ``` ## Expected behavior I expect the decoder to return the string `"special token"` ## Environment * OS: OSX * Python version: 3.7.3 * PyTorch version: 1.1.0 * PyTorch Transformers version (or branch): master (d06c5a2a0acd8525d969a8f8f5b968ec0ec110b4) * Using GPU ? No * Distributed of parallel setup ? No * Any other relevant information:
08-28-2019 13:19:20
08-28-2019 13:19:20
I can second this, I am seeing the same error<|||||>Indeed, there is a mismatch between added tokens and byte-level BPE tokens here. Fixing it with #1174.<|||||>Found that you can replace the space with `Ġ`. `Ċ` can replace `\n`.
transformers
1,132
closed
How to split consecutive numbers?
## ❓ Questions & Help In some NER datasets in BIO(ES) format, each digit of a consecutive number string is labeled with its own tag, e.g., "All Jhon need is only 10 yuan" will be labeled as "O, PER, O, O, O, O, O, O". In this case, "10" is labeled as "O, O". But in **BertTokenizer** and **PreTrainedTokenizer**, I can't find (or don't know of) effective parameters to deal with this situation. <!-- A clear and concise description of the question. -->
08-28-2019 10:07:06
08-28-2019 10:07:06
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,131
closed
mems output in XLNet
## 🐛 Bug Hi, I am trying to use memory of the last forward pass (mems arg) with XLNet. I am getting a tuple of None as mems output instead of a tuple of tensor. The same code with TransformerXL is running fine. Am I doing anything wrong or is this a bug ? Below is a short code snippet to reproduce the error. Many thanks, A ```python import torch from pytorch_transformers import TransfoXLTokenizer, TransfoXLModel, XLNetTokenizer, XLNetModel text = ['This is the first sentence. ', 'And this is another one'] # transformer-XL tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103') model = TransfoXLModel.from_pretrained('transfo-xl-wt103') mems = None for i in range(2): input_ids = torch.tensor(tokenizer.encode(text[i])).unsqueeze(0) outputs = model(input_ids, mems=mems) mems = outputs[1] # RUNS OK # XLNet tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased') model = XLNetModel.from_pretrained('xlnet-large-cased') mems = None for i in range(2): input_ids = torch.tensor(tokenizer.encode(text[i])).unsqueeze(0) outputs = model(input_ids, mems=mems) mems = outputs[1] # We get tuple of None in first model output, second forward crashes. # File "/home/asors/anaconda3/envs/psco/lib/python3.7/site-packages/pytorch_transformers/modeling_xlnet.py", line 858, in forward # mlen = mems[0].shape[0] if mems is not None else 0 ``` <!-- Important information --> Model I am using: XLNet Language I am using the model on (English, Chinese....): * OS: Ubuntu * Python version: 3.7 * PyTorch version: 1.1.0 * PyTorch Transformers version (or branch): 1.1.0
08-28-2019 09:29:10
08-28-2019 09:29:10
Indeed I see, we have to add:
- an explanation in XLNet's docstring that you should set the model configuration `mem_len` parameter if you want to use the memory (the answer to your main question). You can do for instance `model = XLNetModel.from_pretrained('xlnet-large-cased', mem_len=1024)` if you want a max memory of 1024 tokens. By default the model doesn't use memory (`mem_len = None`)
- a check in the code to avoid the error you are reporting.<|||||>Thanks for your reply and the new doc!
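For completeness, here is a short sketch of the working loop with memory enabled. It is a light adaptation of the snippet in the report above, using the `mem_len` setting from the previous comment; gradients are skipped since only the memory mechanism is being exercised.

```python
import torch
from pytorch_transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
# mem_len must be set explicitly; otherwise the returned mems are a tuple of None
model = XLNetModel.from_pretrained('xlnet-large-cased', mem_len=1024)

text = ['This is the first sentence. ', 'And this is another one']
mems = None
with torch.no_grad():
    for segment in text:
        input_ids = torch.tensor(tokenizer.encode(segment)).unsqueeze(0)
        outputs = model(input_ids, mems=mems)
        hidden_states, mems = outputs[0], outputs[1]  # mems is now a tuple of tensors
```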
transformers
1,130
closed
Output of BertModel does not match fixed feature vectors extracted from the last hidden layer
I want to fine-tune BERT for my task. However, the BERT output does not match the fixed feature vectors.

I extracted the fixed feature vectors like this: I use `extract_features.py` to extract the fixed feature vectors of the last hidden layer (layer -1). The command line is below:

`python extract_features.py --input_file=input.txt --output_file=output.json --vocab_file=model_path/vocab.txt --bert_config_file=model_path/bert_config.json --init_checkpoint=model_path/bert_model.ckpt --layers=-1 --max_seq_length=128 --batch_size=8`

The BERT model output is extracted as below. The model loading code is:

```python
model_dict = model.state_dict()
# load pretrained released bert model
pretrained_dict = torch.load('pytorch_model.bin')
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# update params
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
```

The model initialization code is:

```python
class A(nn.Module):
    def __init__(self):
        super(A, self).__init__()
        self.config = BertConfig.from_json_file('config.json')
        self.bert = BertModel(self.config)

    def inference(self, input_ids):
        all_encoder_layers, _ = self.bert(input_ids, token_type_ids=None, attention_mask=input_mask)
        return all_encoder_layers[-1]
```

(I've omitted some irrelevant code.) Then I output the tensor all_encoder_layers[-1]. all_encoder_layers[-1] doesn't match the feature vectors extracted by `extract_features.py`. I checked the params in my model: the BERT params have been loaded and they are consistent with the pretrained params. Also, the input sequence is consistent. Can anybody help me? Are there any settings I forgot?
08-28-2019 09:25:08
08-28-2019 09:25:08
Hi, have you solved this problem? And does anyone know the order of all_encoder_layers? Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,129
closed
Fine-tuning (BERT & RoBERTa) base outperforms large
## ❓ Questions & Help On all datasets I have used so far, the base model always outperforms the large one after fine-tuning. This is true for both BERT and RoBERTa. Why is that? Am I doing something wrong? Does large require far more epochs to train or a different learning rate?
08-28-2019 09:18:14
08-28-2019 09:18:14
Maybe you need more GPUs and a bigger batch size<|||||>I have 8 RTX 2080 GPUs, each with 10GB of memory. But yes, I use a batch size of 4. However, I need a sequence length of 512, so it is impossible to increase the batch size.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,128
closed
cannot import name 'RobertaConfig
## 🐛 Bug When I run run_glue.py with the RoBERTa model I get an ImportError: cannot import name 'RobertaConfig'. I can't run run_glue.py with any model since it fails to import RobertaConfig on line 34. Any ideas why?
08-28-2019 08:26:49
08-28-2019 08:26:49
Needed to update the pip package... <|||||>which pip package did you have to update? Same problem here... <|||||>@stellaywu what is your current `transformers` or `pytorch-transformers` version?<|||||>@LysandreJik it's transformers 2.0.0 <|||||>That's weird, transformers 2.0.0 works on a clean install in my environment. Could you please double check that the python code you're running is in the same environment? Something like this:
```py
import transformers
print(transformers.__version__) # '2.0.0'
print(transformers.RobertaConfig) # Does it crash with 'AttributeError: module transformers has no attribute RobertaConfig'?
```<|||||>you are right, it doesn't. I have probably mixed up environments. Thanks!<|||||>> @stellaywu what is your current `transformers` or `pytorch-transformers` version?

Dear, I have an error: `ImportError: cannot import name 'RobertaForQuestionAnswering' from 'pytorch_transformers'`
Actually, I have installed pytorch_transformers with `pip install pytorch-transformers`; however, the error still occurs. Any idea about this? <|||||>You should upgrade your transformers version, `RobertaForQuestionAnswering` was probably not present in this early a version:
```
!pip install transformers torch
from transformers import RobertaForQuestionAnswering
```<|||||>> You should upgrade your transformers version, `RobertaForQuestionAnswering` was probably not present in this early a version:
>
> ```
> !pip install transformers torch
> from transformers import RobertaForQuestionAnswering
> ```

Actually, I use pytorch_transformers, not transformers. Do you have any suggestions?<|||||>Installing version v1.1.0 or v1.2.0 of `pytorch-transformers`, I can also import `RobertaConfig`. RoBERTa was added in v1.1.0, so any version earlier than that will not have it. Is there a reason you're not using `transformers`? Most models are in `transformers`, as are most features, and a lot of bugs have been solved since `pytorch-transformers`.
transformers
1,127
closed
DistilBERT
Preparing the release for DistilBERT (smaller, faster, lighter, cheaper version of BERT)
08-28-2019 07:34:11
08-28-2019 07:34:11
@VictorSanh Thanks for adding this :heart: (I'm currently adding this model to Flair) One question: the BERT model configuration has a key `hidden_size`. For DilBERT it is now `dim`. Is this change intended 🤔<|||||># [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=h1) Report > Merging [#1127](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/d06c5a2a0acd8525d969a8f8f5b968ec0ec110b4?src=pr&el=desc) will **increase** coverage by `1.1%`. > The diff coverage is `96.79%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1127 +/- ## ========================================= + Coverage 79.61% 80.71% +1.1% ========================================= Files 42 46 +4 Lines 6898 7391 +493 ========================================= + Hits 5492 5966 +474 - Misses 1406 1425 +19 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `57.12% <ø> (-0.42%)` | :arrow_down: | | [pytorch\_transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `79.01% <ø> (ø)` | :arrow_up: | | [pytorch\_transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.66% <ø> (ø)` | :arrow_up: | | [pytorch\_transformers/tokenization\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <100%> (ø)` | | | [pytorch\_transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.98% <100%> (ø)` | :arrow_up: | | [...torch\_transformers/tests/tokenization\_bert\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2JlcnRfdGVzdC5weQ==) | `98.66% <100%> (ø)` | :arrow_up: | | [pytorch\_transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYXV0by5weQ==) | `56.36% <71.42%> (+0.36%)` | :arrow_up: | | [pytorch\_transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfY29tbW9uX3Rlc3QucHk=) | `94.73% <80%> (-0.21%)` | :arrow_down: | | [...ch\_transformers/tests/tokenization\_dilbert\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2RpbGJlcnRfdGVzdC5weQ==) | `95.23% <95.23%> (ø)` | | | 
[pytorch\_transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZGlzdGlsYmVydC5weQ==) | `96.73% <96.73%> (ø)` | | | ... and [7 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=footer). Last update [d06c5a2...e7706f5](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
1,126
closed
Bert initialization
When I train a BERT model from scratch, it does not converge and the loss does not decrease. It still doesn't work even after trying many different learning rates. But it works when I try the TF version. I checked the code and there is not much difference except the initialization. Does anybody have any ideas about this?
08-28-2019 02:01:59
08-28-2019 02:01:59
Hi, I think you'll find this [particular issue interesting](https://github.com/huggingface/pytorch-transformers/issues/202).

[Thomas Wolf's comment](https://github.com/huggingface/pytorch-transformers/issues/202#issuecomment-522613642) in particular may be of help.<|||||>> Hi, I think you'll find this [particular issue interesting](https://github.com/huggingface/pytorch-transformers/issues/202).
>
> [Thomas Wolf's comment](https://github.com/huggingface/pytorch-transformers/issues/202#issuecomment-522613642) in particular may be of help.

Thanks. But I am not talking about training a language model from scratch; I mean training on a GLUE task from scratch. So I think there is a big difference between the two.
transformers
1,125
closed
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1176: character maps to <undefined>
## 🐛 Bug <!-- Important information --> ```UnicodeDecodeError``` when using vocab file generated by ```GPT2Tokenizer```. Specifically, I created an instance of the ```GPT2Tokenizer``` by calling ```from_pretrained('gpt2')``` then saved the vocab and merges file for that instance to a local directory. When creating a new ```GPT2Tokenizer``` from the saved files I encounter a ```UnicodeDecodeError``` when reading from the vocab file. Model I am using (Bert, XLNet....): GPT2 Tokenizer Language I am using the model on (English, Chinese....): N/A The problem arise when using: * [x] the official example scripts: (give details) * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: ```python pretrained_tokenizer = GPT2Tokenizer.from_pretrained('gpt2') vocab_file, merges_file = pretrained_tokenizer.save_vocabulary('.') new_tokenizer = GPT2Tokenizer(vocab_file, merges_file) # <- UnicodeDecodeError occurs here ``` ## Expected behavior I expect ```new_tokenizer``` to initialize a tokenizer with the same behavior as ```pretrained_tokenizer```. ## Environment * OS: Windows 10 * Python version: 3.6.8 * PyTorch version: 1.1.0 * PyTorch Transformers version (or branch): 1.1.0 ## Additional context seems likely to be a bug during encoding in ```save_vocabulary```
08-27-2019 20:37:09
08-27-2019 20:37:09
Looking at [these lines](https://github.com/huggingface/pytorch-transformers/blob/07681b6b5859b630077b742b2f06d440869f17e3/pytorch_transformers/tokenization_gpt2.py#L108-L115), the issue seems to be the file is encoded in utf-8 but read using a different encoder. Changing line 112 to `self.encoder = json.load(open(vocab_file, 'r', encoding='utf-8'))` should fix this issue.<|||||>Thanks this is fixed on master now with #1074<|||||>Is there a test for which encoder should be used?<|||||>Encountered same error and had the same doubt. Used 'iso-8859-1' as it suits me almost anytime. Worked just fine. @ChebonRunner
transformers
1,124
closed
XLNet resize embedding size ERROR
## ❓ Questions & Help I add new tokens to XLNetLMHeadModel and use resize function ``` tokenizer.add_tokens(["<token1>", "<token2>"]) model.resize_token_embeddings(len(tokenizer)) ``` But when running, the following error occurs: ``` Traceback (most recent call last): ... File "/nas/home/jsun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "../pytorch_transformers/modeling_xlnet.py", line 1059, in forward logits = self.lm_loss(transformer_outputs[0]) File "/nas/home/jsun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/nas/home/jsun/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 92, in forward return F.linear(input, self.weight, self.bias) File "/nas/home/jsun/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1410, in linear output += bias RuntimeError: The size of tensor a (32003) must match the size of tensor b (32000) at non-singleton dimension 2 ``` It is because in `resize_token_embeddings`, it changes the embedding size and calls `tie_weight` function to resize the LM head weight. But it forgot to change the size of `bias`, since XLNet has ``` self.lm_loss = nn.Linear(config.d_model, config.n_token, bias=True) ``` while other models have bias=False
08-27-2019 18:52:10
08-27-2019 18:52:10
Should be fixed on master by @LysandreJik's PR!
transformers
1,123
closed
Extracting Features Example
## ❓ Questions & Help Hello. Sorry if my question is out of date or I just didn't find it, but I'm looking for the example/extract_features.py that was supposed to be in this repo (as mentioned in this stackoverflow [post](https://stackoverflow.com/questions/55369821/how-to-train-a-neural-network-model-with-bert-embeddings-instead-of-static-embed)) and can't find it anymore. Was it just in an earlier release and got scrapped? Thank you in advance for any help
08-27-2019 15:21:11
08-27-2019 15:21:11
Yes it was removed from the repo. I think we’ll add it again (and update it to pytorch-transformers) since several people have been missing it. Cc @LysandreJik<|||||>That's nice to hear. Thank you very much <|||||>Any update on the extract_features script? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
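Until the script is back, here is a minimal sketch of extracting fixed feature vectors directly with the current API; the sentence and layer choice are just placeholders, and special-token handling is kept deliberately simple:

```python
import torch
from pytorch_transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# output_hidden_states=True makes the model also return the hidden states of every layer
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
model.eval()

tokens = tokenizer.tokenize("[CLS] Who was Jim Henson ? [SEP]")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    last_hidden_state, pooled_output, hidden_states = model(input_ids)

features = hidden_states[-1]  # fixed feature vectors from the last layer, shape [1, seq_len, 768]
# hidden_states[-2], hidden_states[-3], ... give earlier layers if you want to pool several of them
```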
transformers
1,122
closed
PyTorch library dependency
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): XLNet Language I am using the model on (English, Chinese....): English
```
pytorch_transformers\modeling_xlnet.py in forward(self, input_ids, token_type_ids, input_mask, attention_mask, mems, perm_mask, target_mapping, head_mask)
    925    # `1` indicates not in the same segment [qlen x klen x bsz]
    926    seg_mat = (token_type_ids[:, None] != cat_ids[None, :]).long()
--> 927    seg_mat = F.one_hot(seg_mat, num_classes=2).to(dtype_float)
    928    else:
    929    seg_mat = None

AttributeError: module 'torch.nn.functional' has no attribute 'one_hot'
```
## To Reproduce Steps to reproduce the behavior: 1. Install pytorch version 1.0.1 2. Use an XLNet model to run a prediction. `torch.nn.functional`'s `one_hot` function was introduced in [1.1.0](https://pytorch.org/docs/1.1.0/nn.html#one-hot), while [requirements.txt](https://github.com/huggingface/pytorch-transformers/blob/master/requirements.txt#L2) only requires 1.0.0+
08-27-2019 14:59:49
08-27-2019 14:59:49
I have the same issue with `pytorch 1.0.1`. When pytorch version is upgraded (to `1.2.0` for instance), this error is removed, however I get an import error: ``` ImportError: /opt/conda/lib/python3.7/site-packages/fused_layer_norm_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE ``` <|||||>> I have the same issue with `pytorch 1.0.1`. When pytorch version is upgraded (to `1.2.0` for instance), this error is removed, however I get an import error: > > ``` > ImportError: /opt/conda/lib/python3.7/site-packages/fused_layer_norm_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE > ``` My issue is fixed after upgrading pytorch to 1.2.0. Which model/ function do you call ?<|||||>For some reason, I had to reinstall apex after upgrading pytorch. <|||||>> For some reason, I had to reinstall apex after upgrading pytorch. So have you fixed your ImportError? I met this error when initializing BertAdam. I will try to migrate my code from pytorch-pretrained-bert to pytorch-transformers.<|||||>Yes, upgraded pytorch, then reinstalled apex.<|||||>> Yes, upgraded pytorch, then reinstalled apex. Thank you. I will try.<|||||>Just to confirm, I had the same issue (ImportError). As @tayciryahmed said, re-installing apex would do the trick as it did for me :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,121
closed
Using pretrained XLNET for long sentences
## ❓ Questions & Help Is it possible to feed the pretrained large XLNet model sentences longer than 512 tokens? If not, is there any model which supports that?
08-27-2019 13:23:54
08-27-2019 13:23:54
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,120
closed
Change attention mask dtype to be bool. Fix #1119
08-27-2019 11:19:12
08-27-2019 11:19:12
Yes it's better, thanks also for that @CrafterKolyan!
transformers
1,119
closed
Tons of warnings on use of TransfoXLModel. masked_fill_ input dtype torch.uint8 should be changed to torch.bool
## 🕑 Usage of deprecated behaviour Using the example from the documentation page: https://huggingface.co/pytorch-transformers/model_doc/transformerxl.html#pytorch_transformers.TransfoXLModel
```
import torch
from pytorch_transformers import *

tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLModel.from_pretrained('transfo-xl-wt103')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)
outputs = model(input_ids)
last_hidden_states, mems = outputs[:2]
```
I get tons of the same warning:
> /pytorch/aten/src/ATen/native/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.

![image](https://user-images.githubusercontent.com/9883873/63766644-4b180c80-c8d4-11e9-890c-9da640b5c0e8.png)

Created #1120 to fix it.
08-27-2019 11:11:29
08-27-2019 11:11:29
transformers
1,118
closed
Documentation fix #1117
Rename parameter in documentation + Delete its second occurrence.
08-27-2019 09:22:50
08-27-2019 09:22:50
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=h1) Report > Merging [#1118](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e08c01aa1ad63efff83548ea69d5ba3ce4a75acc?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1118 +/- ## ======================================= Coverage 79.61% 79.61% ======================================= Files 42 42 Lines 6898 6898 ======================================= Hits 5492 5492 Misses 1406 1406 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `75.84% <ø> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=footer). Last update [e08c01a...26bda77](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Indeed!
transformers
1,117
closed
Wrong parameter name in documentation
Documentation web page: https://huggingface.co/pytorch-transformers/model_doc/gpt2.html#pytorch_transformers.GPT2DoubleHeadsModel See `Inputs -> multiple_choice_labels`. There is actually no such parameter in the `GPT2DoubleHeadsModel.forward` method. It was renamed to `mc_labels`. It also appears twice in the documentation, which seems to be just a copy-paste error. Please change `multiple_choice_labels` to `mc_labels` and delete the second occurrence of this parameter in the documentation. Created #1118 to fix the documentation. You may also squash it with the fix for #1115.
08-27-2019 09:20:09
08-27-2019 09:20:09
Yes!
transformers
1,116
closed
Delete nonexistent parameter from documentation fix #1115
Changed documentation of GPT2Model, GPT2LMHeadModel and GPT2DoubleHeadsModel
08-27-2019 09:11:09
08-27-2019 09:11:09
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=h1) Report > Merging [#1116](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e08c01aa1ad63efff83548ea69d5ba3ce4a75acc?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1116 +/- ## ======================================= Coverage 79.61% 79.61% ======================================= Files 42 42 Lines 6898 6898 ======================================= Hits 5492 5492 Misses 1406 1406 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `75.84% <ø> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=footer). Last update [e08c01a...c8933bb](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks @CrafterKolyan!
transformers
1,115
closed
No parameter which is presented in documentation
Documentation web page: https://huggingface.co/pytorch-transformers/model_doc/gpt2.html#pytorch_transformers.GPT2Model See `Inputs -> attention_mask`. There is actually no `attention_mask` parameter in the `GPT2Model.forward` method (see https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_gpt2.py#L473). Of course, trying to provide the `attention_mask` parameter to the model raises an exception:
> TypeError: forward() got an unexpected keyword argument 'attention_mask'

Please either add the `attention_mask` parameter to `GPT2Model.forward` or delete it from the documentation. Same for https://huggingface.co/pytorch-transformers/model_doc/gpt2.html#pytorch_transformers.GPT2LMHeadModel and for https://huggingface.co/pytorch-transformers/model_doc/gpt2.html#pytorch_transformers.GPT2DoubleHeadsModel I've created #1116 in case you just want to delete it from the documentation.
08-27-2019 09:03:33
08-27-2019 09:03:33
Indeed, thanks for that!<|||||>Sorry, but I still get this error, and I can see the parameter on the forward functions at the source code. Am I doing something wrong ? Thanks for this amazing contribution!<|||||>Can you open a new issue with details on the error? `attention_mask` has been added to GPT2 now so it's not the same situation.<|||||>Sorry, my problem was that there was no attention_mask parameter on the forward function, but I can see it now. Thanks
transformers
1,114
closed
Does RoBERTa needs input_type_ids as Bert ?
## ❓ Questions & Help Hello, I'm trying to fine-tune RoBERTa for a sentence-pair classification task. With BERT, I used the token_type_ids to identify sentences A and B. But it seems that the RoBERTa "token_type" embedding is configured with a vocabulary of size 1, from what I understand of the model summary: (token_type_embeddings): Embedding(1, 768). So, does RoBERTa need token_type_ids? If not, why is there an embedding layer for token_type_ids? The documentation of the RobertaModel class omits the token_type_ids present among the parameters: [modeling_roberta.py](https://github.com/huggingface/pytorch-transformers/blob/e08c01aa1ad63efff83548ea69d5ba3ce4a75acc/pytorch_transformers/modeling_roberta.py#L97). Thank you in advance.
08-27-2019 07:52:14
08-27-2019 07:52:14
RoBERTa does not use `token_type_ids`. We made a choice to still have an embedding layer (which is all zeros, so they don't contribute anything additively) so that we use the exact same implementation as BERT.<|||||>Understood, thanks for the quick answer ! :)
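As a rough illustration of the point above (the model size and helper names follow the snippets used in other issues from this period of the library, so treat them as assumptions): a RoBERTa sentence pair is fed with `input_ids` only, and `token_type_ids` is simply left out.

```python
import torch
from pytorch_transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaModel.from_pretrained('roberta-base')

ids_a = tokenizer.encode("Jim Henson was a puppeteer")
ids_b = tokenizer.encode("He created the Muppets")
input_ids = torch.tensor([tokenizer.add_special_tokens_sentences_pair(ids_a, ids_b)])

outputs = model(input_ids)      # no token_type_ids argument is passed
sequence_output = outputs[0]
```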
transformers
1,113
closed
[Help] How to do mean/max pooling to get sentence embedding?
Hi, I read a few questions raised before regarding sentence embeddings and came across mean/max pooling suggestions. I'm not too sure how to go about doing mean/max pooling. Is my implementation of mean pooling correct? I simply took the sum of all the token vectors and divided it by the total sequence length. ![image](https://user-images.githubusercontent.com/46053996/63739517-04201c00-c8c0-11e9-97ca-259efc8c60d9.png) Thanks :)
08-27-2019 03:46:58
08-27-2019 03:46:58
Hi! Yes this would work, but it would certainly be slower than using [the torch `mean` function](https://pytorch.org/docs/stable/torch.html#torch.mean)!<|||||>Understood thanks! :)
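For anyone else looking for a concrete version, here is a small sketch of mean pooling done with tensor ops instead of a Python loop; the attention-mask handling is an assumption about the usual padded-batch setup rather than something taken from the screenshot above.

```python
import torch
from pytorch_transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute")])
attention_mask = torch.ones_like(input_ids)   # all ones here since there is no padding

with torch.no_grad():
    last_hidden_state = model(input_ids, attention_mask=attention_mask)[0]  # [batch, seq_len, hidden]

# Masked mean pooling over the token dimension
mask = attention_mask.unsqueeze(-1).float()                         # [batch, seq_len, 1]
sentence_embedding = (last_hidden_state * mask).sum(1) / mask.sum(1)
# With no padding this is the same as torch.mean(last_hidden_state, dim=1)
```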
transformers
1,112
closed
Implement the QuickStart but got an error when using BertForMaskedLM to predict a masked token
## ❓ Questions & Help I was running the BERT example following the instructions in the pytorch-transformers docs, but when using BertForMaskedLM to predict a masked token, an error occurred: "INFO:pytorch_transformers.modeling_utils:Weights from pretrained model not used in BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']" Any idea how to fix this? Thanks!
08-27-2019 03:33:10
08-27-2019 03:33:10
Hi, this is not an error, but a warning. This warning tells you that some of the weights that were in your pretrained model were not used by the model with which you loaded them. In this case, it concerns the classification layer weight/bias.<|||||>I see, thanks!
transformers
1,111
closed
Can we get a 1.1.1 release so that AutoRoberta is included?
See issue title. I'm about to open a PR to add roberta to allennlp; it'd be nice to have a released version to depend on, instead of a github commit.
08-26-2019 22:52:25
08-26-2019 22:52:25
Yes! @LysandreJik <|||||>Sounds good, we'll release one soon. Probably around the end of the week or early next week.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,110
closed
Torch.hub now based on AutoModels - Updating AutoModels with AutoModelWithLMHead, Sequence Classification and Question Answering
Added new AutoModels, as long as the accompanying tests. Updated the TorchHub configuration file to redirect to those AutoModels.
08-26-2019 20:10:43
08-26-2019 20:10:43
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=h1) Report > Merging [#1110](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/df9d6effae43e92761eb92540bc45fac846789ee?src=pr&el=desc) will **decrease** coverage by `0.04%`. > The diff coverage is `73.33%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1110 +/- ## ========================================== - Coverage 79.61% 79.56% -0.05% ========================================== Files 42 42 Lines 6898 6965 +67 ========================================== + Hits 5492 5542 +50 - Misses 1406 1423 +17 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/tests/modeling\_auto\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfYXV0b190ZXN0LnB5) | `98.18% <100%> (+2.18%)` | :arrow_up: | | [pytorch\_transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYXV0by5weQ==) | `51.72% <54.54%> (-4.28%)` | :arrow_down: | | [pytorch\_transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `84.18% <0%> (+0.76%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=footer). Last update [df9d6ef...84a3a96](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Not sure torch.hub will work directly. We should check with @VictorSanh.<|||||>Just checked the integration with Pytorch Hub, it works on my end. For example, you can try it out with: ```python import torch torch.hub.load('huggingface/pytorch-transformers:automodels', 'autoModelWithLMHead', 'distilbert-base-uncased') ```<|||||>Ok, I think we are all good with this. Happy to have `torch.hub` integration again.
transformers
1,109
closed
keeping encoder fixed from pretrained model but changing classifier
Hi, I need to pretrain BERT on one dataset and then fine-tune it on other datasets; basically, I want to remove the classifier from the first stage and substitute it with a new one with a specific number of labels. With the current code I get an error when loading the pretrained model. Could you please assist me with how I can do this? I have a deadline soon and would really appreciate your help urgently.
thanks
Best
Julia

```
model = model_class.from_pretrained(args.model_name_or_path, from_tf=bool('.ckpt' in args.model_name_or_path), config=config)
File "julia/libs/anaconda3/envs/transformers/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 461, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertRUBIForSequenceClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([174, 768]) from checkpoint, the shape in current model is torch.Size([3, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([174]) from checkpoint, the shape in current model is torch.Size([3]).
```
08-26-2019 18:50:00
08-26-2019 18:50:00
Closing this as it is a duplicate of the issue #1108 you opened 4 hours ago.
transformers
1,108
closed
using BERT as pretraining with custom classifier
Hi, I need to pretrain BERT on one dataset and then fine-tune it on other datasets; basically, I want to remove the classifier from the first stage and substitute it with a new one with a specific number of labels. With the current code I get an error when loading the pretrained model. Could you please assist me with how I can do this?
thanks
Best
Julia

```
model = model_class.from_pretrained(args.model_name_or_path, from_tf=bool('.ckpt' in args.model_name_or_path), config=config)
File "julia/libs/anaconda3/envs/transformers/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 461, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertRUBIForSequenceClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([174, 768]) from checkpoint, the shape in current model is torch.Size([3, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([174]) from checkpoint, the shape in current model is torch.Size([3]).
```
08-26-2019 18:33:26
08-26-2019 18:33:26
Hi, it seems you have saved a model with a classification head of dimension `174 x 768`. You're then trying to load this model with a different classification head of dimension `3 x 768`, is that correct? If you are trying to save/load the model without the classification head, you can simply save the BertModel **without the classification head**, and then load it from here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@LysandreJik : I have the same problem, I trained the ner model and now want to fine tune on other datasets. `Error(s) in loading state_dict for XLMRobertaForTokenClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([9, 768]) from checkpoint, the shape in current model is torch.Size([24, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([9]) from checkpoint, the shape in current model is torch.Size([24])` Can you please let me know how to "If you are trying to save/load the model without the classification head, you can simply save the BertModel without the classification head, and then load it from here."
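To make that suggestion concrete, here is a rough sketch (the paths and the 3-label figure are made up for illustration): keep only the `bert` encoder of the fine-tuned model, then create a fresh classification head for the next dataset.

```python
from pytorch_transformers import BertForSequenceClassification

# 1) after training on the first dataset (174 labels), save only the shared encoder
first_model = BertForSequenceClassification.from_pretrained('./first_task_output')
first_model.bert.save_pretrained('./bert_encoder_only')

# 2) start the next task from that encoder with a brand-new, randomly initialized head
second_model = BertForSequenceClassification.from_pretrained('./bert_encoder_only', num_labels=3)
```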
transformers
1,107
closed
Changing the _read_tsv method in class DataProcessor
## 🚀 Feature I would request changing the class method to the following:

```python
@classmethod
def _read_tsv(cls, input_file, quotechar=None):
    """Reads a tab separated value file."""
    lines = []
    df = pd.read_csv(input_file, delimiter='\t')
    for line in df.values:
        lines.append(line)
    return lines
```

(This assumes `import pandas as pd` at the top of the file.)

## Motivation The reader used to read the files incorrectly.
08-26-2019 18:19:39
08-26-2019 18:19:39
Hi, we need more information about what script you are talking about.<|||||>The file path is: /examples/utils_glue.py
The class is DataProcessor(object):

```python
@classmethod
def _read_tsv(cls, input_file, quotechar=None):
    """Reads a tab separated value file."""
    lines = []
    df = pd.read_csv(input_file, delimiter='\t')
    for line in df.values:
        lines.append(line)
    return lines
```<|||||>We also need a lot more details and a clear example of why you think the reader used to read the files incorrectly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,106
closed
sample_text.txt is broken (404 ERROR)
## ❓ Questions & Help When I try to access the sample_text.txt file at this link, I get an nginx 404 server error. https://huggingface.co/pytorch-transformers/samples/sample_text.txt
08-26-2019 16:49:01
08-26-2019 16:49:01
Hi, where did you retrieve this link from?<|||||>Hey this is listed on the huggingface.co documentation page here ([https://huggingface.co/pytorch-transformers/examples.html?highlight=sample_text](https://huggingface.co/pytorch-transformers/examples.html?highlight=sample_text))<|||||>This one is now in the tests at: https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/tests/fixtures/sample_text.txt We'll fix the doc, thanks!
transformers
1,105
closed
How to get pooler state's (corresponds to CLS token) attention vector?
The following model definition returns the attention vectors for tokens corresponding to the input sequence length, i.e. ```x.size(1)```. How do I procure the attention vector of the pooler state (the output embedding corresponding to the CLS token)?

```python
model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True)
outputs = model(x, attention_mask=x_mask)
last_layer_attentions = outputs[2][-1]  # [batch_size, num_heads, x.size(1), x.size(1)]
# I want the attention vector for the pooler state, i.e. [batch_size, num_heads, 1, x.size(1)]
```
08-26-2019 12:30:41
08-26-2019 12:30:41
Hi, the pooler takes as input the last layer hidden-state of the first token of the sentence (the `[CLS]` token). So the attention used to compute the pooler input is just the attention for this token.<|||||>@thomwolf If my understanding is right, the last layer's attention vector should be of size ```[batch_size, num_heads, (x.size(1)+1), (x.size(1)+1)]```, corresponding to the **[CLS]** embedding and x.size(1) token embeddings. However, ```output[2][-1]``` only returns ```[batch_size, num_heads, x.size(1), x.size(1)]``` dimensional attention map, which, I am guessing, corresponding to the input sequence (**x.size(1)**) and not the **[CLS]** token. How do I get the attention vector corresponding to the **[CLS]** token? Also, can you mention which of the two **x.size(1)** axes corresponds to the input layer and the output layer?
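Put differently, there is no extra (seq_len + 1)-sized map to look for: if `[CLS]` is the first token of your input (the model does not add it for you), its row is already index 0 of the returned attention tensor, where dimension 2 indexes the attending (query) token and dimension 3 the attended-to token. A hedged sketch of slicing it out:

```python
import torch
from pytorch_transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True)

tokens = tokenizer.tokenize("[CLS] hello , my dog is cute [SEP]")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    outputs = model(input_ids)

last_layer_attentions = outputs[-1][-1]            # [batch, num_heads, seq_len, seq_len]
cls_attention = last_layer_attentions[:, :, 0, :]  # attention paid by [CLS] to every token
```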
transformers
1,104
closed
TensorFlow 2.0 - Testing with a few Bert architectures
This PR tests how easy it would be to incorporate TF 2.0 models in the current library: - adds a few models: `TFBertPreTrainedModel`, `TFBertModel`, `TFBertForPretraining`, `TFBertForMaskedLM`, `TFBertForNextSentencePrediction`, - weights conversion script to convert the PyTorch weights (only the `bert-base-uncased` model is up on our AWS S3 bucket for the moment), - a few tests. The library is (very) slightly reorganized to allow for this, mostly by spinning configuration classes out of (PyTorch) modeling classes to allow reusability between PyTorch and TF 2.0 models. With TF 2.0 Keras imperative interface and Eager, the workflow and models are suprisingly similar: ```python import numpy import torch import tensorflow as tf from pytorch_transformers import BertModel, TFBertModel, BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') pytorch_model = BertModel.from_pretrained('bert-base-uncased') tf_model = TFBertModel.from_pretrained('bert-base-uncased') text = "[CLS] Who was Jim Henson ? Jim [MASK] was a puppeteer [SEP]" tokens = tokenizer.encode(text) pytorch_inputs = torch.tensor([tokens]) tf_inputs = tf.constant([tokens]) with torch.no_grad(): pytorch_outputs = pytorch_model(pytorch_inputs) tf_output = tf_model(tf_inputs, training=False) numpy.amax(numpy.abs(pytorch_outputs[0].numpy() - tf_output[0].numpy())) # >>> 2.861023e-06 => we are good, a few 1e-6 is the expected difference # between TF and PT arising from internals computation ops ``` If you want to play with this, you can install from the `tf` branch like this: - install TF 2.0: `pip install tensorflow==2.0.0-rc0` - install pytorch-transformers from the `tf` branch: `pip install https://github.com/huggingface/pytorch-transformers/archive/tf.zip`
08-26-2019 11:42:28
08-26-2019 11:42:28
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104?src=pr&el=h1) Report > Merging [#1104](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/df9d6effae43e92761eb92540bc45fac846789ee?src=pr&el=desc) will **decrease** coverage by `0.49%`. > The diff coverage is `81.56%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1104 +/- ## ========================================= - Coverage 79.61% 79.12% -0.5% ========================================= Files 42 56 +14 Lines 6898 7654 +756 ========================================= + Hits 5492 6056 +564 - Misses 1406 1598 +192 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/tests/modeling\_xlnet\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxuZXRfdGVzdC5weQ==) | `95.91% <100%> (+0.02%)` | :arrow_up: | | [pytorch\_transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `55.2% <100%> (-2.33%)` | :arrow_down: | | [pytorch\_transformers/tests/modeling\_xlm\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxtX3Rlc3QucHk=) | `71.2% <100%> (+0.23%)` | :arrow_up: | | [pytorch\_transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfb3BlbmFpLnB5) | `73.35% <100%> (-1.42%)` | :arrow_down: | | [pytorch\_transformers/tests/modeling\_auto\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfYXV0b190ZXN0LnB5) | `96.15% <100%> (+0.15%)` | :arrow_up: | | [pytorch\_transformers/tests/modeling\_gpt2\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfZ3B0Ml90ZXN0LnB5) | `85% <100%> (+0.78%)` | :arrow_up: | | [pytorch\_transformers/tests/conftest.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvY29uZnRlc3QucHk=) | `91.66% <100%> (+1.66%)` | :arrow_up: | | [pytorch\_transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `74.74% <100%> (-1.1%)` | :arrow_down: | | [pytorch\_transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfcm9iZXJ0YS5weQ==) | `75.45% <100%> (-0.44%)` | :arrow_down: | | [pytorch\_transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `78.04% <100%> (-0.98%)` | :arrow_down: | | ... 
and [39 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104?src=pr&el=footer). Last update [df9d6ef...3231797](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1104?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
1,103
closed
Roberta semantic similarity
## ❓ Questions & Help Hi I am trying to use Roberta for semantic similarity. have 2 questions Can you validate my code to check if its able to correctly execute sentence-pair classification ? I want to train the roberta-large-mnli model on my own corpus, how do I do this? Code : ```python from pytorch_transformers import RobertaModel, RobertaTokenizer from pytorch_transformers import RobertaForSequenceClassification, RobertaConfig config = RobertaConfig.from_pretrained('roberta-large') config.num_labels = len(list(label_to_ix.values())) tokenizer = RobertaTokenizer.from_pretrained('roberta-large-mnli') model = RobertaForSequenceClassification(config) def prepare_features(seq_1,seq_2): aa=tokenizer.encode(seq_1) bb=tokenizer.encode(seq_2) zz=tokenizer.add_special_tokens_sentences_pair(aa,bb) input_ids=torch.tensor(zz) input_mask = [1] * len(zz) return torch.tensor(input_ids).unsqueeze(0), input_mask class Intents(Dataset): def __init__(self, dataframe): self.len = len(dataframe) self.data = dataframe def __getitem__(self, index): utterance = self.data.q1[index] sent2 = self.data.q2[index] label = self.data.label[index] X, _ = prepare_features(utterance,sent2) y = label_to_ix[self.data.label[index]] return X, y def __len__(self): return self.len train_size = 0.95 train_dataset=dataset.sample(frac=train_size,random_state=200).reset_index(drop=True) test_dataset=dataset.drop(train_dataset.index).reset_index(drop=True) training_set = Intents(train_dataset) testing_set = Intents(test_dataset) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.cuda() # Parameters params = {'batch_size': 1, 'shuffle': True, 'num_workers': 1} training_loader = DataLoader(training_set, **params) testing_loader = DataLoader(testing_set, **params) loss_function = nn.CrossEntropyLoss() learning_rate = 1e-05 optimizer = optim.Adam(params = model.parameters(), lr=learning_rate) max_epochs = 3 model = model.train() for epoch in tqdm_notebook(range(max_epochs)): print("EPOCH -- {}".format(epoch)) for i, (sent, label) in enumerate(training_loader): optimizer.zero_grad() sent = sent.squeeze(0) if torch.cuda.is_available(): sent = sent.cuda() label = label.cuda() output = model.forward(sent)[0] _, predicted = torch.max(output, 1) loss = loss_function(output, label) loss.backward() optimizer.step() if i%100 == 0: correct = 0 total = 0 for sent, label in testing_loader: sent = sent.squeeze(0) if torch.cuda.is_available(): sent = sent.cuda() label = label.cuda() output = model.forward(sent)[0] _, predicted = torch.max(output.data, 1) total += label.size(0) correct += (predicted.cpu() == label.cpu()).sum() accuracy = 100.00 * correct.numpy() / total print('Iteration: {}. Loss: {}. Accuracy: {}%'.format(i, loss.item(), accuracy)) def get_reply(msg,msg1): model.eval() input_msg, _ = prepare_features(msg,msg1) if torch.cuda.is_available(): input_msg = input_msg.cuda() output = model(input_msg) ``` Thanks
08-26-2019 11:19:29
08-26-2019 11:19:29
Hi, the provided `run_glue` example shows how to train/use `RoBERTa` for sentence pairs classification on the GLUE tasks (including MNLI).<|||||>Hi, thanks for your help. I have executed the run_glue.py file on my custom dataset using the following command:

`python run_glue.py --model_type roberta --model_name_or_path roberta-large-mnli --task_name=mnli --do_train --do_eval --do_lower_case --data_dir=input_roberta/ --max_seq_length 28 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --output_dir=output_roberta/ --save_steps=350 --overwrite_output_dir --overwrite_cache`

input_roberta has my train.tsv (custom sentence-pair corpus with labels). The model gets trained, however it asks for the dev_matched and dev_mismatched files. How do I provide these? And how do I predict test sentence pairs using the generated model weights? Thanks<|||||>However, despite the above issue, the model has generated the weights and other files. Using them I have loaded the model with the code below:

```python
output_model_file = './output_roberta/pytorch_model.bin'
output_config_file = "./output_roberta/config.json"
output_vocab_file = "./output_roberta/vocab.json"
config = RobertaConfig.from_json_file(output_config_file)
model = RobertaForSequenceClassification(config)
state_dict = torch.load(output_model_file)
model.load_state_dict(torch.load(model_path))
tokenizer = RobertaTokenizer(output_vocab_file, merges_file="./output_roberta/merges.txt")
aa = tokenizer.encode("what is my sales")
bb = tokenizer.encode("top store by net sales")
zz = tokenizer.add_special_tokens_sentences_pair(aa, bb)
input_ids = torch.tensor(zz).unsqueeze(0)
model.eval()
output = model(input_ids)
```

output: (tensor([[-5.2188, 2.2234, 2.4296]], grad_fn=<AddmmBackward>),)

For any sentence pair it gives the same output as above. Can you please help? Thanks <|||||>Hi, is there any update on this?<|||||>Have figured out the solution.<|||||>Hi @subhamkhemka, what was the solution you found?<|||||>Hey @julien-c I switched to the fairseq implementation of RoBERTa, using its train.py to fine-tune from the roberta-mnli weights
transformers
1,102
closed
Wrong documentation example for RoBERTa
Documentation web page: https://huggingface.co/pytorch-transformers/model_doc/roberta.html#pytorch_transformers.RobertaModel See `Inputs -> input_ids`: `tokens: [CLS] is this jack ##son ##ville ? [SEP][SEP] no it is not . [SEP]` and `tokens: [CLS] the dog is hairy . [SEP]` are wrong examples, because `RobertaTokenizer` says that its `cls_token` is `<cls>`, not `[CLS]`. Same for `sep_token`: it is `<sep>`, not `[SEP]`. Using the tokens `[CLS]` and `[SEP]` doesn't create any special token ids, which causes errors when you try to use your encoded input in the model. Adding `add_special_tokens=True` to `encode` of course helps, but you've added 2 extra tokens `[CLS]` and `[SEP]` to the input which are not known by the model and can possibly decrease its quality. Please change `[CLS]` and `[SEP]` to `<cls>` and `<sep>`.
08-26-2019 10:59:20
08-26-2019 10:59:20
Hi, thank you for the bug report. It has been [changed](https://huggingface.co/pytorch-transformers/model_doc/roberta.html).
transformers
1,101
closed
evaluate bert on Senteval dataset
Hi I would like to evaluate bert on senteval datasets, with Senteval, I am not sure how to do it, Do you provide any evaluation toolkit to evaluate the trained models? thanks
08-26-2019 09:51:09
08-26-2019 09:51:09
This should help you: https://medium.com/dsnet/running-pytorch-transformers-on-custom-datasets-717fd9e10fe2 I did it for IMDB dataset which you should be able to customize for any other dataset.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
1,100
closed
Writing predictions in a separate output file
## 🚀 Feature Request for providing the final predictions (and probabilities of each class for classification tasks) on the validation/test set in a separate .txt or .json file
## Motivation Since many of us will be using the provided models (RoBERTa, XLNet, BERT etc.) on various other NLP tasks and we will probably be using custom evaluation functions for different tasks, it would be very helpful if the final output predictions on the val/test set can be written to a separate .txt or .json output file. For example, the original BERT tensorflow code (https://github.com/google-research/bert) writes the final predictions of GLUE tasks to an output file "eval_results.txt" and to "predictions.json" for SQuAD.
## Additional context I have printed the predictions from the function "evaluate(args, model, tokenizer, prefix="")" in line 189 of run_glue.py but I found the sequence of predictions is not the same as the input validation file. I will hopefully re-sort the predictions into the original order in run_glue.py, but I think I might have to do more than this for reading comprehension model predictions for SQuAD. I feel many users would be looking for this feature, and it might help many others if everyone doesn't have to edit the evaluate functions on their own individually. Looking forward to your kind response, and thanks for the help.
08-26-2019 01:50:17
08-26-2019 01:50:17
Solved it, apologies for raising this silly request.<|||||>Hi, how did you solve this?<|||||>Hi, I would like to know how I can do it. Thanks<|||||>You can access the label predictions through the variable "preds" (line 318, after the squeeze function). You can save it to a text file in a similar way to line 323.
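For reference, a minimal sketch of that approach inside `evaluate()` in `run_glue.py` (the file name and the `eval_output_dir` variable are placeholders; adapt them to your setup):
```python
import os

# after preds = np.argmax(preds, axis=1) in evaluate()
output_pred_file = os.path.join(eval_output_dir, "predictions.txt")  # hypothetical path
with open(output_pred_file, "w") as writer:
    for i, pred in enumerate(preds):
        writer.write("%d\t%s\n" % (i, str(pred)))
```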
transformers
1,099
closed
Missing RobertaForMultipleChoice
Hi, It seems like a `RobertaForMultipleChoice` class should exist to parallel `BertForMultipleChoice`. Or was there a particular reason it was elided?
08-26-2019 01:26:10
08-26-2019 01:26:10
Hi @malmaud, no particular reason. But it's also super easy to just implement your own classifier on top of the model (and then you have full control)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
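For anyone who wants a starting point, a rough sketch of such a classifier modelled on `BertForMultipleChoice` could look like this (untested, illustrative, not an official class):
```python
import torch.nn as nn
from pytorch_transformers import RobertaModel

class RobertaForMultipleChoiceSketch(nn.Module):
    """Minimal multiple-choice head on top of RoBERTa, mirroring BertForMultipleChoice."""
    def __init__(self, model_name='roberta-base'):
        super(RobertaForMultipleChoiceSketch, self).__init__()
        self.roberta = RobertaModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.roberta.config.hidden_size, 1)

    def forward(self, input_ids, labels=None):
        # input_ids: (batch_size, num_choices, seq_len)
        num_choices = input_ids.shape[1]
        flat_input_ids = input_ids.view(-1, input_ids.size(-1))
        pooled_output = self.roberta(flat_input_ids)[1]      # (batch * num_choices, hidden)
        logits = self.classifier(self.dropout(pooled_output))
        reshaped_logits = logits.view(-1, num_choices)        # one score per choice
        if labels is not None:
            loss = nn.CrossEntropyLoss()(reshaped_logits, labels)
            return loss, reshaped_logits
        return reshaped_logits
```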
transformers
1,098
closed
Support multiprocessing when loading pretrained weights
## 🐛 Bug

So this is an issue that probably won't crop up for *too* many people, but there's a synchronization issue in loading pretrained weights if doing so in a multiprocess setting if they are not present in the cache.

For context, I'm trying to use `torch.distributed.launch` and doing so inside a fresh docker container which doesn't have cached weights. When doing this, each process looks for the weights in the cache and starts downloading them. They then all try to copy the files to the same place. I suppose `shutil.copyfileobj` is not thread-safe, because this leads to a corrupted weight file.

A simple, easy solution would be to add a check _after_ the file is downloaded as well. So you could wrap [these lines in `pytorch_transformers/file_utils.py`](https://github.com/huggingface/pytorch-transformers/blob/df9d6effae43e92761eb92540bc45fac846789ee/pytorch_transformers/file_utils.py#L252-L262) in a second condition like this:

```python
if not os.path.exists(cache_path):
    # Download File
    if not os.path.exists(cache_path):  # second check for multiprocessing
        # Copy to cache_path
```

A better solution might be to detect the multiprocessing and only download the file once? I think `torch.distributed` could help here, but it would probably be hard to handle all the possible use cases.
08-25-2019 22:46:37
08-25-2019 22:46:37
Hi! Indeed, you have to be careful when downloading the models in a multiprocessing manner so that you do not download them several times. You can see how we do it in our examples (like this [run_glue example)](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_glue.py#L429-L438), where we manage it with the `barriers` that come with `torch.distributed`.<|||||>Yup, found that right before you commented 😄 Is there a reasonable way to include this within the download script itself? Or a place in the README to mention this? If not, feel free to close.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
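For reference, the barrier pattern used in the examples looks roughly like this (sketch; `args.local_rank` is assumed to come from `torch.distributed.launch`):
```python
import torch
from pytorch_transformers import BertModel

if args.local_rank not in [-1, 0]:
    torch.distributed.barrier()  # make sure only the first process downloads the weights

model = BertModel.from_pretrained("bert-base-uncased")

if args.local_rank == 0:
    torch.distributed.barrier()  # now the other processes can load the weights from the cache
```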
transformers
1,097
closed
modifying config
Hi, I need to add more variables to the config file while using pretrained models. I could not figure out how to add parameters to the config file. Could you please provide me with examples? Very much appreciated!
08-25-2019 18:26:51
08-25-2019 18:26:51
Hi! What kind of values are you trying to add? The configuration file is a simple python object, so you can handle it just as you would any Python object: ``` config = GPT2Config.from_pretrained("gpt2") config.values = [1, 2] print(config.values) # [1, 2] ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
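If the extra parameters should also survive a save/reload cycle, something along these lines should work (sketch; `my_extra_values` is just an illustrative attribute name):
```python
from pytorch_transformers import GPT2Config

config = GPT2Config.from_pretrained("gpt2")
config.my_extra_values = [1, 2]        # custom attribute
config.save_pretrained("./my_gpt2")    # serialized into config.json

reloaded = GPT2Config.from_pretrained("./my_gpt2")
print(reloaded.my_extra_values)        # [1, 2]
```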
transformers
1,096
closed
Temporary fix for RoBERTa's mismatch of vocab size and embedding size - issue #1091
I added an optional input argument so you can pass the starting index when adding new tokens.
08-25-2019 16:39:12
08-25-2019 16:39:12
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096?src=pr&el=h1) Report > Merging [#1096](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/df9d6effae43e92761eb92540bc45fac846789ee?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `85.71%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1096 +/- ## ========================================== + Coverage 79.61% 79.62% +<.01% ========================================== Files 42 42 Lines 6898 6900 +2 ========================================== + Hits 5492 5494 +2 Misses 1406 1406 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3V0aWxzLnB5) | `86.31% <85.71%> (+0.08%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096?src=pr&el=footer). Last update [df9d6ef...9a950be](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1096?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks @amirsaffari. We actually don't need this fix anymore. @LysandreJik has updated the vocabulary on the AWS S3 bucket to include the missing (unused) tokens (called `makeupword0000`, `makeupword0001` and `makeupword0002`). So that the vocabular now has the same length as the last token index. We're adding a test as well. If you have the latest release you can delete your cached vocabulary to download the updated version. If you have installed from master, you can just force the download and overwriting of the new vocabulary with `tokenizer = RobertaTokenizer.from_pretrained('your-model', force_download=True)` <|||||>👍
transformers
1,095
closed
some words not in xlnet vocabulary ,especially name
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I run into a problem when the XLNet tokenizer encodes names:

```python
from pytorch_transformers import *
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetModel.from_pretrained('xlnet-base-cased')
sents = ["here is a dog", "michael love mary <pad>"]
input_ids = [tokenizer.encode(sent) for sent in sents]
model(torch.tensor(input_ids))[0]
```

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: expected sequence of length 4 at dim 1 (got 8)
```

I think the reason is that XLNet cannot encode the names "michael" and "mary" as single tokens; instead it encodes "michael" as "▁mi", "cha", "el" and "mary" as "m", "ary" respectively. As a result, `input_ids` contains two id lists of different lengths, so feeding `torch.tensor(input_ids)` to `XLNetModel` raises the `ValueError` above. How can I fix it?

```python
input_ids
# [[193, 27, 24, 2288], [12430, 2409, 530, 564, 17, 98, 1449, 5]]
[tokenizer.tokenize(sent) for sent in sents]
# [['▁here', '▁is', '▁a', '▁dog'], ['▁mi', 'cha', 'el', '▁love', '▁', 'm', 'ary', '<pad>']]
```
08-25-2019 03:10:49
08-25-2019 03:10:49
Hi! XLNet uses a SentencePiece tokenizer which splits the words into subword units. In your case, it splits the two examples in different sized sequences, which can't be parsed as a Tensor which requires an input matrix (and not a list of lists). You should pad your sequences after they have been tokenized so that they all are of equal size. Then they can be converted to a tensor and fed to the model.<|||||>> Hi! > > XLNet uses a SentencePiece tokenizer which splits the words into subword units. In your case, it splits the two examples in different sized sequences, which can't be parsed as a Tensor which requires an input matrix (and not a list of lists). > > You should pad your sequences after they have been tokenized so that they all are of equal size. Then they can be converted to a tensor and fed to the model. Thanks for your reply. My purpose is to fed each word's contextual embedding to transformer layer to obtain sentence embedding However , by this way I can't get each word's contextual embedding , is there any solution to fix this problem or just use xlnet function to get sentence embedding?<|||||>If you're looking to create sentence embeddings based on transformers, I'd like to redirect you to [this issue](https://github.com/huggingface/pytorch-transformers/issues/876) that discusses exactly this. It discusses the use of the [UKPLab sentence-transformers library](https://github.com/UKPLab/sentence-transformers) which is built on top of our library and which can provide sentence embeddings based on XLNet.
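A minimal padding sketch for the example above (the original XLNet setup pads on the left, so treat this right-padded version as an illustration only):
```python
import torch
from pytorch_transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetModel.from_pretrained('xlnet-base-cased')

sents = ["here is a dog", "michael love mary"]
encoded = [tokenizer.encode(sent) for sent in sents]

pad_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
max_len = max(len(ids) for ids in encoded)
input_ids = torch.tensor([ids + [pad_id] * (max_len - len(ids)) for ids in encoded])
attention_mask = torch.tensor(
    [[1.0] * len(ids) + [0.0] * (max_len - len(ids)) for ids in encoded])

hidden_states = model(input_ids, attention_mask=attention_mask)[0]  # (batch, seq_len, hidden)
```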
transformers
1,094
closed
Performing MRPC task after Fine Tuning
## ❓ Questions & Help
Sorry if this is really basic; I'm new to BERT and machine learning in general. I want to perform the MRPC task. I went ahead and did the fine-tuning and got the files/model okay. But now that I have this fine-tuned model, I'm confused how to do the actual MRPC task (i.e. given two sentences, produce a 1 if they are paraphrases or a 0 if they are not). I think that I generally have the setup correct (see the code below), but my main problem is what to do with the tuple that is produced from the model. How do you turn that tuple output into the desired 0 or 1? Thank you in advance for the help!

Code:
```python
import torch
from pytorch_transformers import (BertForSequenceClassification, BertTokenizer)

# Creating two sentences to compare
sen1 = "I made a discovery."
sen2 = "I discovered something."

# Creating the tokenizer and model
fine_tuned_model_loc = '../pytorch-transformers/tmp/MRPC'
tokenizer = BertTokenizer.from_pretrained(fine_tuned_model_loc)
model = BertForSequenceClassification.from_pretrained(fine_tuned_model_loc)

# Prepare tokenized input
sen1_tokens = ["[CLS]"] + tokenizer.tokenize(sen1) + ["[SEP]"]
sen2_tokens = tokenizer.tokenize(sen2) + ["[SEP]"]
indexed_tokens = tokenizer.convert_tokens_to_ids(sen1_tokens + sen2_tokens)
token_type_ids = [0]*len(sen1_tokens) + [1]*len(sen2_tokens)
attention_mask = [1]*len(sen1_tokens + sen2_tokens)

# Turning things into a tensor
tokens_tensor = torch.tensor([indexed_tokens])
ids_tensor = torch.tensor([token_type_ids])
attention_tensor = torch.tensor([attention_mask])

# Run the model on the given info
model.eval()
with torch.no_grad():
    output = model(input_ids=tokens_tensor, token_type_ids=ids_tensor,
                   attention_mask=attention_tensor)
```
08-24-2019 19:25:53
08-24-2019 19:25:53
You can refer to the code for running inference which I had written for sentiment classification. HTH. https://github.com/nikhilno1/nlp_projects/blob/master/pytorch-transformers-extensions/examples/run_inference.py<|||||>Ah, that was exactly what I needed; thank you! One final thing, though: I'm still a bit confused on exactly what the "labels" are that you put into the model. Looking at your code, it seems like they can either have a value of "0" or "1", but I'm confused when it should be one over the other. Or does that not really matter when you are doing inference?<|||||>It does not matter. It is just a dummy input.<|||||>Alright! Thanks again for all of your help!
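To turn that output tuple into the desired 0/1, something along these lines should work as a continuation of the snippet above (the class order follows the MRPC label list used during fine-tuning):
```python
import torch.nn.functional as F

logits = output[0]                               # shape (1, 2): one score per class
probs = F.softmax(logits, dim=1)                 # class probabilities
prediction = torch.argmax(probs, dim=1).item()   # 0 or 1
print(prediction, probs)
```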
transformers
1,093
closed
fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at7getTypeERKNS_6TensorE
## 🐛 Bug

<!-- Important information -->

Model I am using (Bert, XLNet....): BERT

Language I am using the model on (English, Chinese....): English

The problem arise when using:
* [ ] the official example scripts: (give details)
* [ * ] my own modified scripts: (give details)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ * ] my own task or dataset: (give details)

## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.

<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->

## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->

## Environment
* OS: ubuntu 18.04
* Python version: 3.6
* PyTorch version: 1.1
* PyTorch Transformers version (or branch): master
* Using GPU ? Yes
* Distributed of parallel setup ? Yes
* Any other relevant information: No

## Additional context
<!-- Add any other context about the problem here. -->

```
Traceback (most recent call last):
  File "./src/run_experiments.py", line 97, in <module>
    run_all_tasks(parameters.config)
  File "/workspace/code/src/utils/util.py", line 37, in wrapped_func
    func(*args, **kwargs)
  File "./src/run_experiments.py", line 84, in run_all_tasks
    trainer = Trainer(opt)
  File "/workspace/code/src/trainer.py", line 76, in __init__
    self._model = DecisionMaker(self._opt["model"], numb)
  File "/workspace/code/src/model.py", line 43, in __init__
    self._intt = Intuition(self._opt["intt"])
  File "/workspace/code/src/intteng/intt.py", line 16, in __init__
    config=config
  File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/modeling_utils.py", line 403, in from_pretrained
    model = cls(config)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 650, in __init__
    self.embeddings = BertEmbeddings(config)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 253, in __init__
    self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
  File "/opt/conda/lib/python3.6/site-packages/apex/normalization/fused_layer_norm.py", line 127, in __init__
    fused_layer_norm_cuda = importlib.import_module("fused_layer_norm_cuda")
  File "/opt/conda/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 658, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 571, in module_from_spec
  File "<frozen importlib._bootstrap_external>", line 922, in create_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
ImportError: /opt/conda/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at7getTypeERKNS_6TensorE
```
08-24-2019 09:36:18
08-24-2019 09:36:18
I am afraid we won't be able to help you if you do not provide any information on what caused the problem.<|||||>> I am afraid we won't be able to help you if you do not provide any information on what caused the problem.

I am sorry that I provided incomplete information. I have solved this problem by changing the CUDA version.<|||||>@xijiz which version did you change to? And which version did you use? Because I have this same issue with CUDA 10
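For anyone debugging the same `undefined symbol` error: it typically means apex's compiled extension no longer matches the installed PyTorch/CUDA build, so apex has to be rebuilt after the versions are aligned. A quick sanity-check sketch:
```python
import torch

print(torch.__version__)          # PyTorch build apex must be compiled against
print(torch.version.cuda)         # CUDA version this PyTorch was built with
print(torch.cuda.is_available())
# If these don't match the toolkit apex was built with, reinstall apex
# after fixing the mismatch; the undefined-symbol error should disappear.
```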
transformers
1,092
closed
Added cleaned configuration properties for tokenizer with serialization - improve tokenization of XLM
This PR improves the tokenization of XLM. It's mostly the same as the [preprocessing](https://github.com/facebookresearch/XLM/blob/master/tools/tokenize.sh) in the original XLM. This PR also adds `use_lang_emb` to the config of the XLM model, which makes adding the newly released [XLM-17 & XLM-100](https://github.com/facebookresearch/XLM#pretrained-cross-lingual-language-models) easier, since both of them don't have language embeddings.

Details on tokenization:
- Introduced API change: `XLMTokenizer.tokenize(self, text)` becomes `XLMTokenizer.tokenize(text, lang='en')`
- New dependency:
  - [sacremoses](https://github.com/alvations/sacremoses): port of Moses
- New optional dependencies:
  - [pythainlp](https://github.com/PyThaiNLP/pythainlp): Thai tokenizer
  - [kytea](https://github.com/chezou/Mykytea-python): Japanese tokenizer, wrapper of [KyTea](https://github.com/neubig/kytea) (needs external C++ compilation), used by the newly released XLM-17 & XLM-100
  - [jieba](https://github.com/fxsjy/jieba): Chinese tokenizer \*

\* XLM used the Stanford Segmenter. However, the wrapper (`nltk.tokenize.stanford_segmenter`) is slow due to JVM overhead, and it will be deprecated. Jieba is a lot faster and pip-installable, but there is some mismatch with the Stanford Segmenter. A workaround could be to have an argument that allows users to segment the sentence by themselves and bypass the segmenter. As a reference, I also include `nltk.tokenize.stanford_segmenter` in this PR.

An example of the tokenization differences can be found [here](https://colab.research.google.com/drive/1nY930H2dhz3IlFvDgU9ycgfm2-DpvRcT).
08-24-2019 01:52:48
08-24-2019 01:52:48
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092?src=pr&el=h1) Report > Merging [#1092](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/df9d6effae43e92761eb92540bc45fac846789ee?src=pr&el=desc) will **increase** coverage by `0.09%`. > The diff coverage is `78.2%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1092 +/- ## ========================================== + Coverage 79.61% 79.71% +0.09% ========================================== Files 42 42 Lines 6898 7010 +112 ========================================== + Hits 5492 5588 +96 - Misses 1406 1422 +16 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.73% <100%> (+0.07%)` | :arrow_up: | | [pytorch\_transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2JlcnQucHk=) | `95.63% <100%> (+0.79%)` | :arrow_up: | | [pytorch\_transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3V0aWxzLnB5) | `86.49% <100%> (+0.26%)` | :arrow_up: | | [pytorch\_transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbS5weQ==) | `83.4% <74.43%> (+0.33%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092?src=pr&el=footer). Last update [df9d6ef...3871b8a](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1092?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @shijie-wu, So I've taken advantage of this PR to add a clean mechanism to set, save and reload tokenizer configurations. This should fix in particular a recurring issue mentioned in #1158 and #1026 (failing to reload the lower casing configuration of the tokenizer) but more generally this is essential now for XLM's more complex language configuration. Hope you don't mind me highjacking the PR.<|||||>Ok I think this is good to go. Let's merge it.
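With this PR's API change, usage would look roughly like this (sketch based on the signature described above; `xlm-mlm-enfr-1024` is one of the existing XLM checkpoints):
```python
from pytorch_transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-enfr-1024')
en_tokens = tokenizer.tokenize("Hello, how are you?", lang='en')
fr_tokens = tokenizer.tokenize("Bonjour, comment allez-vous ?", lang='fr')
print(en_tokens, fr_tokens)
```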
transformers
1,091
closed
Problem with mask token id in RoBERTa vocab
Hi! While looking into RoBERTa vocab files I came across the following issue: There are only 50262 words in the vocab, but the `<mask>` token is assigned to index 50264. In most cases, this will not lead to any problems, because the embedding matrix has 50265 embeddings. However, if I try adding several new tokens to the vocab, their indices will start from len(tokenizer) = 50262, and two different tokens will end up assigned to the same index. Here is a small example:

```python
from pytorch_transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
print(len(tokenizer))  # length is 50262

tokenizer.add_tokens(['token_50262', 'token_50263', 'token_50264'])
print(tokenizer.convert_tokens_to_ids(['token_50264']))  # this is 50264
print(tokenizer.convert_tokens_to_ids(['<mask>']))  # this is also 50264
```

Update: I've checked RoBERTa's vocab in fairseq and they have tokens `madeupword0000`, `madeupword0001`, `madeupword0002` at indices 50261-50263. Apparently, they were added to make the vocab size a multiple of 8, but for some reason it was done before adding the `<mask>` token to the vocab.
08-23-2019 18:54:38
08-23-2019 18:54:38
Just encountered this. You can verify the mismatch in dictionary sizes with: ```Python import pytorch_transformers as ptt tokeniser = ptt.RobertaTokenizer.from_pretrained('roberta-base') encoder = ptt.RobertaModel.from_pretrained('roberta-base') print(len(tokeniser)) print(encoder.embeddings.word_embeddings.weight.shape) ``` which right now results in ```Python 50262 torch.Size([50266, 768]) ```<|||||>Added [a temporary fix](https://github.com/huggingface/pytorch-transformers/pull/1096) where you can pass the starting index for ids ```Python import pytorch_transformers as ptt tokeniser = ptt.RobertaTokenizer.from_pretrained('roberta-base') encoder = ptt.RobertaModel.from_pretrained('roberta-base') print(len(tokeniser)) print(encoder.embeddings.word_embeddings.weight.shape) ids_start = encoder.embeddings.word_embeddings.weight.shape[0] special_tokens = ['<t1>', '<t2>', '<t3>', '<t4>'] num_added_tokens = tokeniser.add_special_tokens({'additional_special_tokens': special_tokens}, ids_start=ids_start) encoder.resize_token_embeddings(ids_start + num_added_tokens) print(len(tokeniser)) print(encoder.embeddings.word_embeddings.weight.shape) ``` ```Python 50262 torch.Size([50265, 768]) 50266 torch.Size([50269, 768]) ``` Now the new tokens get their unique ids and id for `<mask>` stays the same as before.<|||||>Hi, thanks for the bug report! There was indeed a problem with the tokenizer and missing indices. I added the missing tokens to the vocab file this morning, so you shouldn't have these problems anymore. Let me know if you still have issues.<|||||>As mentioned in #1096, this should now be definitively fixed. @LysandreJik has updated the vocabulary on the AWS S3 bucket to include the missing (unused) tokens (called makeupword0000, makeupword0001 and makeupword0002). So that the vocabulary now has the same length as the last token index. We're adding a test as well. If you have the latest release you can delete your cached vocabulary to download the updated version. If you have installed from master, you can just force the download and overwriting of the new vocabulary with `tokenizer = RobertaTokenizer.from_pretrained('your-model', force_download=True)`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
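A quick consistency check after pulling the updated vocabulary, plus the resize step needed when adding new tokens (sketch):
```python
from pytorch_transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('roberta-base', force_download=True)
model = RobertaModel.from_pretrained('roberta-base')
assert len(tokenizer) == model.embeddings.word_embeddings.weight.shape[0]

# When adding new tokens, resize the embedding matrix to keep the two in sync
tokenizer.add_tokens(['<new_token_1>', '<new_token_2>'])
model.resize_token_embeddings(len(tokenizer))
```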
transformers
1,090
closed
No such file or directory: '..\\VERSION'
## 🐛 Bug <!-- Important information --> While trying to install `pytorch-transformers` I get the following error: ``` ERROR: Command errored out with exit status 1: command: 'c:\users\pawel.lonca\appdata\local\programs\python\python35\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\PAWEL~1.LON\\AppData\\Local\\Temp\\pip-install-b5eog20_\\sentencepiece\\setup.py'"'"'; __file__='"'"'C:\\Users\\PAWEL~1.LON\\AppData\\Local\\Temp\\pip-install-b5eog20_\\sentencepiece\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info cwd: C:\Users\PAWEL~1.LON\AppData\Local\Temp\pip-install-b5eog20_\sentencepiece\ Complete output (7 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\PAWEL~1.LON\AppData\Local\Temp\pip-install-b5eog20_\sentencepiece\setup.py", line 29, in <module> with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f: File "c:\users\pawel.lonca\appdata\local\programs\python\python35\lib\codecs.py", line 895, in open file = builtins.open(filename, mode, buffering) FileNotFoundError: [Errno 2] No such file or directory: '..\\VERSION' ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ``` Google search suggests that the recommended solution is upgrading the `setuptools` but it didn't work in my case. ## Environment * OS: Windows 10 * Python version: 3.5.2 * PyTorch version: 1.2.0+cpu
08-23-2019 13:32:42
08-23-2019 13:32:42
The same bug occurs when installing.<|||||>This looks like an issue with sentencepiece and python 3.5. Do you want to have a look there maybe? https://github.com/google/sentencepiece<|||||>> This looks like an issue with sentencepiece and python 3.5. Do you want to have a look there maybe? https://github.com/google/sentencepiece Python version maybe the issue. I switched to python 3.6 and successfully installed it.<|||||>I'm getting the same issue. Changing to 3.6 or 3.7 did not fix it.<|||||>Download the wheel file from https://github.com/google/sentencepiece/releases for your python version and install it with pip install sentencepiece-xxx-cpxx-xx.whl<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> Download the wheel file from https://github.com/google/sentencepiece/releases for your python version and install it with > pip install sentencepiece-xxx-cpxx-xx.whl this trick works fantastically, many thanks!
transformers
1,089
closed
change layernorm code to pytorch's native layer norm
The current code basically recreates pytorch's native [LayerNorm](https://pytorch.org/docs/stable/nn.html#layernorm) code. The only difference is that the default eps in the pytorch function is 1e-5 instead of 1e-12. PyTorch's native version is optimized for cudnn so it should be faster than this version.
08-23-2019 10:18:23
08-23-2019 10:18:23
# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089?src=pr&el=h1) Report > Merging [#1089](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e00b4ff1de0591d5093407b16e665e5c86028f04?src=pr&el=desc) will **increase** coverage by `0.04%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #1089 +/- ## ========================================== + Coverage 79.61% 79.66% +0.04% ========================================== Files 42 42 Lines 6898 6898 ========================================== + Hits 5492 5495 +3 + Misses 1406 1403 -3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pytorch\_transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.98% <ø> (ø)` | :arrow_up: | | [pytorch\_transformers/file\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZmlsZV91dGlscy5weQ==) | `73.94% <0%> (+2.11%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089?src=pr&el=footer). Last update [e00b4ff...e13465f](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1089?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I think your PR misses the point though? The models need to be 100% accurate reproductions of the Tensorflow code, right down to differences in eps values. Otherwise if you run the activations and get different results, you don't know whether there's a bug. You also can't reason about different results, and whether they matter.<|||||>@honnibal but looking at the code, every call of `BertLayerNorm` explicitly sets the eps, thus the actual values used in the BERT models does not change. Only the default value, but this default value is never used. Additionally, if APEX is available then you use `FusedLayerNorm`, which uses the [same default eps](https://github.com/NVIDIA/apex/blob/master/apex/normalization/fused_layer_norm.py#L70) of 1e-5 as the pytorch default `LayerNorm`. So you already have an inconsistency, but you solved this by explicitly setting the eps every time you use the layer.<|||||>Oh right! Fair point, sorry.<|||||>Yes @dhpollack is right we can switch to PyTorch official LayerNorm. What made me reimplement the LayerNorm when I was working on Bert last year was actually a typo in PyTorch's doc formula for computing the LayerNorm which indicated, at that time, that the epsilon was added to the square root of the variance instead of being added to the variance it-self. This typo is now corrected in https://github.com/pytorch/pytorch/pull/8545. 
Everything is right and we can drop these custom LayerNorms.<|||||>Are we sure the names of the parameters are the same though? (`eps` vs. `variance_epsilon`)
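For reference, the swap itself is small; a sketch of what the module definition becomes once the custom class is dropped (call sites keep passing the epsilon explicitly, so PyTorch's 1e-5 default never applies):
```python
import torch.nn as nn

# Replace the custom implementation with PyTorch's native, cudnn-optimized LayerNorm
BertLayerNorm = nn.LayerNorm

# Existing call sites stay unchanged, e.g. with hidden_size=768 for bert-base:
layer_norm = BertLayerNorm(768, eps=1e-12)
```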
transformers
1,088
closed
❓ Why in `run_squad.py` using XLNet, CLS token is not set at the end ?
## ❓ Questions & Help
[This line](https://github.com/huggingface/pytorch-transformers/blob/e00b4ff1de0591d5093407b16e665e5c86028f04/examples/run_squad.py#L292) of the file `run_squad.py` creates the features for the dataset. No matter which model is used (BERT or XLNet), the function will create the format:

> CLS A SEP B SEP

But for the XLNet case, we want:

> A SEP B SEP CLS

---

**Isn't it wrong? Did I miss something?**
08-23-2019 01:53:29
08-23-2019 01:53:29
Humm I think you are right. The SquAD example looks a bit broken in pytorch-transformers, we will have to review it @LysandreJik.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
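For reference, a sketch of the XLNet-style layout discussed above (the segment ids and the CLS-at-the-end convention follow the original XLNet code; this helper is illustrative, not the actual run_squad.py fix):
```python
def build_xlnet_pair(tokens_a, tokens_b, sep_token="<sep>", cls_token="<cls>"):
    """A <sep> B <sep> <cls>, with a dedicated segment id (2) for the CLS token."""
    tokens = tokens_a + [sep_token] + tokens_b + [sep_token, cls_token]
    segment_ids = [0] * (len(tokens_a) + 1) + [1] * (len(tokens_b) + 1) + [2]
    return tokens, segment_ids

tokens, segment_ids = build_xlnet_pair(["▁what", "▁is", "▁it"], ["▁the", "▁context"])
```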